Adaptive Message Passing: A General Framework to Mitigate Oversmoothing, Oversquashing, and Underreaching

Federico Errica [email protected] NEC Laboratories Europe
Henrik Christiansen [email protected] NEC Laboratories Europe
Viktor Zaverkin [email protected] NEC Laboratories Europe
Takashi Maruyama [email protected] NEC Laboratories Europe
Mathias Niepert [email protected] University of Stuttgart & NEC Laboratories Europe
Francesco Alesiani [email protected] NEC Laboratories Europe

Long-range interactions are essential for the correct description of complex systems in many scientific fields. The price to pay for including them in the calculations, however, is a dramatic increase in the overall computational costs. Recently, deep graph networks have been employed as efficient, data-driven surrogate models for predicting properties of complex systems represented as graphs. These models rely on a local and iterative message passing strategy that should, in principle, capture long-range information without explicitly modeling the corresponding interactions. In practice, most deep graph networks cannot really model long-range dependencies due to the intrinsic limitations of (synchronous) message passing, namely oversmoothing, oversquashing, and underreaching. This work proposes a general framework that learns to mitigate these limitations: within a variational inference framework, we endow message passing architectures with the ability to freely adapt their depth and filter messages along the way. With theoretical and empirical arguments, we show that this simple strategy better captures long-range interactions, by surpassing the state of the art on five node and graph prediction datasets suited for this problem. Our approach consistently improves the performances of the baselines tested on these tasks. We complement the exposition with qualitative analyses and ablations to get a deeper understanding of the framework's inner workings.

§ INTRODUCTION

Complex systems, characterized by interacting entities and emergent behavior, are a cornerstone of research in many scientific disciplines.
Mathematical models of such systems should consider the effects of both short- and long-range interactions between entities, and the latter are crucial to describe the system's behavior with the highest degree of precision. For instance, in computational physics, it is well-known that electrostatic and gravitational interactions decay very slowly with distance <cit.>; in computational chemistry and material sciences, the accurate modeling of non-local effects, such as non-bonded interactions in molecular systems, is necessary to correctly estimate properties like the free energy <cit.>; in biology, disrupting long-range interactions in mRNA can inhibit splicing <cit.>; in immunology, the distant interactions between a major histocompatibility complex and regions of the T-cell receptor molecule correlate with the binding process of these two compounds <cit.>. A graph is a suitable data structure for representing complex systems of entities intertwined together <cit.>. In a graph, an entity corresponds to a node and a pairwise relationship to an edge. From a computational perspective, modeling long-range interactions often implies that graphs have dense connectivity: every entity is connected with many others. Dense connectivity can severely impair the efficient simulation of molecular systems <cit.>, as in the worst case, the number of interactions is quadratic in the number of entities of the associated graph. Researchers have tried to address these computational problems for a long time, e.g., by cleverly constructing bounds that lead to fast decisions of acceptance or rejection in Monte Carlo simulations <cit.> or by using Machine Learning (ML) models that act as accurate surrogates for computationally demanding simulations <cit.>. Some of these methods rely on Deep Graph Networks (DGNs) <cit.>, deep learning models that learn from graphs of arbitrary topology. Most DGNs implement a message passing paradigm of computation, where nodes repeatedly exchange messages with each other to propagate information across the graph and compute their embeddings. The number of message passing rounds corresponds to the depth of the architecture. In this sense, the graph is both the input and the computational medium used to make predictions. Despite its long-standing history <cit.>, research in graph representation learning has gained more traction in recent years, and there are still many open questions. For instance, it is well-known that message passing architectures are ineffective at capturing long-range dependencies, thus reducing their impact in the scientific fields mentioned before. Researchers relate this problem to at least three others, namely oversmoothing <cit.>, oversquashing <cit.>, and underreaching <cit.>. Briefly, oversmoothing means that the node embeddings of a DGN tend to converge to the same value as the depth increases. In contrast, oversquashing relates to the bottleneck of compressing a (possibly) exponential amount of information from neighboring nodes into a single node embedding. Finally, underreaching refers to the inability of DGNs of depth K to propagate a node's information to more than K hops away. This work provides a general framework for improving the ability of any message passing architecture to capture long-range dependencies; we extend the general message passing formulation to propagate relevant information across the graph. At the heart of our proposal is the idea to let DGNs learn how many layers of message passing to use and when to send specific messages.
As a matter of fact, one typically observes oversmoothing and oversquashing when too many messages are propagated, hence learning which messages to discard is important. At the same time, solving underreaching requires a sufficient number of message passing rounds to be performed, and it is crucial to learn this information from the task rather than guessing it via expensive grid searches. In light of these characteristics, we call our approach Adaptive Message Passing (AMP). Our contributions are multi-faceted. We extend a recent variational framework for unbounded depth networks <cit.> to the processing of graphs, and we introduce new families of distributions with specific properties to overcome previous limitations. We also propose a soft message filtering scheme to prune irrelevant information for the task at hand and favor the propagation of messages to distant regions of the input graph. Theoretically, we show how to propagate a message unchanged between any two connected nodes in the graph; thus, underreaching can be mitigated. We complement this result with a discussion on oversmoothing and oversquashing, highlighting that some recently proposed metrics may not convey the complete picture. Empirically, AMP significantly and consistently improves the performances of message passing architectures on five well-known node and graph prediction datasets where long-range information is important. Qualitative analyses provide further evidence that AMP mitigates oversmoothing and oversquashing as well. Finally, we conduct an in-depth study of our approach via ablations and visualizations of the models' predictions. The rest of the manuscript is organized as follows. Section <ref> reviews related works in the literature for long-range dependencies; Section <ref> presents the technical details of AMP; in Section <ref> we discuss how our approach can mitigate oversmoothing, oversquashing, and underreaching; Section <ref> introduces new families of distributions to be used within the framework of AMP; Section <ref> describes the experimental setup in detail; Section <ref> presents our empirical results; Section <ref> discusses limitations and future works; we give our concluding remarks in the final Section <ref>.

§ RELATED WORK

This section positions our work in the context of previous contributions to the literature.

Oversquashing. There are many methods that attempt to address the oversquashing problem with the goal of better capturing long-range dependencies <cit.>. There is agreement that modifying the message passing scheme leads to improved performances; in this sense, the graph structure does not exactly match the computational graph used to compute the node embeddings. Some works learn how a node should completely stop propagating a message in a fixed-depth architecture <cit.> or whether it should only listen, isolate, or receive/broadcast its own message <cit.>. Similarly, one can learn to sample edges at each message passing layer according to some learned parametrization <cit.> or have a completely asynchronous message passing <cit.>. Our work differs from these works as we apply a learned (soft) filtering to all existing messages. Another idea is to modify message passing to avoid backtracking of messages to the source node, to achieve less redundancy of information <cit.>. While this choice proves effective at several tasks, it is still an open question whether it is always the best choice for the task at hand.
In attention-based approaches <cit.>, an edge filter is computed using some non-linear relationship between the embeddings of the source and destination nodes. This can introduce a severe computational burden as the function needs to be applied to all edges. Similarly, GNN-FiLM <cit.> learns a feature-wise linear modulator that depends on the destination node and modulates the magnitude of all incoming messages. On the other hand, rewiring approaches try to alter the graph connectivity rather than the message passing operation. This action is meant to increase the sensitivity <cit.> of a node with respect to another, and it has been theoretically linked to the oversquashing problem. Some recent works try to preserve locality and sparsity of the rewiring process <cit.> or dynamically rewire the graph based on the layers <cit.>. In contrast, others take a probabilistic approach to rewiring based on sampled sub-graphs <cit.>. Recently, a critical perspective on the effectiveness of rewiring approaches has also been given <cit.>. Finally, we mention ordinary differential equation-based message passing approaches, which provably preserve information regardless of the depth of the network <cit.> and have shown great results on datasets aimed at capturing long-range dependencies.

Oversmoothing. Oversmoothing is perhaps one of the first problems that emerged empirically and was then analyzed theoretically <cit.>. Not surprisingly, one practical solution to oversmoothing is dropping edges to reduce the overall flow of messages and, thus, avoid the convergence of all embeddings to the same value <cit.>. Another well-known solution to alleviate oversmoothing is to employ skip/residual connections <cit.>, which consists of summing the representations learned at deeper layers with those of previous ones. Similarly to what is done in this work, the concatenation of node representations across layers is also a way to circumvent oversmoothing, which has been adopted in neural and probabilistic models to improve the downstream performances on several node- and graph-related tasks <cit.>. Instead, an orthogonal research direction considers implicit neural networks for graphs that correspond to infinite-depth models and seem to be able to capture long-range dependencies <cit.>. These models simulate synchronous message passing with a potentially infinite number of message-propagation steps, and some of them appear to be empirically robust to the oversmoothing problem.

Adaptive Architectures. The last part of this section is dedicated to works that try to learn the architecture of the model during training. Our work is inspired by the unbounded depth networks (UDNs) of <cit.>, who proposed a variational framework for learning the depth in deep neural networks. In the graph domain, the first approach to learning the depth of a DGN was proposed by <cit.>, who applied the cascade correlation algorithm <cit.> to learn a proper depth for the task.
In the field of graph representation learning, other works attempted to learn the width of the representation of each message passing layer by exploiting Bayesian non-parametric models <cit.>, which allows one to save time and memory when building deeper probabilistic DGNs. Finally, it is important to note that these works, including this manuscript, are all orthogonal to the popular field of neural architecture search <cit.>: the former attempt to dynamically modify the architecture during learning, whereas neural architecture search approaches find smarter ways to carry out a grid search. An advantage of adaptive approaches is that they can greatly reduce the time and computational costs of performing a hyper-parameter search.

§ ADAPTIVE MESSAGE PASSING

This section introduces the variational framework of AMP. We extend the work of <cit.> to learn unbounded depth architectures for graphs, and we introduce a message-filtering mechanism to prune the information exchanged between nodes. Notably, AMP's behavior ranges from the asynchronous message passing of <cit.> to classical (synchronous) message passing <cit.>.

Definitions. We consider directed attributed graphs g=(𝒱, ℰ, 𝒳, 𝒜), each consisting of a set of nodes 𝒱={1,…,n_g} that are connected together via a set of oriented edges ℰ={(u,v) | u,v ∈𝒱}. When a graph is undirected, each edge is converted into two oriented ones, that is, (u,v) and (v,u). The set 𝒳={x_v ∈ℝ^d | v ∈𝒱} defines the d-dimensional attribute vector of each node in the graph, and similarly for the d'-dimensional edge attributes belonging to the set 𝒜={a_uv∈ℝ^d' | (u,v) ∈ℰ}. Finally, we define the neighborhood of a node v as the set of incoming edges 𝒩_v = { u | (u,v) ∈ℰ}. As outlined in previous works <cit.>, each attributed graph can be seen as a realization of some random variable (RV) 𝒢 with support in the graph domain. Similarly to classical machine learning, we do not have access to the data distribution p(𝒢); rather, we are interested in modeling the conditional distribution p(𝒯=Y | 𝒢=g), where Y stands for the target value(s) to be predicted, depending on the nature of the task.

Multi-output family of architectures. AMP produces deep graph networks of potentially infinite depth, where each layer is comprised of a message passing operation and a readout mapping <cit.>. Without loss of generality, a message passing layer ℓ > 1 can compute node embeddings h^ℓ_v, ∀ v ∈𝒱 as follows:

h_v^ℓ = ϕ^ℓ(h_v^ℓ-1, Ψ({ψ^ℓ(h_u^ℓ-1, a^ℓ_uv) | u ∈𝒩_v})),

where ϕ^ℓ and ψ^ℓ are learnable functions and Ψ is a permutation-invariant function that aggregates the embeddings of v's neighbors computed at the previous layer. When ℓ = 1, h_v^1 is obtained by applying a learnable transformation to the node v's features h_v^0=x_v. Instead, the readout mapping depends on the task: if node-wise predictions need to be made, then the readout implements a learnable map ŷ_v^ℓ = ρ^ℓ(h_v^ℓ) from h_v^ℓ to a node prediction ŷ_v^ℓ; on the other hand, in the case of whole-graph predictions, a global aggregation has to be performed first:

ŷ^ℓ = ρ_2^ℓ(Φ({ρ_1^ℓ(h_v^ℓ) | v ∈𝒱})),

where ρ_1, ρ_2 denote learnable functions and Φ is a global pooling function that aggregates all node representations computed at a given layer ℓ. In the following, we describe how to learn a fixed depth for the task at hand and modify the message passing scheme of Equation <ref> to allow messages to be adaptively filtered. The learnable functions ϕ^ℓ, ψ^ℓ, ρ_1^ℓ, ρ_2^ℓ are typically implemented as 1-hidden-layer MLPs parametrized by Θ_ℓ.
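To fix ideas, the sketch below implements one such layer with a per-layer readout in PyTorch. It is our minimal illustration, not the authors' code: ψ and ϕ are 1-hidden-layer MLPs as stated above, Ψ is assumed to be a sum, edge attributes are omitted for brevity, and all module names are ours.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One multi-output building block: a message passing update plus a readout."""

    def __init__(self, hidden_dim, out_dim):
        super().__init__()
        # psi^l and phi^l as 1-hidden-layer MLPs (edge attributes omitted)
        self.psi = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, hidden_dim))
        self.phi = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, hidden_dim))
        self.rho = nn.Linear(hidden_dim, out_dim)  # node-wise readout rho^l

    def forward(self, h, edge_index):
        src, dst = edge_index                      # one column per oriented edge (u, v)
        msg = self.psi(h[src])                     # psi^l applied to sender embeddings
        agg = torch.zeros_like(h).index_add_(0, dst, msg)  # Psi: sum over incoming messages
        h_new = self.phi(torch.cat([h, agg], dim=-1))      # phi^l(h_v, aggregated neighborhood)
        return h_new, self.rho(h_new)              # embeddings and per-layer prediction
```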
AMP formulation. Figure <ref> represents the probabilistic graphical model associated with AMP, where white and blue circles represent latent and observed RVs, respectively. Given a dataset 𝒟, Θ_ℓ follows a distribution over the parameters of layer ℓ of an infinite-depth network, and ℒ follows a distribution over layers and is used to truncate the network to a finite depth. For the i-th graph g_i, ℱ_i follows a distribution over soft message filters F_i ∈ [0,1]^|𝒱| × L × d. In particular, given a node v and a layer ℓ, the d-dimensional vector F_i(v,ℓ) specifies how much of h^ℓ_v has to be propagated through the outgoing edges in the next message passing layer. The generative model of AMP is then written as

θ ∼ p(Θ) = ∏^∞_ℓ=1 p(Θ_ℓ),  L ∼ p(ℒ),  F_i | g_i, L, θ ∼ p(ℱ_i | g_i, L, θ),  Y_i | g_i, F_i, L, θ ∼ p(𝒯_i; Ω_L(g_i, F_i, θ)),

with Ω_L being the infinite DGN truncated at depth L, whose output parametrizes the target distribution. The reader can refer to <cit.> for an in-depth discussion of the design choices of this generative model, especially as regards the independence of the priors for an efficient approximation of the posterior distribution. In Figure <ref>, we visually represent the effect that message filtering has on the propagation of messages across layers of the DGN. A graph of seven nodes (a) is provided, and the message filtering scheme (b) has been discretized in the interest of simplicity. For instance, node 1 will send its message only at message passing layer 1, nodes 2 and 3 will never send a message, and node 4 will send a message only at layer 2. Compared to standard message passing (c), where all nodes send their messages at each layer, AMP implements a learnable filtering (d), where a subset of all possible messages is propagated at each layer in a way that depends on the task to be solved. In Section <ref>, we discuss the implications of this adaptive message filtering scheme in mitigating the well-known issues of oversquashing, underreaching, and oversmoothing. Notably, message filtering does not introduce a significant computational burden since it has linear complexity in the number of nodes.

Choice of the variational distributions. To find a suitable parametrization that maximizes the likelihood of the dataset 𝒟, one needs to compute the joint posterior distribution of the latent variables. Such a computation is intractable for our graphical model, so we resort to a variational inference <cit.> approach where the learnable variational distribution q(θ, L, F_i | g_i, Y_i) factorizes as q(θ | L ; ν) q(L ; λ) q(F_i | g_i, L, θ), and ν, λ are learnable vectors of parameters. We also assume that the variational posterior does not depend on Y_i (so we can drop the term) to allow for predictions on unseen graphs. Below, we describe how to compute each factor so that the computation of the evidence lower bound (ELBO) becomes efficient. The distribution q(L ; λ) needs to belong to a variational family that is unbounded with bounded and connected members (see Definition <ref> in Section <ref>). In short, since the support of each distribution q in the family is bounded, we can compute its expectation 𝔼_q(L ; λ)[f(L)] as the sum ∑_ℓ∈support(q) q(ℓ) f(ℓ) for any function f.
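Concretely, such expectations reduce to an exact finite sum over the truncated support; a minimal sketch (function names ours):

```python
def truncated_expectation(q_pmf, support, f):
    """E_{q(L; lambda)}[f(L)] computed exactly as sum_{l in support} q(l) * f(l),
    which is finite because the support of q is bounded."""
    return sum(q_pmf(l) * f(l) for l in support)
```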
In Section <ref>, we formally extend the original treatment of Poisson distributions to Gaussians and mixtures of distributions. Second, the variational distribution q(θ | L ; ν) exploits the fact that, conditioned on a fixed network of L layers, we cannot make any statement about the layers greater than L <cit.>:

q(θ | L ; ν) = q(θ_1:L ; ν_1:L) ∏_ℓ=L+1^∞ p(θ_ℓ),

and p(θ_ℓ) can be, for instance, a Gaussian prior. We also fix q(θ_1:L ; ν_1:L) = ∏_ℓ=1^L 𝒩(θ_ℓ ; ν_ℓ, I) and approximate any expectation 𝔼_q(θ | L ; ν)[f(θ_1:L)], for a function f, at the first order with f(𝔼_q(θ | L ; ν)[θ_1:L]) = f(ν_1:L). Finally, we define q(F_i | g_i, L, θ) as a Dirac delta function δ_W_i whose parameters W_i ∈ [0,1]^|𝒱| × L × d are computed by a function f(g_i). Choosing the delta function makes the computation of expectations straightforward, but other choices can, in principle, be made. We propose two versions of the function f(g_i), whose choice is left as a hyper-parameter: the first computes W_i(v,ℓ) = f_ℓ(x_v) and the second computes W_i(v,ℓ) = f_ℓ(h^ℓ_v), where f_ℓ is a Multi-Layer Perceptron (MLP). In other words, a node's outgoing messages will be filtered according to either the input features of that node or its latent representation at layer ℓ. Given F_i ∼ q(F_i | g_i, L, θ), we extend Equation <ref> to apply such filtering:

h_v^ℓ = ϕ^ℓ(h_v^ℓ-1, Ψ({F_i(u,ℓ-1) ⊙ ψ^ℓ(h_u^ℓ-1, a^ℓ_uv) | u ∈𝒩_v})),

with ⊙ being the element-wise product. Such message filtering is similar in spirit to <cit.>, with the difference that our approach does not require approximating the gradient due to discrete operations and is fully differentiable.

Computation of the ELBO. Our choice of the variational distributions allows us to compute the ELBO efficiently and maximize it using backpropagation <cit.>. In particular, we write

ln p(g_i, Y_i) ⩾ 𝔼_q(θ, L, F_i | g_i, Y_i)[ln p(Y_i, L, F_i, θ | g_i) - ln q(L, F_i, θ | g_i)]
= 𝔼_q(L ; λ)[ ln p(L)/q(L ; λ) + 𝔼_q(θ | L ; ν)[ln p(θ)/q(θ | L ; ν)] + 𝔼_q(θ | L ; ν) q(F_i | g_i, L, θ)[ln p(F_i)/q(F_i | g_i, L, θ) + ln p(Y_i | L, F_i, θ, g_i)] ]
= ∑_ℓ=1^L̂ q(ℓ ; λ)[ ln p(ℓ)/q(ℓ ; λ) + ln p(ν)/q(ν | ℓ ; ν) + ln p(W_i)/q(W_i | g_i, ℓ, ν) + ln p(Y_i | ℓ, W_i, ν, g_i) ],

where L̂ = max support(q(L)), p(L) is a prior over layers, such as a Poisson distribution, and p(F_i) is a prior over all possible message filtering schemes (uninformative in this work). The second equivalence relies on the specific properties of the variational distributions and the first-order approximation previously mentioned. To make predictions about a new graph g_j, AMP uses the variational posterior as an approximation of the predictive distribution

p(Y_j | g_j, 𝒟) ≈ 𝔼_q(θ, L, F_j | g_j)[p(Y_j ; Ω_L(g_j, F_j, θ))] ≈ ∑_ℓ=1^L̂ q(ℓ ; λ) p(Y_j ; Ω_ℓ(g_j, W_j, ν)).

In other words, we obtain the prediction as the weighted sum of the L̂ output layers of the DGN, and the variational distribution over layers provides said weights.

Practical considerations. What we have just described is referred to as dynamic variational inference <cit.>, because the depth of AMP, and hence its set of parameters, varies dynamically within a variational inference framework. In particular, the support of the distribution q(ℓ ; λ) is obtained by truncating the distribution at a given quantile (0.99 in our experiments). Whenever the quantile shifts, we either grow or shrink the DGN, increasing or reducing the number of message passing operations performed between the nodes. Whenever the network is increased, we instantiate a new layer and, in the case of libraries such as PyTorch <cit.>, add it to the computational graph of the optimizer (in our case Adam, where we have to reset the optimizer's state each time). In addition, we have to increase the output dimension of the function f(g_i) that produces W_i. When shrinking the DGN, we can either retain the excess layers in the eventuality of future expansions or delete them; here we opt for the retention strategy.
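Putting these pieces together, the sketch below illustrates AMP's forward pass: sender messages are modulated by the filters F(u, ℓ), each layer emits a prediction, and the output is the q(ℓ; λ)-weighted sum over the truncated support. This is our schematic reading of the equations above (with ψ taken as the identity and Ψ as a sum for brevity), not the reference implementation.

```python
import torch

def amp_forward(layers, readouts, filters, h, edge_index, q_weights):
    """layers[l]/readouts[l]: per-layer update and readout modules; filters[l]
    holds one vector in [0,1]^d per node (the soft filter F(., l)); q_weights[l]
    is q(l; lambda) evaluated on the truncated support of the depth distribution."""
    src, dst = edge_index
    y = 0.0
    for l in range(len(layers)):
        msg = filters[l][src] * h[src]                     # F(u, l) ⊙ message from sender u
        agg = torch.zeros_like(h).index_add_(0, dst, msg)  # permutation-invariant sum
        h = layers[l](torch.cat([h, agg], dim=-1))         # phi^l: update node embeddings
        y = y + q_weights[l] * readouts[l](h)              # q(l; lambda)-weighted layer output
    return y
```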
Another important advantage of AMP is that the depth is no longer a hyper-parameter to be tuned. Depth can have a considerable effect on DGNs' performances <cit.>, and for this reason, different configurations are typically tried via grid or random searches. This can severely impact the time necessary to find a good hyper-parameter configuration for the task. At the same time, AMP requires choosing a family of truncated distributions q(L) and a proper initialization, but it is generally believed that this has a smaller effect on the final result <cit.>. Also, setting uninformative priors seemed to work well in our experiments, but priors can also be used, for example, to penalize the computational efforts of deeper networks.

§ ON OVERSMOOTHING, OVERSQUASHING, AND UNDERREACHING

In this section, we discuss AMP's implications on oversmoothing, oversquashing, and underreaching, all of which hamper the ability of DGNs to capture long-range interactions between nodes in the graph and are related in subtle ways.

Oversmoothing. Oversmoothing has been formally defined by <cit.> as the convergence of node embeddings' similarity as the number of message passing layers increases. In other words, it formalizes the widely accepted notion that node embeddings tend to become identical after many layers of message passing. Different oversmoothing metrics have been proposed, and in this work, we consider the Dirichlet energy <cit.> at layer ℓ defined as

E(H^ℓ) = 1/|𝒱| ∑_u ∈𝒱 ∑_v ∈𝒩_u || h_u^ℓ - h_v^ℓ ||^2,

where we indicate with H^ℓ the set of node embeddings computed at layer ℓ. There are at least two reasons why AMP alleviates oversmoothing. The first is that, in principle, the adaptive message filtering scheme reduces the synchronous exchange of all messages at a given layer, and message exchange will be different depending on the specific layer. The second is that the readout mapping of each layer directly propagates the gradient of the loss into the corresponding message passing operation, which encourages diversity of the node representations of each layer ℓ as long as q(ℓ ; λ) is large enough (that is, layer ℓ's output is important for the final prediction). In our experiments, we will show that AMP can generate architectures in which the Dirichlet energy does not decay exponentially and which thus suffer less from oversmoothing than the baselines.
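For reference, the Dirichlet energy above can be computed in a few lines; a sketch of ours over a directed edge list:

```python
import torch

def dirichlet_energy(h, edge_index):
    """E(H^l) = (1/|V|) sum_u sum_{v in N(u)} ||h_u - h_v||^2, with the double
    sum realized as a single sum over the oriented edges (u, v)."""
    src, dst = edge_index
    return ((h[src] - h[dst]) ** 2).sum() / h.size(0)
```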
Oversquashing. The term oversquashing refers to the compression of an exponentially-growing amount of information <cit.> into fixed-size node embeddings <cit.>, causing a possibly severe bottleneck that hampers DGNs' ability to effectively propagate task-specific information. An intuitive visualization is provided in Figure <ref> (left), where node 3 of the graph defined in Figure <ref> needs to compress information of its 2-hop neighborhood into a single node embedding. The literature on the topic is already vast despite its very recent introduction; some works address oversquashing through rewiring of the original graph structure <cit.>, while others preserve information by viewing the message passing operations through the lens of ordinary differential equations <cit.>. By properly modifying the curvature of a graph <cit.>, some graph rewiring approaches aim at increasing the sensitivity of a node u's embedding h_u^L with respect to the input x_v of another node v, that is, || ∂h_u^L/∂x_v ||_1. <cit.> argue that increasing the sensitivity can alleviate oversquashing and better capture long-range dependencies. Indeed, by rewiring two distant nodes with a new edge, the sensitivity of these two nodes will almost certainly increase. While we do agree that long-range dependencies can be better captured, we argue that rewiring might make the oversquashing problem worse by adding extra information to be compressed into a node's embedding (assuming other edges are not removed). In contrast, the adaptive filtering scheme of AMP shown in Figure <ref> (right) might decrease the overall sensitivity defined above, but at the same time it will reduce the number of messages that need to be compressed into node 3, hence alleviating oversquashing. Similarly, the synthetic datasets defined in <cit.>, which are meant to measure how well a model addresses oversquashing, require that all information is preserved to solve a task. This would certainly be a good test-bed for ODE-based models <cit.>, but other tasks might require propagating only a subset of the total information contained in the graph. The ability to isolate such information from the rest can be seen as a solution to the oversquashing problem, which is exactly the opposite goal of the synthetic tasks previously mentioned. In summary, the problem of oversquashing is clearly multi-faceted and requires great care regarding its evaluation. As such, it might be a good idea for the future to decompose oversquashing into simpler sub-problems, such as the ability to isolate the relevant information (which AMP can do), the ability to propagate all information, and the ability to increase the sensitivity between far-away nodes.

Underreaching. Finally, underreaching is defined as the inability of standard message passing with K layers to capture interactions of range greater than K. <cit.> addresses this problem by adding a message passing layer on a fully connected graph at the last layer of the architecture, which empirically improves the performances but does not fundamentally solve the problem. A solution to this problem is letting the model decide the right depth of the architecture for the task, which is exactly what AMP does. We conclude the section with a result on the ability of AMP to propagate a message unchanged between any two connected nodes in a graph. Let us assume that a graph g contains two (not necessarily distinct) nodes v and u, and a walk ((v,v_2),…,(v_K,u)) of length K>1 exists between them. We also assume that each node can be associated with a unique identifier by a differentiable function z(g). Then, there exists a parametrization of AMP that can propagate x_v to node u unchanged. To prove the statement, we first choose a parametrization of q(ℓ ; λ) such that support(q)=K.
Then, we can choose a filtering function f(g, z(g)) = W ∈ [0,1]^|𝒱| × L × d such that
* W(v, 1) = 1 and W(*, 1) = 0 otherwise,
* W(v_ℓ, ℓ) = 1 and W(*, ℓ) = 0 otherwise (ℓ ∈ [2,K]).
At this point, message passing can be instantiated from Equation <ref> as follows:

h_v'^ℓ = ∑_u' ∈𝒩_v' W(u',ℓ-1) ⊙ h_u'^ℓ-1,

where we recall that W(u',ℓ-1) represents the filter for the incoming message h^ℓ-1_u', to achieve the propagation of x_v to node u unchanged; in fact, h^K_u = x_v. Figure <ref> sketches the process formalized in the theorem. It is worth noting that the identifiability assumption is satisfied whenever each pair of node attributes differs, which is easily the case when one deals with continuous node attributes. In light of the above discussion, this theorem hints at AMP indeed mitigating oversmoothing, oversquashing, and underreaching by being able to propagate a single message unchanged, which is reminiscent of asynchronous message passing <cit.>.

§ EXTENSION TO NEW FAMILIES OF TRUNCATED DISTRIBUTIONS

The family of truncated Poisson distributions, proposed by <cit.> to learn unbounded depth networks, satisfies specific requirements that allow us to efficiently perform (variational) inference. In particular, by truncating the Poisson distribution at its c-quantile, one can bound its support and compute expectations in finite time. However, the Poisson distribution suffers from equidispersion, meaning that the variance is equal to the mean; this is a particularly limiting scenario when learning distributions over the importance of layers. In fact, one might also want to model variances that are smaller or greater than the mean, which is referred to as under- and over-dispersion, respectively, to learn a broader class of distributions <cit.>. To address this problem, in the following we introduce two families of distributions and prove that they also satisfy the requirements defined in <cit.>; we formally recall such requirements below.

A variational family Q = {q(ω)} over ℕ^+ is unbounded with connected and bounded members if
* ∀ q ∈ Q, support(q) is bounded;
* ∀ L ∈ℕ^+, ∃ q ∈ Q such that L ∈ argmax(q);
* each parameter in the set ω is a continuous variable.
Condition 1 is necessary to compute the expectation over q(ℓ ; λ) in finite time, condition 2 ensures that we can give enough probability mass to each point in the support of q, and condition 3 is required for learning the distributions' parameters in a differentiable manner.

The discrete folded normal distribution. Folded normal (FN) distributions <cit.> can model under-, equi-, and over-dispersion. They are parametrized by a mean parameter μ and a standard deviation σ, with density defined as

p_FN(x;μ,σ) = 1/√(2πσ^2) e^-(x - μ)^2/2σ^2 + 1/√(2πσ^2) e^-(x + μ)^2/2σ^2,  μ,σ∈ℝ, x ⩾ 0.

To use an FN distribution in AMP, the idea is to first define a discrete version of the folded normal (DFN) with the strategy highlighted in <cit.>:

p_DFN(0;μ,σ) = S_FN(1; μ,σ),
p_DFN(x;μ,σ) = S_FN(x+1; μ,σ) - S_FN(x; μ,σ),  ∀ x ∈ℕ^+,

where S_FN(x; μ,σ) is the cumulative distribution function (c.d.f.) of the folded normal distribution evaluated at x.[We note that the support is defined over ℕ and not on ℕ^+, but this is not an issue from a practical point of view.] It is also useful to notice the equivalence between the c.d.f. of the DFN, S_DFN(x;μ,σ), and that of the folded normal, S_FN(x;μ,σ):

S_DFN(x;μ,σ) = ∑_i=0^x p_DFN(i;μ,σ) = S_FN(x+1; μ,σ),  x ∈ℕ,

which implies that S_DFN(x;μ,σ) ⩾ S_FN(x;μ,σ). Figure <ref> shows the probability mass function (p.m.f.)
of a DFN distribution with μ=1 and σ=5 and its cumulative mass function (c.m.f.). Clearly, condition 3 of Definition <ref> is satisfied. It is also trivial to satisfy condition 2 by choosing a peaked distribution with a small value of σ. In what follows, we focus on lower and upper bounds to the c-quantile of the DFN distribution, so that we know we can truncate the distribution at its finite c-quantile, meaning condition 1 is also satisfied.

There exist lower and upper bounds to the c-quantile, 0 < c < 1, of any DFN distribution with σ > 0.

We first need to compute a lower bound to the quantile of the FN distribution since there is no closed formula for it. To start, we note that the c.d.f. of the Gaussian distribution is greater than or equal to that of a folded normal distribution:

1/2 erf((x-μ)/(σ√2)) + 1/2 ⩾ 1/2 erf((x-μ)/(σ√2)) + 1/2 erf((x+μ)/(σ√2)) = S_FN(x; μ,σ),

since 1/2 erf((x+μ)/(σ√2)) ⩽ 1/2. This implies that the c-quantile x_G of the Gaussian, which we know how to compute, is reached earlier than that of the FN, x_FN; that is, x_G ⩽ x_FN, and in particular ⌊x_G⌋ ⩽ ⌊x_FN⌋ are also lower bounds. It then follows from Equation <ref> that ⌊x_G⌋ - 1 is a lower bound for the DFN distribution. To find an upper bound, we apply Chernoff's bound

p(X ⩾ x) ⩽ M_X(t)/e^tx,  ∀ t > 0,

where X is a random variable that follows a folded normal distribution with mean μ and standard deviation σ, and M_X(t) is the known moment generating function of X. To find an upper bound to the c-quantile, we need (1-c) = M_X(t)/e^tx for some choice of t. Defining Φ as the normal cumulative distribution function, Φ(x) = 1/2[1 + erf(x/√2)] ⩾ 0, we choose t = 1/σ and obtain

M_X(t)/e^tx = e^-tx( e^σ^2t^2/2 + μt Φ(μ/σ + σt) + e^σ^2t^2/2 - μt Φ(-μ/σ + σt) )
= e^-x/σ e^1/2 e^μ/σ( Φ(μ/σ + 1) + Φ(-μ/σ + 1) e^-2μ/σ )
= k e^(μ-x)/σ,  k = e^1/2( Φ(μ/σ + 1) + Φ(-μ/σ + 1) e^-2μ/σ ) > 0.

Therefore, the upper bound of the quantile is obtained by solving

k e^(μ-x)/σ = (1-c) ⟹ ln k + (μ-x)/σ = ln(1-c) ⟹ σ ln k + μ - x = σ ln(1-c) ⟹ x = μ + σ ln k - σ ln(1-c).

Therefore, if the upper bound to the quantile of the FN is x, it follows from Equation <ref> that x-1 is also an upper bound of the DFN. Consequently, we can efficiently find the true quantile by running a binary search between the lower and the upper bounds provided by the theory. Figure <ref> (right) shows an example of lower and upper bounds (vertical dashed lines) as well as the true quantile of the FN distribution.
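A possible implementation of this procedure is sketched below, following the theorem: evaluate the folded normal c.d.f. with error functions, use the Gaussian quantile and the Chernoff bound as brackets, and binary-search the integer quantile. The SciPy-based code and its function names are ours.

```python
import numpy as np
from scipy.special import erf
from scipy.stats import norm

def fn_cdf(x, mu, sigma):
    """C.d.f. of the folded normal distribution, S_FN(x; mu, sigma)."""
    return 0.5 * (erf((x - mu) / (sigma * np.sqrt(2)))
                  + erf((x + mu) / (sigma * np.sqrt(2))))

def dfn_quantile(c, mu, sigma):
    """Smallest integer x with S_DFN(x) = S_FN(x + 1) >= c."""
    lo = max(int(np.floor(norm.ppf(c, loc=mu, scale=sigma))) - 1, 0)  # Gaussian lower bound
    Phi = norm.cdf
    k = np.exp(0.5) * (Phi(mu / sigma + 1) + Phi(-mu / sigma + 1) * np.exp(-2 * mu / sigma))
    hi = int(np.ceil(mu + sigma * np.log(k) - sigma * np.log(1 - c))) - 1  # Chernoff upper bound
    hi = max(hi, lo)
    while lo < hi:                      # binary search between the two bounds
        mid = (lo + hi) // 2
        if fn_cdf(mid + 1, mu, sigma) >= c:
            hi = mid
        else:
            lo = mid + 1
    return lo
```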
A mixture of simpler distributions. It is possible to learn more complex distributions q(ℓ ; ω) that satisfy the conditions of Definition <ref> by mixing simpler distributions like the DFN defined above. A mixture of C families of unbounded distributions q_1(ℓ ; ω), …, q_C(ℓ ; ω) with bounded and connected members is defined as

q_ℳ(ℓ ; ω) = ∑_i=1^C w_i q_i(ℓ ; ω),

where 0 ⩽ w_i ⩽ 1 is the i-th mixture weight and ∑_i=1^C w_i = 1. Conditions 2 and 3 are again trivially satisfied (a mixture can always collapse to one of its distributions, which satisfy said conditions), and below we show that lower and upper bounds still exist.

There exist lower and upper bounds to the c-quantile, 0 < c < 1, of a mixture of C distributions that satisfy the conditions of Definition <ref>, provided that lower and upper bounds exist for each distribution in the mixture.

The c.m.f. of a mixture of discrete distributions can be written as a weighted sum of c.m.f.s:

S_ℳ(x; ω) = ∑_i=1^C w_i S_i(x ; ω).

Let x^* be the greatest upper bound of the c-quantile across all C components of the mixture, and let i^* be the associated component. It follows that, ∀ j, S_j(x^* ; ω) ⩾ c, and

S_ℳ(x^*; ω) = ∑_i=1^C w_i S_i(x^* ; ω) ⩾ ∑_i=1^C w_i c = c.

Therefore, x^* is also an upper bound for the mixture of distributions. It is possible to prove that a lower bound of the mixture is the smallest lower bound of the c-quantile across all C components using a similar approach. To summarize, we have shown how one can use more complex families of distributions in the context of AMP, allowing us to model under- and over-dispersion. In this work, we will treat the choice of the family of distributions q(ℓ ; ω) as a hyper-parameter to be tuned.

§ EXPERIMENTAL DETAILS

In what follows, we provide details about our experimental setup. To foster reproducibility, our experiments use the PyDGN framework, which ensures fair and rigorous comparisons <cit.>. We evaluate AMP on two sets of tasks, both requiring the ability to capture long-range interactions to correctly predict target values <cit.>.

Synthetic Datasets. We first consider the tasks of predicting the diameter, the single-source shortest paths (SSSP), and the node eccentricity on synthetic graphs <cit.>. In particular, we closely follow the reproducible setup of <cit.>, where graphs have sizes ranging from 25 to 35 nodes, the connectivity is generated with different graph generators, and each node has one random feature (sampled from a normal distribution) attached. For SSSP, a binary feature is added to each node to indicate whether it is the source node in the graph or not. Each dataset amounts to 7040 graphs, split into 5120 for training, 640 for validation, and 1280 for testing. The metric to be optimized is the log_10 of the mean squared error (MSE). We have observed that the performance reported in <cit.> can be improved by a significant margin if we average results over 20 rather than four final (that is, after model selection) training runs and increase the patience of the early stopper from 100 to 300, giving models more time to converge to a good solution. Therefore, to ensure a more robust set of results, we re-evaluated all baselines[In this regard, we received support from the authors of the original publication.] considering these changes, and in many cases, we improved the scores. We combine AMP with three message passing architectures, namely the Graph Convolutional Network (GCN) <cit.>, the Graph Isomorphism Network (GIN) <cit.>, and the Anti-Symmetric DGN (ADGN) <cit.>. To perform the grid search on AMP, in addition to the hyper-parameter ranges used for the base methods (with the exception of the depth), we tested four different distributions q(ℓ; λ): a Poisson with initial rate λ=10, an FN with initial parameters μ=10 and σ∈{5, 10}, and a mixture of two folded normal distributions with initial parameters μ_1=5, σ_1=3, μ_2=15, σ_2=3. We fix the prior p(θ^ℓ)=𝒩(θ^ℓ; 0, 10·I), and we choose between three priors p(L): an uninformative prior, a Poisson with rate 5, and a folded normal with parameters μ=5 and σ=10. Finally, the message filtering function was chosen between one that does not filter at all, a function f(x) acting on node features, and a function f(h^ℓ) acting on node embeddings.

Chemical Datasets. We also test AMP on real-world chemical node and graph prediction benchmarks, taken from the Long Range Graph Benchmark, called peptides-func and peptides-struct <cit.>.
The first is an imbalanced multi-label graph classification dataset, with ten total peptide functions to be predicted, and we evaluate the performances using the average precision (AP) provided in the Open Graph Benchmark package <cit.>. The second is a multi-label graph regression task where we want to predict properties of the peptides based on their 3D information, and one evaluates the mean absolute error (MAE). Both datasets contain 15535 peptides with approximately 150 nodes each, and the data is split into 70% for training, 15% for validation, and 15% for testing. It was recently believed that graph transformer models were the best-performing models on these tasks <cit.>, possibly due to extra features that were included in the input, such as the Laplacian encodings <cit.>. It turns out that DGNs can be as good as or better than graph transformers, at least on these tasks, when one runs a proper cross-validation of these models; indeed, we will rely on the fair re-evaluation of <cit.>, which shows how simple baselines like a GCN can achieve very competitive performances when properly tuned. In addition to the original node features, we follow previous works <cit.> and add random-walk structural encodings for peptides-func and Laplacian positional encodings for peptides-struct. Also, in the interest of completeness, we will include results from the original paper <cit.>, its re-evaluation <cit.>, and results taken from other papers, such as CRaWL <cit.>, DRew <cit.>, Exphormer <cit.>, GRIT <cit.>, Graph ViT and G-MLPMixer <cit.>, LASER <cit.>, CO-GNN <cit.>, NBA <cit.>, GRED <cit.> and PR-MPNN <cit.>. We evaluate AMP on GCN, GINE <cit.>, and GatedGCN <cit.>; our grid search follows the best hyper-parameters reported by <cit.> (except the depth), and we tested three different distributions q(ℓ; λ): a Poisson with initial rate λ=5, a folded normal with initial parameters μ=5 and σ=1, and a mixture of two folded normal distributions with initial parameters μ_1=1, σ_1=1, μ_2=5, σ_2=1. The tested message filtering functions, the priors on θ, and the number of layers are the same as for the synthetic tasks. Because the optimal depth ultimately depends on the task and the specific configuration of the model, we cannot impose arbitrary restrictions on the number of total parameters as done in <cit.>; instead, we are interested in letting the model freely adapt and choose the best parametrization that maximizes the performance. In this respect, we also refer the reader to the discussion on AMP's parametrization in Section <ref>.

§ RESULTS

This section summarizes our empirical results and main takeaways. We first discuss the quantitative results on the long-range datasets before moving to qualitative analyses on the behavior of AMP.

§.§ Quantitative Results

Table <ref> reports the test log_10(MSE) for the Diameter, SSSP, and Eccentricity datasets for all baselines and AMP versions. Numbers are averaged over 20 runs and the standard deviation is reported. We can see how, regardless of the dataset, applying AMP to each of the baselines tested always grants a reduction of the test error, with an average improvement of 63% on Diameter, 72% on SSSP, and 32% on Eccentricity. These results show that learning the proper depth of a network and a policy for filtering messages exchanged between nodes is more effective than relying on a manually crafted grid search and a fully synchronous message passing behavior.
Eccentricity is the most difficult task to solve, whereas one could claim that SSSP is almost solved; still, AMP grants a non-negligible error reduction when applying ADGN to SSSP. We achieved the greatest reduction in error with respect to the GIN model, probably because the authors in <cit.> found that a 1-layer GIN was the best configuration across all tasks after tuning the depth. This stresses the positive impact that letting the model learn how and when to propagate messages can have on the final scores. We also point out that, while ADGN is provably robust to the oversquashing problem and is the baseline with the best performance, AMP is conducive to further improvements due to its ability to mitigate other issues. We believe these are all strong signs of the benefits of an adaptive and fully-differentiable message passing architecture. On the chemical datasets (Table <ref>), we again observe a similar trend. Regardless of the base message passing architecture, AMP consistently improves its performance on classification and regression tasks by freely adapting its depth and filtering strategy during training. On peptides-func, we achieve an improvement of 2 to 3.4% compared to the base models, and a reduction of MAE on peptides-struct that positions all AMP versions at state-of-the-art levels. Our analyses also found that the number of hidden units is an important hyper-parameter to perform well on these tasks, and a larger value seems to correlate well with good performances. Combined with the above results, we argue that the parameter budget imposed by previous works <cit.> might limit future progress on these tasks, as deeper networks are probably needed to solve them adequately (we provide an analysis of the depth found by AMP below). The average diameter of these peptides is 57, meaning that using ten layers as done in other works might not be enough to capture long-range dependencies <cit.>. In summary, our approach endows simple architectures with the ability to solve tasks better by finding a good parametrization for the problem at hand, and it requires no modification of the base message passing framework at all.

§.§ AMP mitigates oversmoothing and oversquashing

We now comment on AMP's ability to mitigate oversmoothing and oversquashing, and we refer to Figure <ref> for a qualitative analysis of the former (left) and the latter (right). First, we computed the logarithm of the Dirichlet energy over embeddings of GCN for different layers and datasets. In particular, for each dataset, the curve is drawn using one of the models trained during the final runs, whose best configuration was selected by grid search, and the energy is computed using all graphs of the dataset. This analysis reveals that the Dirichlet energy for AMP's variants is typically higher than for the corresponding baselines, and it can exhibit a stable, decreasing, or increasing behavior as the depth grows, in contrast to existing theoretical and empirical research on the GCN model, where the Dirichlet energy constantly decreases and embeddings converge to the same value <cit.> (note that we apply skip connections to the base GCN, so the energy does not immediately decrease). Therefore, it appears that our approach is indeed capable of controlling oversmoothing; we attribute such an advantage to the combination of message filtering and a layer-wise loss, which favors the propagation of gradient to intermediate layers.
We also computed the layer-wise logarithm of node embeddings' sensitivity[Computed on a subset of nodes from the validation set due to the prohibitive computational costs.] as the gradient of the embeddings of the last layer L with respect to the ones of intermediate layers ℓ: ∑_(v,u) ∈ℰ || ∂h_v^L/∂h^ℓ_u ||_1. This sensitivity provides insights into how pruning messages affects oversquashing; in fact, filtering messages might reduce said sensitivity with respect to the input but greatly increase it for some intermediate layers. We report a quite heterogeneous picture in Figure <ref> (right): the sensitivity of AMP can peak at the first or last layers, increase abruptly, or remain relatively stable. In all these cases, we have already seen how AMP_GCN achieves a substantial performance improvement on tasks where addressing the oversquashing problem seems necessary, hence effectively mitigating the problem. Because sensitivity has been connected to oversquashing, as discussed in Section <ref>, this empirical evidence warns against using sensitivity as the sole metric to measure it quantitatively. This is in line with our previous discussions.

§.§ Analysis of the learned depth and AMP's predictions

To inspect the ability of our approach to mitigate underreaching, we delve into the predictions and the learned distributions q(L; λ) on the five tasks. In Figure <ref> (a), we report the mean predictions of the best performing GCN and AMP_GCN runs on the Diameter dataset, where the shaded bands denote the minimal and maximal errors that both models make. Similarly, Figure <ref> (b) shows the same plot but for ADGN and AMP_ADGN on Eccentricity. We can see how AMP generates an almost ideal average prediction on Diameter and is able to deal with higher eccentricities than the base model (despite an almost identical error being achieved in the latter case). If we inspect the learned distributions in Figure <ref> (c), we observe that AMP requires more layers than the baselines (see Table <ref> in the Appendix) to achieve the best score on Eccentricity, which partly explains what we just discussed. Instead, about 20 layers are found to be necessary to solve the task for the Diameter dataset, with all AMP runs attaining a mean value between 17 and 22 layers. Finally, SSSP seems to be the task that requires fewer layers on average, with AMP_GIN selecting less than ten layers to reach a very competitive score. Overall, it appears that folded normal and mixtures of folded normal distributions were selected more frequently as the best hyper-parameter for the synthetic tasks; in particular, the distributions for Eccentricity look sharply peaked, as if the models would need to use only the information computed at the very end of the deep architecture. It is worth remarking that this behavior is completely adaptive and guided by the task, although the initialization of q(L; λ) might, in general, play an important role. Finally, we observe that the distributions learned on the chemical datasets are mostly Poisson ones, and it seems that AMP learns to create deeper networks than the corresponding baselines to achieve better scores. In contrast, these distributions peak at around ten layers for peptides-struct, which is more or less in line with what was reported in previous works <cit.>. In all cases, AMP enables training of very deep architectures thanks to its layer-wise output. However, this has a non-negligible cost regarding the number of parameters to learn.
§.§ On the effects of message filtering

We conclude this section with a visualization of the amount of information pruned at each layer by a final training run of AMP_GCN on all datasets (Figure <ref> (a)) and an ablation study on the benefits of message filtering (Figure <ref> (b-d)). The amount of information pruned is computed by summing the message filters' activations and normalizing the result by the total number of messages exchanged at each layer. We can see how AMP_GCN gradually increases the amount of information to be used for Eccentricity, whereas in peptides-func, this quantity is almost always below 50%. One can appreciate how, depending on the task, the behavior of the message filtering changes significantly, even though the synthetic and chemical datasets share, within each subset of datasets, the same topological properties. The ablation study, on the other hand, provides evidence that message filtering is, in most cases, a good strategy for performance improvements. Figures <ref> (b-d) show, for each model and dataset, whether filtering based on the input features or embeddings provides an improvement in validation performances compared to no filtering. This is represented by points lying outside of the grey area. The x and y axes represent the reduction or improvement in score, and each point compares the validation scores of the best configurations using no filtering with those that apply a filtering strategy. Empirically, we observe that filtering based on embeddings improves or maintains the scores in many cases (12 out of 15). At the same time, input-based filtering improves the performances, sometimes by a larger margin, in 11 out of 15 cases. We conclude that the choice of which filtering strategy to use remains a matter of empirical investigation.

§ LIMITATIONS AND FUTURE WORK

AMP is inherently limited by the local processing implemented in each layer. In principle, to propagate messages between more than one pair of nodes at shortest-path distance K, it might be necessary to increase the depth of the DGN to values much higher than K (for instance, in the case of a chain). More research is needed to identify the exact topological conditions under which a given number of messages can be propagated, unchanged, for a given number of layers, and whether or not AMP can learn to do so. Also, AMP can require more parameters than a standard message passing architecture of the same depth due to the layer-wise message filters and output layers; in other words, there is a trade-off between the full adaptivity of our approach and the number of parameters required. Future works should investigate whether making AMP more parameter-efficient is possible. Our approach tries to address oversquashing, but part of this problem is tightly connected to the dimension of the latent node embeddings <cit.>, which we need to select as part of the hyper-parameter search. To further alleviate oversquashing, an interesting direction for future work would be to let the width of each layer grow dynamically, as done, e.g., in <cit.>. Combined with anti-symmetric approaches that provably guarantee the preservation of information <cit.>, this strategy might find the most compressed embedding for the task at hand <cit.>. Finally, we observed slight instability when training AMP, with sudden bumps in the loss, possibly due to the sudden addition and removal of some layers at critical stages of learning. Typically, the trained networks recover fast (in terms of loss values), but avoiding such effects remains a practical issue.
We mention, nevertheless, that newly inserted layers have a relatively small weight compared to the rest of the distribution when the quantile of the truncated layers' distribution is chosen to be high, as in our experiments, so the initial impact of a new, randomly initialized layer on training stability is generally minimal.

§ CONCLUSIONS

Capturing long-range dependencies is a longstanding problem in the graph machine-learning community. This work introduces Adaptive Message Passing (AMP), a probabilistic framework that can endow most message passing architectures with the ability to learn how many messages to exchange between nodes and which messages to filter out. Our approach actively targets the long-range issue by relying on the observation that filtering messages mitigates oversmoothing and oversquashing, whereas learning the depth can ideally solve underreaching. We have discussed the multifaceted nature of these problems and shown, with theoretical and empirical arguments, that AMP can effectively address them. We have also extended the family of unbounded distributions to capture under- and over-dispersion, thus allowing the model to learn almost any continuous distribution. Our approach can plug in most existing message passing layers, consistently improving the performance on five tasks that evaluate the ability to capture long-range dependencies. Importantly, we achieved competitive results on these datasets without imposing strong inductive biases, letting the models decide when a node should exchange its message or part of it. Through qualitative analyses, our findings reveal how AMP learns very deep architectures if necessary for the task, and that the amount of information propagated can be greatly reduced compared to classical message passing. Overall, our approach suggests that it might not be necessary to alter the initial graph structure, e.g., through rewiring, to improve the performances on long-range tasks; rather, it could be enough to choose the right information to propagate (using the original graph) at each point in time. We believe Adaptive Message Passing will foster exciting research opportunities in the graph machine learning field and find successful applications in the fields of physics, chemistry, and material sciences.

§ TUNED DEPTH OF BASE MODELS

We report, for the base architectures we have tested within AMP, the number of layers selected by the hyper-parameter search in the original papers. For the synthetic datasets, we obtained this information directly from the authors <cit.>, whereas for the chemical datasets this information was already available in <cit.>.
Equivariance in Approximation by Compact Sets
Alison Rosenblum
January 14, 2024

This paper presents an asymptotic preserving (AP) implicit-explicit (IMEX) scheme for solving the quantum BGK equation using the Hermite spectral method. The distribution function is expanded in a series of Hermite polynomials, with the Gaussian function serving as the weight function. The main challenge in this numerical scheme lies in efficiently expanding the quantum Maxwellian with the Hermite basis functions. To overcome this, we simplify the problem to the calculation of polylogarithms and propose an efficient algorithm to handle it, utilizing the Gauss-Hermite quadrature. Several numerical simulations, including a 2D spatial lid-driven cavity flow, demonstrate the AP property and remarkable efficiency of this method.

Keywords: quantum BGK equation; AP IMEX scheme; computation of polylogarithms; Hermite spectral method

§ INTRODUCTION

The quantum Boltzmann equation models the evolution of a dilute quantum gas flow, which was initially derived by Uehling and Uhlenbeck in <cit.>. It incorporates quantum effects that cannot be neglected for light molecules at low temperatures. This equation is now applied not only to low-temperature gases but also to model both bosons and fermions, potentially trapped by a confining potential. The quantum Boltzmann equation is formulated in six-dimensional physical and phase space. The collision operator in this equation involves a five-dimensional integral, where the integrand is combined with complicated cubic terms. These complexities pose significant challenges in studying the quantum Boltzmann equation, both theoretically and numerically. Notably, Bose-Einstein condensation is a phenomenon wherein the distribution function can exhibit finite blow-up or weak convergence towards Dirac deltas, even when the kinetic energy is conserved <cit.>. In this context, we focus on the numerical methods to solve the quantum Boltzmann equation. The initial attempt, proposed in <cit.>, utilizes the symmetric property to simplify the collision term. Subsequently, leveraging the convolution-like structure of the collision operator, the fast Fourier method for the quantum Boltzmann equation has been introduced in <cit.>. This method has been extended to the inhomogeneous case in <cit.>. Additionally, the diffusive relaxation system has been adopted in <cit.> to approximate the full collision term, and a Fokker-Planck-like approximation has been proposed in <cit.>. In <cit.>, the Fourier spectral method is employed for the quantum Boltzmann-Nordheim equation, particularly for describing the long-time behavior of Bose-Einstein condensation and Fermi-Dirac saturation. However, numerically computing the intricate collision operator can be quite expensive, making it difficult to handle high-dimensional problems with these methods. In the classical case, the BGK collision model serves as a widely used surrogate model for the Boltzmann operator, approximating collisions through a simple relaxation mechanism. In quantum kinetic regimes, the quantum BGK model is also extensively adopted to approximate the original collision operator <cit.>, and has also been extended to the multi-species case <cit.>. Several numerical methods have been developed to tackle the quantum Boltzmann equation with the BGK model, often referred to as the quantum BGK equation. For instance, in <cit.>, the lattice Boltzmann method is employed for the quantum BGK equation.
In addition, a macroscopic reduced model known as the 13-moment system has been derived for the quantum BGK equation using modified Hermite polynomials <cit.>. In this work, we propose an asymptotic preserving (AP) scheme for solving the quantum BGK equation using the Hermite spectral method. A specially chosen expansion center is adopted in the Gauss weight function to generate the related Hermite polynomials, enhancing the approximation accuracy of the basis functions. This method has proven successful in solving the classical Boltzmann equation <cit.>, and has been extended to address the collisional plasma scenarios <cit.>. For the quantum BGK equation, a primary challenge of the Hermite spectral method lies in approximating the quantum equilibrium. We present a highly efficient algorithm to obtain the expansion coefficients of the equilibrium within the framework of the Hermite spectral method. The complex computations are eventually reduced to evaluating the value of the polylogarithm function, which can be further simplified into a one-dimensional integral.In the numerical experiments, the simulations with periodic initial values are first tested, and the order of convergence validates the AP property of this numerical scheme. Subsequently, the Sod problem is implemented and the numerical results are compared with the solutions of the full quantum Boltzmann equation in <cit.>. The excellent agreement implies that the quantum BGK model serves as a good approximation of the original collision operator. Finally, the mixing regime problem and a spatially 2-dimensional lid-driven cavity flow are conducted to further demonstrate the superiority of this Hermite spectral method. The rest of this paper is organized as follows. In Sec. <ref>, the quantum Boltzmann equation and the BGK collision model are introduced. Sec. <ref> presents the general framework of the Hermite spectral method to solve the quantum BGK equation. A highly efficient algorithm to approximate the equilibrium is proposed in Sec. <ref> with several numerical experiments displayed in Sec. <ref> to validate this Hermite spectral method. This paper concludes with some remarks in Sec. <ref> and additional content in App. <ref>. § PRELIMINARIES In this section, we provide a concise overview of the quantum Boltzmann equation and discuss the quantum BGK model, which serves as a simplified collision operator for the quantum gas. §.§ Quantum Boltzmann equationThe quantum Boltzmann equation governs the time evolution of the phase-space density f(t, x, v), representing the probability of finding a quantum particle at time t⩾ 0 in the phase-space volume dx dv. Here x∈Ω⊂ℝ^D is the dimensionless position variable, and v∈ℝ^D is the dimensionless microscopic velocity variable. The dimensionless form of the quantum Boltzmann equation is expressed as <cit.>

∂f/∂t + v·∇_x f = (1/ϵ) Q[f](v),    t⩾0, x∈Ω⊂ℝ^D, v∈ℝ^D,

where ϵ is the Knudsen number, and D is the dimension. Q[f](v) represents the collision operator with quantum effect. The original collision term has the cubic form as follows:

Q_q[f](v) = ∫_ℝ^D∫_𝕊^{D-1} B(|g|, σ) [f' f_∗' (1-θ_0 f)(1-θ_0 f_∗) - f f_∗ (1-θ_0 f')(1-θ_0 f_∗')] dσ dv_∗,

where g = v - v_∗, and f, f_∗, f' and f_∗' represent f(t, x, v), f(t, x, v_∗), f(t, x, v') and f(t, x, v_∗'). (v, v_∗) and (v', v_∗') are the pre-collision and the post-collision velocities, respectively, which are determined by

v' = (v + v_∗)/2 + (1/2)|v - v_∗| σ,
v_∗' = (v + v_∗)/2 - (1/2)|v - v_∗| σ,

where σ∈𝕊^{D-1} is the unit vector along v' - v_∗'. The collision kernel B is a non-negative function that depends only on |g| and cos ω, where ω is the angle between σ and g <cit.>.
The parameter θ_0 indicates the type of particles <cit.>, which can be classified into three types: * when θ_0 = ħ^D > 0, the particles are the Fermi-Dirac gas, also referred to as the Fermi gas. Here, ħ represents the rescaled Planck constant <cit.>. For the Fermi gas, the Pauli exclusion principle gives us the inequality <cit.> f ⩽ 1/θ_0. * when θ_0 = -ħ^D < 0, the particles are the Bose-Einstein gas, or the Bose gas.* when θ_0 = 0, the collision model (<ref>) reduces to the classical Boltzmann collision operator

Q_c[f](v) = ∫_ℝ^D∫_𝕊^{D-1} B(|g|, σ)(f' f_∗' - f f_∗) dσ dv_∗,

and the particles are the classical gases.In the dimensionless quantum Boltzmann equation (<ref>), the macroscopic variables such as density ρ, velocity u and internal energy e_0 are related to the distribution function f(t, x, v) through the following equations:

ρ(t, x) = ∫_ℝ^D f(t, x, v) dv,    u(t, x) = (1/ρ) ∫_ℝ^D v f(t, x, v) dv,    e_0(t, x) = (1/(2ρ)) ∫_ℝ^D |v - u|^2 f(t, x, v) dv.

Additionally, the stress tensor ℙ and the heat flux q are defined as

ℙ = ∫_ℝ^D [(v - u) ⊗ (v - u) - (1/3)|v - u|^2 I] f dv,    q = (1/2) ∫_ℝ^D (v - u)|v - u|^2 f dv.

Compared to the classical Boltzmann equation (<ref>), the quantum Boltzmann operator (<ref>) exhibits cubic dependence on the distribution density f, and involves more nonlinearity. This makes the theoretical and numerical study of the quantum Boltzmann equation much more challenging <cit.>.§.§ The quantum BGK modelSimilarly to the classical kinetic theory, a BGK-type model <cit.> is introduced in the quantum case to facilitate the study in the near continuous fluid regime. This simplified model approximates the complex quantum collision model (<ref>) with the relaxation form as follows:

Q_qBGK[f](v) = M_q - f,

which is referred to as the quantum BGK model. Substituting Q[f] = Q_qBGK[f] into (<ref>) yields the quantum BGK equation. In (<ref>), M_q represents the local equilibrium, also known as the quantum Maxwellian:

M_q(t, x, v) ≜ (1/|θ_0|) · 1/[(z|θ_0|)^{-1} exp(|v - u|^2/(2T)) + sign(θ_0)] = 1/[z^{-1} exp(|v - u|^2/(2T)) + θ_0],

where z|θ_0| > 0 represents the fugacity, and T > 0 is the temperature. Determining z and T will be discussed later in (<ref>). M_q also satisfies Q_q[M_q] = 0. For the Bose gas (θ_0 < 0), ensuring the non-negativity of M_q in (<ref>) requires: z θ_0 ∈ [-1, 0).In particular, when z θ_0 = -1, Bose-Einstein condensation occurs, and the steady state differs from (<ref>), taking the form <cit.>

M_q(t, x, v) = m_0 δ(v - u) + (1/|θ_0|) · 1/[exp(|v - u|^2/(2T)) - 1],

where m_0 is the critical mass, and δ(·) is the Dirac delta function. For the Fermi gas (θ_0 > 0), no additional constraint on z is required to obtain a quantum Maxwellian. If θ_0 = 0, M_q reduces to the classical Maxwellian with macroscopic velocity u and temperature T:

M_c^{u,T}(v) = ρ/(2πT)^{D/2} exp(-|v - u|^2/(2T)).

When |θ_0| is small, M_q is close to M_c^{u,T}, and the quantum BGK model resembles the classical BGK model. For large |θ_0|, M_q behaves quite differently from M_c^{u,T}, and the quantum effect becomes significant. These phenomena are illustrated in <cit.>.There are several important properties of the collision operator. Firstly, it conserves the total mass, momentum, and energy as <cit.>

∫_ℝ^D Q[f](v) (1, v, |v|^2)^⊤ dv = 0,    Q[f](v) = Q_q[f](v), Q_qBGK[f](v).

Moreover, letting φ(v) = ln( f(v)/(1 - θ_0 f(v)) ), one can derive the H-theorem of the quantum Boltzmann equation as

∫_ℝ^D Q[f](v) φ(v) dv ⩽ 0,    Q[f](v) = Q_q[f](v), Q_qBGK[f](v),

and this equality holds if and only if f attains the quantum Maxwellian <cit.>.
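To fix ideas, the following is a minimal NumPy sketch (our own code and naming, not part of the original exposition) of evaluating the quantum Maxwellian M_q pointwise and recovering the density ρ by brute-force quadrature on a truncated velocity grid, assuming D = 3:

    import numpy as np

    def quantum_maxwellian(v, u, T, z, theta0):
        # M_q(v) = 1 / (z^{-1} exp(|v-u|^2/(2T)) + theta0);
        # for a Bose gas (theta0 < 0) positivity requires z*|theta0| <= 1
        e = np.sum((v - u) ** 2, axis=-1) / (2.0 * T)
        return 1.0 / (np.exp(e) / z + theta0)

    # crude sanity check: rho = \int M_q dv (Fermi gas, z = 0.2, theta0 = 1)
    vs = np.linspace(-8.0, 8.0, 64)
    V = np.stack(np.meshgrid(vs, vs, vs, indexing="ij"), axis=-1)
    dv = (vs[1] - vs[0]) ** 3
    rho = quantum_maxwellian(V, np.zeros(3), 1.0, 0.2, 1.0).sum() * dv

The same routine, with θ_0 < 0 and z|θ_0| ⩽ 1, covers the Bose branch away from condensation.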
In (<ref>), the parameters z and T can be obtained through the nonlinear system

∫_ℝ^D M_q(t, x, v) dv = ∫_ℝ^D f(t, x, v) dv = ρ,    ∫_ℝ^D |v - u|^2 M_q(t, x, v) dv = ∫_ℝ^D |v - u|^2 f(t, x, v) dv = 2ρ e_0.

For the degenerate Bose-Einstein case (<ref>), the values of m_0 and T can be computed as

T = 2ζ(D/2) e_0 / (D ζ((D+2)/2)),    m_0 = ρ - (2πT)^{D/2} ζ(D/2)/|θ_0|,

where ζ(s) := Li_s(1) represents the Riemann zeta function. Furthermore, for the Bose-Einstein condensation steady state (<ref>), the conservation of macroscopic variables (<ref>) and the H-theorem (<ref>) persist.In the following, we will propose a numerical scheme for the quantum BGK equation. Specifically, an Implicit-Explicit (IMEX) method will be introduced within the framework of the Hermite spectral method. Additionally, an efficient algorithm will be presented for the calculation of polylogarithms, which is crucial for deriving the expansion coefficients of the distribution function and solving the nonlinear system (<ref>).§ HERMITE SPECTRAL METHOD FOR THE QUANTUM BGK EQUATIONThis section introduces the Hermite spectral method for the quantum BGK equation. We begin by discussing the approximation of the distribution function and deriving the moment system in Sec. <ref>. Subsequently, the numerical scheme with complete discretization is presented in Sec. <ref>. §.§ Series expansion of the distribution function and the moment system To seek a polynomial spectral method for solving the quantum BGK equation, a natural approach is to consider M_q as the weight function. However, as shown in <cit.>, the orthogonal polynomials with respect to M_q are quite complicated. It is observed that when |θ_0| is small, the classical Maxwellian M_c^{u,T} serves as a good approximation to M_q. Therefore, the classical Maxwellian M_c^{u̅,θ̅} defined in (<ref>) is chosen as the weight function, and the resulting orthogonal polynomials are the Hermite polynomials. These polynomials are defined as follows: For α = (α_1, α_2, ⋯) ∈ ℕ^D, with u̅ ∈ ℝ^D and θ̅ ∈ ℝ^+, the D-dimensional Hermite polynomial H_α^{u̅,θ̅}(v) is defined as

H_α^{u̅,θ̅}(v) = (-1)^{|α|} θ̅^{|α|/2} / M_c^{u̅,θ̅}(v) · ∂^{|α|}/∂v^α M_c^{u̅,θ̅}(v),

with |α| = ∑_{d=1}^D α_d, ∂v^α = ∏_{d=1}^D ∂v_d^{α_d} and M_c^{u̅,θ̅}(v) defined in (<ref>). Here, [u̅, θ̅] is the expansion center, typically determined by a rough average over the entire spatial space.Then the distribution function f can be expanded as

f(t, x, v) = ∑_{α∈ℕ^D} f_α(t, x) ψ_α(v),

where ψ_α(v) = H_α^{u̅,θ̅}(v) M_c^{u̅,θ̅}(v) are the Hermite basis functions. By truncating the expansion in (<ref>), a finite approximation to the distribution function is obtained:

f(t, x, v) ≈ f_M(t, x, v) ≜ ∑_{|α|⩽M} f_α(t, x) ψ_α(v),

where M is the expansion order. Similarly, the quantum Maxwellian (<ref>) is approximated as

M_q(t, x, v) ≈ ∑_{|α|⩽M} M_{q,α}(t, x) ψ_α(v).

With the orthogonality of the Hermite polynomials

∫_ℝ^D H_α^{u̅,θ̅}(v) H_β^{u̅,θ̅}(v) M_c^{u̅,θ̅}(v) dv = ∏_{d=1}^D α_d! δ_{α_d,β_d},

the expansion coefficients f_α(t, x) and M_{q,α}(t, x) are calculated as

f_α(t, x) = (1/α!) ∫_ℝ^D f(t, x, v) H_α^{u̅,θ̅}(v) dv,    M_{q,α}(t, x) = (1/α!) ∫_ℝ^D M_q(t, x, v) H_α^{u̅,θ̅}(v) dv.

With the Hermite expansion, the macroscopic variables defined in (<ref>) and (<ref>) can be expressed using the expansion coefficients as

ρ = f_0,    u_k = u̅_k + (√θ̅/ρ) f_{e_k},    e_0 = (θ̅/ρ) ∑_{k=1}^D f_{2e_k} + Dθ̅/2 - (1/2)|u - u̅|^2,

p_kl = (1 + δ_kl) θ̅ f_{e_k+e_l} + δ_kl ρ(θ̅ - θ) - ρ(u_k - u̅_k)(u_l - u̅_l),    (with θ := 2e_0/D),

q_k = 2θ̅^{3/2} f_{3e_k} + θ̅(u̅_k - u_k) f_{2e_k} + |u - u̅|^2 √θ̅ f_{e_k} + ∑_{d=1}^D [θ̅^{3/2} f_{2e_d+e_k} + θ̅(u̅_d - u_d) f_{e_d+e_k} + θ̅(u̅_k - u_k) f_{2e_d}],    k, l = 1, 2, ⋯, D.

Here, e_d represents the unit vector. For example, when D = 3, e_1=(1,0,0), e_2=(0,1,0), e_3=(0,0,1).
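As a one-dimensional illustration of the coefficient formula above with expansion centre [u̅, θ̅] = [0, 1] - a sketch of our own, exploiting the fact that the definition then reduces to the probabilists' Hermite polynomials He_n - the coefficients f_n = (1/n!) ∫ f(v) He_n(v) dv can be evaluated with a Gauss-Hermite rule:

    import numpy as np
    from math import factorial
    from numpy.polynomial import hermite_e as He

    def hermite_coefficients(f, n_max, n_quad=64):
        # nodes/weights of the Gauss rule for the weight exp(-v^2/2);
        # assumes f decays at least like a Gaussian at the node tails
        x, w = He.hermegauss(n_quad)
        fx = f(x) * np.exp(x ** 2 / 2.0)   # compensate the built-in weight
        coeffs = []
        for n in range(n_max + 1):
            Hn = He.hermeval(x, np.eye(n_max + 1)[n])   # He_n at the nodes
            coeffs.append(np.sum(w * fx * Hn) / factorial(n))
        return np.array(coeffs)

    # a Maxwellian with rho = 1, u = 0, T = 1 has f_0 = 1 and f_n = 0 otherwise
    f = lambda v: np.exp(-v ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    print(hermite_coefficients(f, 4).round(12))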
Additionally, by substituting the Hermite expansion of f and M_q into (<ref>), we can obtain the moment system as

∂f_α/∂t + ∑_{d=1}^D ∂/∂x_d( (α_d+1)√θ̅ f_{α+e_d} + u̅_d f_α + √θ̅ f_{α-e_d} ) = (1/ϵ) Q_α,    |α| ⩽ M,

where the collision term Q_α arises from the quantum BGK model (<ref>) and is expressed as

Q_α = M_{q,α} - f_α.

Besides, the convection term is simplified with the recurrence relationship of Hermite polynomials as

v_d H_α^{u̅,θ̅} = √θ̅ H_{α+e_d}^{u̅,θ̅} + u̅_d H_α^{u̅,θ̅} + α_d √θ̅ H_{α-e_d}^{u̅,θ̅}.

The system (<ref>) is closed with the constraint

f_{α+e_d} = 0,    |α| = M.

Let F = (f_0, f_{e_1}, f_{e_2}, f_{e_3}, ⋯) represent the vector of expansion coefficients of the distribution function f. (<ref>) can be expressed in matrix form as

∂F/∂t + ∑_{d=1}^D 𝐀_d ∂F/∂x_d = (1/ϵ) Q,    Q = M_q - F,

where Q = (Q_0, Q_{e_1}, Q_{e_2}, Q_{e_3}, ⋯) and M_q = (M_{q,0}, M_{q,e_1}, M_{q,e_2}, ⋯). 𝐀_d is a matrix whose entries are decided by the convection coefficients in (<ref>).The Hermite spectral method has been successfully employed to solve the classical Boltzmann equation <cit.>, and has been extended to the plasma kinetic models <cit.>. Following similar procedures, we will complete the full discretization of the moment system in Sec. <ref>.§.§ Temporal and spatial discretization In this section, we focus on the numerical scheme to discretize the moment system (<ref>) of the quantum BGK equation. We start with the temporal discretization, employing the implicit-explicit (IMEX) scheme to handle the stiff collision term.Temporal discretization Assuming the numerical solution at time step t^n is F^n, then the temporal discretization for the first-order IMEX scheme takes the form

(F^{n+1} - F^n)/Δt + ∑_{d=1}^D 𝐀_d ∂F^n/∂x_d = (1/ϵ) Q^{n+1}.

In the simulation, (<ref>) is split into* convection step: (F^{n+1,∗} - F^n)/Δt + ∑_{d=1}^D 𝐀_d ∂F^n/∂x_d = 0, * collision step: (F^{n+1} - F^{n+1,∗})/Δt = (1/ϵ) Q^{n+1},    Q^{n+1} = M_q^{n+1} - F^{n+1}. Since the collision conserves the total mass, momentum, and energy (<ref>), it can be derived that M_q^{n+1} = M_q^{n+1,∗}. Therefore, the convection step is first solved, and then M_q^{n+1} is obtained based on F^{n+1,∗}. Finally, the collision step is solved with the computational cost of an explicit scheme. This first-order IMEX scheme can be easily extended into the high-order scheme, and we only present the second-order scheme below:

(F^{n+1/2} - F^n)/(Δt/2) + ∑_{d=1}^D 𝐀_d ∂F^n/∂x_d = (1/ϵ) Q^{n+1/2},
(F^{n+1} - F^n)/Δt + ∑_{d=1}^D 𝐀_d ∂F^{n+1/2}/∂x_d = (1/(2ϵ)) (Q^{n+1} + Q^n).

The same splitting method can also be applied to this second-order scheme. For more high-order IMEX schemes, we refer the readers to <cit.>. The time step length is chosen to satisfy the CFL condition

CFL ≜ (Δt/Δx) max_d λ(𝐀_d) < 1,

where λ(𝐀_d) represents the spectral radius (i.e. the maximum absolute value of all the eigenvalues) of matrix 𝐀_d. For further discussions about the eigenvalues of 𝐀_d, we refer the readers to <cit.>. Spatial discretization For spatial discretization, the finite volume method is adopted for the moment system (<ref>). Let the spatial domain Ω⊂ℝ^D be discretized by a uniform grid with cell size (Δx_1, Δx_2, ⋯) ∈ ℝ^D and cell centers x_k = (x_{k_1}, x_{k_2}, ⋯) ∈ ℝ^D. Denoting F_k^n as the approximation of the average of F over the k-th grid cell at time t^n, the finite volume method for the convection step has the form

F_k^{n+1,∗} = F_k^n - ∑_{d=1}^D (Δt/Δx_d)(𝐆_{k+1/2 e_d}^n - 𝐆_{k-1/2 e_d}^n),

where 𝐆_{k+1/2 e_d}^n is the numerical flux computed by the HLL scheme <cit.> with spatial reconstruction utilized. Detailed expressions can be found in <cit.>. With this spatial discretization, the numerical scheme to solve the collision step is given by

(F_k^{n+1} - F_k^{n+1,∗})/Δt = (1/ϵ)( M_{q,k}^{n+1} - F_k^{n+1}).
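Note that the implicit collision step is explicit in practice: the right-hand side is linear in F^{n+1}, and M_q^{n+1} is already known from the conserved moments of F^{n+1,∗}, so the update admits a closed form. A one-line sketch in our own notation:

    import numpy as np

    def imex1_collision_step(f_star, m_q, dt, eps):
        # solves (f_new - f_star)/dt = (m_q - f_new)/eps exactly;
        # as eps -> 0 the update relaxes onto m_q, which is the AP limit
        return (eps * f_star + dt * m_q) / (eps + dt)

The same closed form applies componentwise to the coefficient vectors F_k, which is what keeps the scheme uniformly stable in ϵ.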
So far, we have presented the complete discretization of the quantum BGK equation, and the entire algorithm is outlined in Alg. <ref>. However, it is important to note that compared with the classical case <cit.>, obtaining the expansion coefficients of the quantum Maxwellian in Step <ref> poses greater challenges. Additionally, one has to obtain the parameter z and temperature T through the nonlinear system (<ref>) in Step <ref>. These two problems will be addressed in the following Sec. <ref>. § EXPANSION OF THE QUANTUM MAXWELLIAN In this section, we will delve into the strategies for accomplishing Steps <ref> and <ref> in Alg. <ref>. Specifically, the algorithm for obtaining the expansion coefficients is presented in Sec. <ref>, and the approach to solving the nonlinear system (<ref>) is discussed in Sec. <ref>.§.§ Algorithm to obtain M_q To compute the expansion coefficients M_{q,α}(t, x) in (<ref>), we begin with the exact expansion of Hermite polynomials. From the definition of the one-dimensional Hermite polynomials, when [u̅, θ̅] = [0, 1], it takes the form below:

H_n(x) = (-1)^n exp(x^2/2) d^n/dx^n exp(-x^2/2),

which can be precisely expressed as

H_{2n}(x) = ∑_{k=0}^n (2n-1)!!/(2n-2k-1)!! (-1)^k C_n^k x^{2n-2k},
H_{2n-1}(x) = ∑_{k=0}^{n-1} (2n-1)!!/(2n-2k-1)!! (-1)^k C_{n-1}^k x^{2n-2k-1},

where the combination number C_n^k is defined as C_n^k = n!/(k!(n-k)!). With the transitivity

H_α^{u̅,θ̅}(v) = H_α^{0,1}( (v - u̅)/√θ̅ ),

it holds that

H_α^{u̅,θ̅}(v) = ∑_{β∈ℕ^D: β_i⩽α_i} C^{[u̅,θ̅]}(α,β) v^β,

where C^{[u̅,θ̅]}(α,β) are constants that can be directly calculated from [u̅, θ̅]. Therefore, to obtain the expansion coefficients M_{q,α}, we only need to compute the coefficients

I_α = ∫_ℝ^D M_q(v) v^α dv,    v^α = ∏_{d=1}^D v_d^{α_d},    |α| ⩽ M.

Then M_{q,α} is calculated as

M_{q,α} = ∑_{β∈ℕ^D: β_i⩽α_i} D^{[u̅,θ̅]}(α,β) I_β,    |α| ⩽ M,

where the constants D^{[u̅,θ̅]}(α,β) absorb the factorial normalization of (<ref>). Without loss of generality, we assume the macroscopic velocity u = 0. In this case, M_q is an even function of v, and it holds that

∫_ℝ^D M_q(v) v^α dv = 0,    if any entry of α is odd.

When all entries of α are even, the expression of I_α is given by

I_α = ∫_ℝ^D M_q(v) v^α dv = ∫_ℝ^D z exp(-|v|^2/(2T)) v^α / (1 + zθ_0 exp(-|v|^2/(2T))) dv.

When |z θ_0| < 1 and θ_0 ≠ 0, it follows that

1/(1 + zθ_0 exp(-|v|^2/(2T))) = ∑_{n=0}^{+∞} [-zθ_0 exp(-|v|^2/(2T))]^n.

By substituting (<ref>) into (<ref>), the expression of I_α becomes

I_α = -(Γ((α+1)/2)/θ_0) (2T)^S ∑_{n=1}^{+∞} (-zθ_0)^n/n^S,    S = (|α|+D)/2,

where Γ((α+1)/2) = ∏_{l=1}^D Γ((α_l+1)/2) and Γ(·) denotes the Gamma function. To further simplify (<ref>), we introduce the polylogarithm function as follows <cit.>: The polylogarithm function Li_s(y) is defined by a power series of y, which is also a Dirichlet series of s:

Li_s(y) = ∑_{k=1}^∞ y^k/k^s,    |y| < 1.

(<ref>) is valid for arbitrary complex order s and for all complex variables y with |y| < 1. It can be extended to |y| ⩾ 1 through analytic continuation.Substituting (<ref>) into (<ref>), we have

I_α = -(Γ((α+1)/2)/θ_0) (2T)^{(|α|+D)/2} Li_{(|α|+D)/2}(-zθ_0),    zθ_0 ∈ [-1, 0) ∪ (0, +∞).

When θ_0 = 0, it is reduced into the classical case, and one can derive that

lim_{θ_0→0} Li_s(-zθ_0)/θ_0 = -z,    ∀ s > 0.

In this case, (<ref>) still holds, and I_α is calculated as

I_α = Γ((α+1)/2) (2T)^{(|α|+D)/2} z,    z = ρ/(2π T)^{D/2}.

From Alg. <ref>, it can be observed that I_α needs to be computed at each spatial position in each time step. Thus, an efficient algorithm is required to calculate the polylogarithm Li_s(y).Calculations of the polylogarithm Several algorithms have been proposed to evaluate the polylogarithm Li_s(y). While the function polylog in MATLAB can be used for this purpose, the low efficiency restricts its applications in large-scale numerical simulations.
Some efforts have been made to numerically compute polylogarithms for integer s <cit.>, but they are inadequate for simulations involving quantum kinetic problems.In fact, for the Bose and Fermi gas, based on (<ref>) and (<ref>), the domain for s and y is given by

s = (2n+D)/2,    n ∈ ℕ,    y ∈ (-∞, 1],

and a method to compute Li_s(y) in this region would meet our demands. Inspired by the derivation of (<ref>), we transform the polylogarithm into a one-dimensional integral, expressed as

∫_ℝ |x|^n/(exp(x^2/2) - y) dx = (1/y) ∫_ℝ ∑_{k=1}^∞ (y exp(-x^2/2))^k |x|^n dx = 2^{(n+1)/2} Γ((n+1)/2) Li_{(n+1)/2}(y)/y,    y ∈ (-1, 0)∪(0, 1).

By employing the analytical continuation with respect to y, the polylogarithm can be computed as

Li_s(y) = (2^{-s} y/Γ(s)) ∫_ℝ x^{2n+2}/(exp(x^2/2) - y) dx,    s = (2n+D)/2,    n ∈ ℕ,    y ∈ (-∞, 1],

and the integral on the right-hand side can be approximated using a Gauss-type quadrature. The rescaled Gauss-Hermite quadrature

∫_ℝ g(x) exp(-x^2/β) dx = √β ∫_ℝ g(√β x) exp(-x^2) dx ≈ √β ∑_{k=1}^{N_int} ω_k g(√β x_k)

is adopted here to evaluate this integral. Here, N_int represents the number of integral points, x_k are the roots for the Hermite polynomial of degree (N_int+1), and ω_k are the integral weights. For further details on the Gauss-Hermite quadrature, readers may refer to <cit.>.To enhance the efficiency of this integral approximation, we treat the scaling factor β as a function of y. The integral in (<ref>) is then approximated by

∫_ℝ |x|^{2n+2}/(exp(x^2/2) - y) dx = ∫_ℝ exp(-x^2/β(y)) G(x, y) dx = √(β(y)) ∑_{k=1}^{N_int} ω_k G(√(β(y)) x_k, y),

where

G(x, y) = exp(x^2/β(y)) |x|^{2n+2}/(exp(x^2/2) - y),    n ∈ ℕ,    y ∈ (-∞, 0) ∪ (0, 1].

The integral most commonly used to analyze Li_s(y) has the form

Li_s(y) = (y/Γ(s)) ∫_0^∞ x^{s-1}/(exp(x) - y) dx,    y ∈ (-∞, 1].

For the common case D = 3, the numerator is a function of x with a half-integer index. However, the Gauss-type quadrature will be more accurate when the integrand behaves closely to a polynomial <cit.>. Therefore, we opt for the integral (<ref>) over (<ref>) to compute the polylogarithm. Additionally, the choice of the scaling factor β(y) in (<ref>) aims to make the integrand G(x, y) more polynomial-like. In the numerical experiments, β(y) is empirically chosen as

β(y) = 2 - 1.8y for y ∈ (0, 1],    β(y) = 1 + exp(y) for y ∈ (-∞, 0).

It is always challenging to accurately calculate x_k and ω_k when N_int is too large, so we set it as N_int = 70 in all subsequent tests. In App. <ref>, this method is compared with the MATLAB algorithm polylog, which verifies its excellent efficiency and accuracy. This algorithm will also be employed to solve the nonlinear system in Sec. <ref>.
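Since both branches of the analysis in the next subsection evaluate Li_s repeatedly, we pause to give a minimal sketch of the quadrature rule just described. The code is our own, with β(y) and N_int = 70 as chosen empirically above, and it assumes D = 3:

    import numpy as np
    from math import gamma
    from numpy.polynomial.hermite import hermgauss

    X, W = hermgauss(70)          # physicists' Gauss-Hermite rule, N_int = 70

    def beta(y):
        return 2.0 - 1.8 * y if y > 0 else 1.0 + np.exp(y)

    def polylog(s, y):
        # Li_s(y) via the rescaled quadrature, for s = n + 3/2, y in (-inf, 1]
        n = int(round(s - 1.5))
        b = beta(y)
        x = np.sqrt(b) * X
        G = np.exp(x ** 2 / b) / (np.exp(x ** 2 / 2.0) - y) * x ** (2 * n + 2)
        return 2.0 ** (-s) * y / gamma(s) * np.sqrt(b) * np.sum(W * G)

    print(polylog(1.5, 1.0))   # ~ zeta(3/2) = 2.612..., error grows as y -> 1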
§.§ Solving the nonlinear system In this section, we present the method for obtaining z and T in the quantum Maxwellian M_q. Since ρ and e_0 can be easily derived from the distribution function f, z and T can be solved from the nonlinear system (<ref>). With the relationship (<ref>), the nonlinear system (<ref>) can be simplified as

ρ = -(1/θ_0)(2π)^{D/2} T^{D/2} Li_{D/2}(-zθ_0),    ρ e_0 = -(D/(2θ_0))(2π)^{D/2} T^{(D+2)/2} Li_{(D+2)/2}(-zθ_0).

Without loss of generality, we set D = 3 in this section, and the algorithm can be easily extended to other D. By eliminating T in (<ref>), the system is reduced to a nonlinear equation of z as

|Li_{3/2}(-zθ_0)|^{5/2} / |Li_{5/2}(-zθ_0)|^{3/2} = |θ_0| ρ (3/(4π e_0))^{3/2}.

We will first discuss the existence of the solution for the Bose-Einstein and Fermi-Dirac gas separately before introducing the algorithm to solve (<ref>). Bose-Einstein gas (θ_0 < 0) As stated in (<ref>), -zθ_0 is restricted in (0, 1] to ensure M_q is positive. Define

𝒜(y) = (Li_{3/2}(y))^{5/2}/(Li_{5/2}(y))^{3/2},    0 < y ⩽ 1.

Then (<ref>) is reduced into

𝒜(y) = |θ_0| ρ (3/(4π e_0))^{3/2},    y = -zθ_0,    0 < y ⩽ 1.

As observed in <cit.>, 𝒜(y) is continuous and non-decreasing on (0, 1]. When

|θ_0| ρ (3/(4π e_0))^{3/2} ⩾ 𝒜(1),

the Bose-Einstein condensation occurs. In this case, zθ_0 = -1, and the quantum Maxwellian is reduced into (<ref>) with the related parameters derived in (<ref>). Otherwise, a solution for y exists in (0, 1]. Fermi-Dirac gas (θ_0 > 0) Unlike the Bose-Einstein gas, z can be arbitrarily large in the Fermi-Dirac distribution. To distinguish from the Bose-Einstein gas, we define

ℱ(y) = (-Li_{3/2}(y))^{5/2}/(-Li_{5/2}(y))^{3/2},    y < 0,

and then (<ref>) is reduced into

ℱ(y) = |θ_0| ρ (3/(4π e_0))^{3/2},    y = -zθ_0,    y ∈ (-∞, 0).

As observed in <cit.>, ℱ(y) is continuous and non-increasing on (-∞, 0) and satisfies

lim_{y→-∞} ℱ(y) = (5/3)√(10/π).

The following lemma ensures that there exists a solution for y in (<ref>). Under the Pauli exclusion principle f ⩽ 1/θ_0, it holds that

θ_0 ρ (3/(4π e_0))^{3/2} ⩽ (5/3)√(10/π).

Without loss of generality, we assume the macroscopic velocity u = 0. Let R > 0 satisfy

(4π/3) R^3 = ρ θ_0,

and define the auxiliary function f_0 as

f_0(v) = (1/θ_0) χ_{|v|⩽R}(v) = { 1/θ_0, |v| ⩽ R;  0, |v| > R },

which is also known as the Fermi-Dirac saturation, and represents the critical state of the Fermi gas. It follows for f_0 that

∫_ℝ^3 f_0(v) dv = 4πR^3/(3θ_0) = ρ,    (1/2) ∫_ℝ^3 f_0(v) |v|^2 dv = 2πR^5/(5θ_0).

Using the relation of the macroscopic variables and the distribution function f as in (<ref>), we can derive that

ρ e_0 - 2πR^5/(5θ_0) = ρ e_0 - (1/2) ∫_ℝ^3 f_0(v)|v|^2 dv = (1/2) ∫_ℝ^3 (f(v) - f_0(v))|v|^2 dv ⩾ 0.

The proof is completed by combining (<ref>) and (<ref>).The derivative of the polylogarithm function can be expressed as

d/dy Li_s(y) = (1/y) Li_{s-1}(y).

Thus, if a solution exists in (<ref>), it can be numerically obtained through the Newton iteration method, with the stopping criterion set as

| |Li_{3/2}(y)|^{5/2}/|Li_{5/2}(y)|^{3/2} - |θ_0| ρ (3/(4π e_0))^{3/2} | ⩽ 10^{-12},

and the numerical solution at the last time step is utilized as the initial solution for the iteration. In practical numerical simulations, the iteration count remains quite low, typically less than 5 for most cases. To illustrate the efficiency of this Newton iteration in detail, an example is presented in App. <ref>. Once z is obtained, the temperature T can be directly solved from (<ref>), concluding the algorithm for this nonlinear system.
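For concreteness, a sketch of this Newton solve in our own notation, reusing the polylog routine sketched in Sec. <ref>; a production solver would additionally safeguard the iterate to (0, 1] for the Bose gas and (-∞, 0) for the Fermi gas:

    import numpy as np

    def solve_z_T(rho, e0, theta0, y0, tol=1e-12, max_iter=50):
        # Newton iteration for |Li_{3/2}(y)|^{5/2}/|Li_{5/2}(y)|^{3/2} = c,
        # with y = -z*theta0; choose y0 in (0,1) for Bose, y0 < 0 for Fermi
        c = abs(theta0) * rho * (3.0 / (4.0 * np.pi * e0)) ** 1.5
        y = y0
        for _ in range(max_iter):
            l12, l32, l52 = (polylog(s, y) for s in (0.5, 1.5, 2.5))
            G = abs(l32) ** 2.5 / abs(l52) ** 1.5
            if abs(G - c) <= tol:
                break
            # d|Li_s|^p/dy = p|Li_s|^p * Li_{s-1} / (y Li_s)
            dG = G * (2.5 * l12 / (y * l32) - 1.5 * l32 / (y * l52))
            y -= (G - c) / dG
        z = -y / theta0
        T = (-rho * theta0 / ((2.0 * np.pi) ** 1.5 * polylog(1.5, y))) ** (2.0 / 3.0)
        return z, T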
§ NUMERICAL EXPERIMENTS In this section, several numerical examples are conducted to validate this numerical scheme for the quantum BGK equation. First, the asymptotic-preserving (AP) property of this Hermite spectral method is tested with a periodic flow. Subsequently, we examine its performance in the spatially one-dimensional cases, including the Sod and mixing regime problems. Finally, a spatially two-dimensional lid-driven cavity flow problem is simulated to further validate the accuracy and efficiency of this Hermite spectral method.§.§ Test of the AP propertyIn this section, the AP property of this Hermite spectral method is assessed. The spatial and microscopic velocity dimensions are set to 1 and 3, respectively. The initial condition is the equilibrium with macroscopic variables given by

ρ = (2 + sin(2π x))/3,    u = 0,    T = (3 + sin(2π x))/4.

The computational domain is [0, 1], with periodic boundary conditions applied in the spatial space. The expansion order is set as M = 10, and the CFL number is set as CFL = 0.2. We perform simulations with grid numbers N = 16, 32, 64, 128, 256 and 512, respectively for Knudsen numbers ϵ = 1, 0.1, 0.01 and 10^-6, and the parameter θ_0 = ± 9. The first-order IMEX scheme (<ref>) is initially tested, with the expansion center set as the spatial average of the macroscopic velocity and temperature, i.e., [u̅, θ̅] = [0, 3/4], and no reconstruction is applied in the spatial space. Reference solutions are obtained by the second-order scheme (<ref>) with the WENO reconstruction, utilizing a grid size of N = 1024. The computation time is t = 0.1. The l_2 error of the density ρ, temperature T, and fugacity z|θ_0| between numerical solutions and reference solutions for θ_0 = ± 9 is presented in Fig. <ref>. The results indicate that for both Bose and Fermi gas with different Knudsen numbers, the error uniformly converges at first order, confirming the AP property of this first-order scheme (<ref>).Next, we investigate the AP property of the IMEX2 scheme (<ref>), utilizing the same computational parameters as the first-order scheme. The WENO reconstruction is utilized in the spatial space. The identical reference solutions as in the previous test are employed. The l_2 error of the density ρ, temperature T, and the fugacity z|θ_0| is displayed in Fig. <ref>, illustrating that the error uniformly converges at second order for both θ_0 and different Knudsen numbers. This demonstrates the AP property of the IMEX2 scheme (<ref>). §.§ Sod problem In this section, we examine the spatially 1D and microscopically 2D quantum Sod problem, which has also been investigated in previous studies <cit.>. The identical initial condition as in <cit.> is used here:

ρ_l = 1,    u_l = 0,    T_l = 1,    0 < x < 0.5;
ρ_r = 0.125,    u_r = 0,    T_r = 0.25,    0.5 ⩽ x < 1.

Given the discontinuity in the initial condition, this problem poses a significant challenge in numerical computations.First, we set the Knudsen number as ϵ = 0.01, and θ_0 = ± 9, -0.01, which are the same parameters as in <cit.>. The computational region is [0, 1]. The IMEX2 scheme (<ref>) with the linear reconstruction is employed. To handle the discontinuity in the initial condition, the Minmod limiter is applied in the reconstruction. The CFL number in (<ref>) is set to CFL = 0.3, and the mesh size in the spatial space is chosen as N = 256. The expansion order and center are selected as M = 15 and [u̅, θ̅] = [0, 1], respectively. Numerical solutions for the density ρ, the internal energy e_0 and the fugacity z|θ_0| at t = 0.2 by this Hermite spectral method (HSM) are provided in Fig. <ref>, where the reference solutions are obtained from solving the full quantum Boltzmann equation (<ref>) with the Exp-RK2 method in <cit.>. It is evident that the numerical solutions closely match the reference solutions, indicating that the quantum BGK model (<ref>) serves as a good approximation of the full quantum collision operator (<ref>). Notably, this approximation allows for significantly reduced computational costs. To further validate the computational capability of this Hermite spectral method, we increase the Knudsen number to ϵ = 1, which introduces greater challenges associated with rarefaction. The computational domain is extended to [-0.5, 1.5]. This simulation uses the same IMEX2 scheme (<ref>) and linear reconstruction as the previous test. We increase the expansion order to M = 30, modify the CFL number to CFL = 0.2, and retain the expansion center at [u̅, θ̅] = [0, 1]. The mesh size remains as N = 256. Numerical solutions of the density ρ, the internal energy e_0 and the fugacity z|θ_0| for θ_0 = ± 9 and -0.01 at t = 0.2 are displayed in Fig. <ref>, along with the reference solutions obtained from the discrete velocity method (DVM).
The excellent agreement between the numerical solutions and the reference solutions verifies the capability of this Hermite spectral method to accurately describe rarefied scenarios.§.§ Mixing regime problem In this section, we address the spatially 1D and microscopically 3D problem with the mixing regime. Similar simulations have also been tested in <cit.>. The initial condition

ρ = 1 + (1/2)sin(2π x),    u = 0,    T = 1 + (1/4)sin(2π x)

is utilized in this test. The Knudsen number, varying across kinetic and fluid regimes, is expressed as

ϵ(x) = ϵ_0 + 0.005(exp(3x) - 1),    ϵ_0 = 0.001,

with the profile depicted in Fig. <ref>. In the simulation, periodic boundary conditions are employed, with the CFL number set to CFL = 0.3. The expansion order and the grid size are chosen to be M = 10 and N = 256, respectively. The IMEX2 scheme (<ref>) is adopted with the WENO reconstruction. Besides, the expansion center is set to [u̅, θ̅] = [0, 1].For θ_0 = ± 4, numerical solutions for the density ρ, macroscopic velocity in the x-direction u_1, internal energy e_0, fugacity z|θ_0|, shear stress in the x-direction p_11, and heat flux in the x-direction q_1 at t = 0.1 are displayed in Fig. <ref>. Reference solutions, computed using DVM, are also included for comparison. The numerical solutions exhibit good agreement with the reference solutions for both Bose and Fermi gases. Notably, due to the non-periodic nature of ϵ(x) in the spatial space, oscillations arise near the boundary in the numerical solutions, as observed in Fig. <ref>, particularly for fugacity z|θ_0| and heat flux q_1. These oscillations are also well-captured by this Hermite spectral method. To compare the efficiency of this Hermite spectral method (HSM) with DVM, the running time of these two methods is summarized in Tab. <ref>. All simulations are conducted on the CPU model Intel Xeon E5-2697A V4 @ 2.6GHz with 8 threads. For the DVM, the velocity space is discretized within [-10, 10] by 80 points in the x-direction and within [-5, 5] by 20 points in the other two directions. Temporal and spatial discretizations in the DVM are kept consistent with the Hermite spectral method. Tab. <ref> reveals that for both θ_0, the computational cost of HSM is significantly lower than that of DVM, which demonstrates the high efficiency of this Hermite spectral method. To investigate how the parameter θ_0 influences the numerical solutions, we compute results for θ_0 = -10, -4, -1, 0, 1, 4, 10, respectively, while keeping other parameters consistent with the previous test. The numerical solutions of the macroscopic variables are illustrated in Fig. <ref>. It can be observed that for different θ_0, the numerical solutions behave differently. When |θ_0| becomes larger, the distinction between numerical solutions and the classical case (i.e. θ_0 = 0) becomes more evident. Furthermore, it can be inferred that the numerical solutions exhibit continuous variation with θ_0.§.§ 2D lid-driven cavity flow In this section, the lid-driven cavity flow in a spatially 2D and microscopically 3D setting is considered. The classical case of this problem has been widely studied, as seen in works like <cit.>. In this scenario, the quantum gas is confined in a square cavity, where the top lid moves to the right at a constant speed, while all the other three walls remain stationary. All walls maintain the same temperature. Over an extended period, the gas reaches a steady state, which is the condition of particular interest.
Due to the high dimensionality of this problem, capturing the steady state results in a considerable computational challenge.For the classical problem, the Maxwell boundary condition <cit.> is adopted. A similar boundary condition is utilized for the quantum gas. Specifically, assuming the velocity and temperature of the boundary wall are u^w and T^w, respectively, the wall boundary condition at x_0 is given by

f^w(t, x_0, v) = { M_q^w(t, x_0, v) ≜ 1/[(z^w)^{-1} exp(|v - u^w|^2/(2T^w)) + θ_0],    (v - u^w)·n_0 ⩽ 0;
                   f(t, x_0, v),    (v - u^w)·n_0 > 0, }

where n_0 represents the outer unit normal vector at x_0. Here, z^w is determined such that the normal mass flux on the boundary is zero:

∫_ℝ^3 [(v - u^w)·n_0] f^w(t, x_0, v) dv = 0.

However, when θ_0 < 0, there are cases where no z^w ∈ [0, -1/θ_0] satisfies (<ref>). In such instances, the condition (<ref>) is not met. Under these circumstances, the boundary condition is adjusted as

f^w(t, x_0, v) = C^w M_q^w(t, x_0, v),    z^w = -1/θ_0,    (v - u^w)·n_0 ⩽ 0,

where C^w is a positive constant determined by (<ref>). For detailed implementation of this boundary condition with the Hermite spectral method, readers may refer to <cit.>.In the simulation, the velocity of the top lid is set to u^w = (0.5, 0, 0) with a uniform temperature T^w = 1.0 for all walls. A uniform grid mesh with N = N_x = N_y = 100 is employed for spatial discretization. The first-order scheme (<ref>) is adopted with linear reconstruction in the spatial space. The CFL number is set as CFL = 0.2, and the expansion center is chosen as [u̅, θ̅] = [0, 1]. Firstly, the case with ϵ = 0.1 is tested, employing an expansion order of M = 10. Numerical solutions of the fugacity z|θ_0|, temperature T, and shear stress p_12 for θ_0 = 4, 0.01, -4 are depicted in Fig. <ref>, illustrating distinct behaviors for different θ_0.To examine the convergence of the numerical solutions, results along x = 0.5 and y = 0.5 are displayed in Fig. <ref> and <ref>, respectively, for grid sizes N = 25, 50, 100, and 200. Both figures demonstrate that the numerical solutions for z|θ_0|, T, and p_12 converge with increasing grid numbers for all θ_0. This affirms the reliability of the numerical results.For further validation, the case with ϵ = 1.0 is investigated. The expansion order is increased to M = 15, while other settings are maintained from the test of ϵ = 0.1. The numerical solutions for ϵ = 1.0 are presented in Fig. <ref>, exhibiting trends similar to those for ϵ = 0.1. All the experiments are conducted on the CPU model Intel Xeon E5-2697A V4 @ 2.6GHz with 28 threads. The simulations take approximately 7 hours with M = 10 and around 30 hours with M = 15, both for a spatial mesh of N = 100. These results indicate that achieving a steady state simulation for this two-dimensional spatial and three-dimensional microscopic velocity problem is feasible with reasonable computational cost, which highlights the efficiency of the Hermite spectral method. § CONCLUSION We have proposed an asymptotic preserving IMEX Hermite spectral method for the quantum BGK equation. To enhance the overall efficiency of the numerical scheme, a fast algorithm for computing the polylogarithm is introduced to derive the expansion coefficients of the quantum Maxwellian. In the numerical experiments, the AP property has been successfully verified. Subsequently, the numerical scheme has been validated through simulations of the spatially 1D Sod and mixing regime problems.
Finally, this Hermite spectral method is applied to a spatially 2D lid-driven cavity flow problem, which further demonstrates its outstanding efficiency.§ ACKNOWLEDGMENTSThe work of Ruo Li is partially supported by the National Natural Science Foundation of China (Grant No. 12288101). This work of Yanli Wang is partially supported by the National Natural Science Foundation of China (Grant No. 12171026, U2230402, and 12031013), and Foundation of President of China Academy of Engineering Physics (YZJJZQ2022017). This research is supported by High-performance Computing Platform of Peking University.§ APPENDIX§.§ Comparison to polylog in MATLABTo validate the algorithm for calculating the polylogarithm, as proposed in Sec. <ref>, we compare the results with the MATLAB function polylog. The error e = |Li_{s,num}(y) - Li_{s,ref}(y)| is recorded for s = 1.5, 2.5, 3.5 and y ∈ [-10, 1] with an interval of 0.01. Here, Li_{s,num} corresponds to the result obtained using the method proposed in Sec. <ref>, and Li_{s,ref} is the reference result obtained using the MATLAB function polylog. Fig. <ref> shows that the error e is quite small when y is far from 1, which is close to the machine's precision. The error increases as y approaches 1 due to the singularity of the integral (<ref>). To illustrate the efficiency of this integral algorithm, we record the running time to obtain Li_{s,num} and Li_{s,ref} in Tab. <ref>. The results reveal that the integral algorithm is significantly faster compared to polylog, which demonstrates its high efficiency.§.§ Newton iteration method In this section, a simple experiment will be conducted to verify the efficiency of the Newton iteration method introduced in Sec. <ref> for obtaining z and T. We consider a spatially homogeneous problem with a source term, and the governing equation (<ref>) is reduced to

∂f/∂t = (M_q - f) + 𝒮(t),

where 𝒮(t) is the source term defined as

𝒮 = ρ_r(t)/(2π T_r(t))^{3/2} exp(-|v|^2/(2T_r(t))),

with ρ_r(t) and T_r(t) being random variables uniformly and independently distributed in the interval [0.2, 1.8] for any t. The initial condition is given by the summation of two equilibrium states as

f(0, v) = (1/2)( 1/[z^{-1} exp(|v - u|^2/(2T)) + θ_0] + 1/[z^{-1} exp(|v + u|^2/(2T)) + θ_0] ),

where T = 1, u = (1, 0, 0), and z is chosen such that ρ(0) = 1. The time step size is set as Δt = 0.001, which is similar in scale to the simulations in Sec. <ref>. We set θ_0 = ± 0.01 and ± 9, and the iteration counts for different θ_0 at each time step are shown in Fig. <ref> for the final time t = 0.1.It is evident that for all θ_0, the number of iterations is quite small. This validates the high efficiency of this Newton method in solving the nonlinear system (<ref>), regardless of whether the problem is in the near classical regime or the regime with a strong quantum effect.
http://arxiv.org/abs/2312.16585v1
{ "authors": [ "Ruo Li", "Yixiao Lu", "Yanli Wang" ], "categories": [ "math.NA", "cs.NA" ], "primary_category": "math.NA", "published": "20231227142125", "title": "A highly efficient asymptotic preserving IMEX method for the quantum BGK equation" }
We introduce a novel positional encoding strategy for Transformer-style models,addressing the shortcomings of existing, often ad hoc, approaches.Our framework provides a flexible mapping from the algebraic specification ofa domain to an interpretation as orthogonal operators.This design preserves the algebraic characteristics of the source domain,ensuring that the model upholds the desired structural properties.Our scheme can accommodate various structures, including sequences, grids and trees,as well as their compositions. We conduct a series of experiments to demonstrate the practical applicability of our approach. Results suggest performance on par with or surpassing the currentstate-of-the-art, without hyperparameter optimizations or “task search” of any kind. Code will be made available at <github.com/konstantinosKokos/UnitaryPE>. § INTRODUCTIONAttention-based models inheriting from the Transformer architecture <cit.> have become a ubiquitous model of neural computation. Benefiting from an excellent scaling behavior, they have largely supplanted the go-to models of the last decade, such as recurrent and convolutional neural networks, catalyzing a continuous stream of breakthroughs across diverse application domains. Their success is perhaps at odds with the Transformer's structural lenience – its key building block, dot-product attention, is by default unable to perceive and utilize the structural arrangement of the input/output tokens being processed. To sidestep this limitation, recent research has explored ways to endow the Transformer with appropriate inductive biases, either by directly modifying the attention function, or by adjusting the token representations to insinuate the structure being modeled. Nonetheless, most of the solutions proposed so far have been empirically motivated and/or tailored to specific tasks. This renders their theoretical evaluation challenging, and hinders any prospects of a unifying framework.In this study, we seek to address this challenge with a theory-first approach. We scrutinize some of the most commonly targeted data structures, and express them by means of their inductive definitions and algebraic properties. Leveraging this analysis, our modeling strategy invokes a homomorphic interpretation that maps each structure into attention-compatible vector operations. In the sequential context, our proposal streamlines the widely adopted rotary encodings of <cit.>, while at the same time offering clear theoretical insights on their success. More importantly, our approach naturally extends to non-sequential domains, such as κ-ary trees and multidimensional regular grids, paving the way for a simple and elegant methodology for interpretable and domain-general positional encodings. § BACKGROUNDAll transformer variants employ some variation of the multi-head scaled dot-product attention mechanism proposed by <cit.>. For each attention head, the dot-product attention between 𝐗 : ℝ^m × d and 𝐘 : ℝ^n× d is given by 𝐀(𝐗,𝐘) : ℝ^m× d, defined as:

softmax_(n)( score(f_q(𝐗), f_k(𝐘))/√(d)) f_v(𝐘).

Here, the functions f_q, f_k, f_v : ℝ^d→ℝ^d are point-wise applied (broadcasted) across all m and n rows of the matrices X and Y.
The matrix score(𝐐, 𝐊) : ℝ^m× n contains unnormalized agreement scores between the queries 𝐐 := f_q(𝐗) and the keys 𝐊 := f_k(𝐘), computed as their pairwise dot-product score(𝐐,𝐊) = 𝐐𝐊^⊤. Unmodified, dot-product attention is permutation equivariant with respect to 𝐗, and permutation invariant with respect to 𝐘. What this practically means is that 𝐀(p_x 𝐗,𝐘) evaluates the same as p_x 𝐀(𝐗, p_y 𝐘) for any permutations p_x : m and p_y : n.Unless one is dealing with orderless structures like multisets or fully connected graphs, this property is generally undesirable. The lack of structural biases is more evident in (1) data-scarce domains, where extensive pretraining is impossible, and (2) structure-rich domains, where a bag-of-tokens projection is too much of a simplification.To address this issue, we conduct an algebraic analysis of the domains most commonly explored in machine learning applications, and show how they can give rise to interpretation schemes that faithfully mirror the desired structural properties. With practical applicability still in mind, we propose a principled methodology that enacts a sensible and actionable meta-theory of positional encodings. We conduct an experimental evaluation in controlled settings that allow reproducible and statistically sound conclusions, providing initial but compelling evidence that our proposal presents a promising alternative to the current state of the art.§ THE ALGEBRA(S) OF POSITIONSOur objective is to establish a framework that offers general and extensible semantics for positions across various structures – what we commonly encounter in the literature as positional encodings. Most existing proposals adopt a rather parochial stance, relying on maneuvers or heuristics tailored to specific applications and driven, predominantly, by extensive empirical investigations. As such, they fall short with respect to accommodating or reflecting the properties of the underlying structure.In this work, we follow a different approach. We embrace Montague's perspective, succinctly paraphrased as:syntax is an algebra, semantics is an algebra, and meaning is a homomorphism between them <cit.> We begin by noting that “positions” do not exist in isolation, but only in the context of some underlying ambient structure. Given that, we contend that reasonable positional encodings (semantics) may only be reliably obtained by taking into account exactly this structure, its formation rules and properties (syntax), and then applying an appropriate interpretation (meaning). This is not merely an academic exercise: a careful syntactic specification is a prerequisite if we aim for semantics that adhere to certain properties, which is arguably preferable to stumbling upon these properties in arbitrary encoding schemes. §.§ Sequences§.§.§ SyntaxWe start from the simplest structure, which is incidentally also the most standard one: the sequence. In the context of a sequence, relative positions coincide with the integers ℤ, with positive (resp. negative) numbers denoting forward (resp. backward) offsets. Aiming at generalization, we set to arrive at this set (ℤ) from first principles.The key idea here is to inductively describe all possible paths between any two points.We start with two constants: the empty path (e), which relates any given point to itself, and the unit path (ι), which relates any point to its immediate next.We also need a way to compose paths into longer ones, which we can do with the aid of a binary operation (∘). This already suffices to specify all forward offsets.
In order to construct backward offsets, we need a unary operation (·)^{-1}, such that ρ^{-1} denotes the inverse of ρ. We can summarize the above by the grammar:

ρ ::= e | ι | ρ ∘ ρ | ρ^{-1}.

Furthermore, the operations must be coherent; that is, the effect of going forward a certain number of steps then backward the same number of steps must be the same as not moving at all.It turns out that the necessary laws are exactly those of a group:

ρ ∘ e = ρ = e ∘ ρ    (L1)
(ρ_1 ∘ ρ_2) ∘ ρ_3 = ρ_1 ∘ (ρ_2 ∘ ρ_3)    (L2)
ρ^{-1} ∘ ρ = e.    (L3)

The insight here is that paths in a sequence form a free group, generated by a single generator (ι) – the uniqueness of the generator exceptionally also makes the group abelian (i.e. commutative).Elementary algebra verifies that this group pertains, indeed, to the set of integers. This allows us to use the notational shorthand ι^p, where ι^p := ι ∘ ⋯ ∘ ι (p times) if p ⩾ 0, and ι^{-1} ∘ ⋯ ∘ ι^{-1} (-p times) if p < 0. §.§.§ Semantics The syntactic specifications of the previous section impose constraints on the candidate semantic targets. Among these candidates, we isolate and focus on ⟨𝐖⟩, the subgroup of the orthogonal group O(d) generated by a single orthogonal matrix 𝐖. This semantics is not only sound [It will also be complete except when 𝐖^p=𝐈 for some p. In practice, this kind of periodic behaviour does not arise during or after training, so we can think of ⟨𝐖⟩ as being isomorphic to ℤ.] with respect to the structure under scrutiny, but also a familiar object in machine learning literature <cit.>. Note that for ⟨𝐖⟩, the group axioms are obtained for free from the orthogonal group, and the additional requirement of commutativity is again satisfied by the uniqueness of the generator. [The story is no different for 𝐖 unitary, with the group structure provided by the unitary group U(d), and path inversion interpreted as the conjugate transpose.]To illustrate the correspondence between the two structures, we spell out the homomorphism ⟦·⟧, which maps paths to elements of ⟨𝐖⟩, and path operations to operations on orthogonal matrices of size d. For the primitives, we have ⟦e⟧ := 𝐈_d and ⟦ι⟧ := 𝐖. Path composition amounts to matrix multiplication, i.e. ⟦ρ_1 ∘ ρ_2⟧ := ⟦ρ_1⟧⟦ρ_2⟧, while path inversion effectuates matrix inversion, i.e. ⟦ρ^{-1}⟧ := ⟦ρ⟧^{-1} ≡ ⟦ρ⟧^⊤. The fact that orthogonal matrices form a group under multiplication is folklore; one can easily verify that the group laws hold also for the semantics. §.§.§ ImplementationIn practice, we have ι^p ↦ 𝐖^p : ⟨𝐖⟩; this induces a norm-preserving bilinear scalar function ℝ^d ×ℝ^d→ℝ which can be used to mediate the dot-product between a query q and a key k offset by relative position p. The representation of all paths up to length p can thus be implemented as a matrix collection [𝐖^0…𝐖^p], which can asymptotically be obtained using 𝒪(⌈log_2(p) ⌉) matrix product steps, and memory for storing 𝒪(pd^2) scalars. Transposed, the same matrices also serve to represent backwards paths [𝐖^-p…𝐖^0]. Storing all matrices of relative positions between queries 𝐐 : ℝ^m× d and keys 𝐊 : ℝ^n× d in a tensor 𝐑 : ℝ^m× n× d× d, we obtain a new formulation for the unnormalized attention scores score(𝐐,𝐊,𝐑)_mn = ∑_ij𝐐_mi𝐊_nj𝐑_mnij. Albeit transparent, this formulation is computationally unappealing. We can do better by noting that 𝐑_mnij = ∑_k 𝐏^Q_mik𝐏^K_njk, where 𝐏^Q and 𝐏^K denote, respectively, the absolute position encodings of the entries in 𝐐 and 𝐊 (see Figure <ref> for a visual example).In turn, this allows us to keep score(·,·) unchanged, except now plugging in 𝐐'_mj := 𝐐_mi𝐏^Q_mij and 𝐊'_nj := 𝐊_ni𝐏^K_nji – practically rotating/reflecting each entry of 𝐐 (resp. 𝐊) forward (resp. backward) according to its position. This version streamlines computation, decomposing the tensor contraction into two independent smaller contractions (matrix multiplications), leaving the memory complexity of score(·,·) intact.
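The equivalence between the relative form and the factored form can be checked directly. The following is a small self-contained NumPy sketch of our own (in a trained model, 𝐖 would be the exponential of a learned skew-symmetric matrix and its powers would be cached; here a random orthogonal matrix stands in, and we fix the convention score_ij = q_i^⊤ 𝐖^{j-i} k_j):

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, n = 8, 5, 7
    W, _ = np.linalg.qr(rng.normal(size=(d, d)))   # one orthogonal generator

    Q = rng.normal(size=(m, d))                    # queries at positions 0..m-1
    K = rng.normal(size=(n, d))                    # keys    at positions 0..n-1

    # (1) explicit relative form: score_ij = q_i^T W^(j-i) k_j
    direct = np.array([[Q[i] @ np.linalg.matrix_power(W, j - i) @ K[j]
                        for j in range(n)] for i in range(m)])

    # (2) factored form: rotate each token by its own absolute position,
    #     then take plain dot products -- same scores, two small matmuls
    Qr = np.stack([np.linalg.matrix_power(W, i) @ Q[i] for i in range(m)])
    Kr = np.stack([np.linalg.matrix_power(W, j) @ K[j] for j in range(n)])
    assert np.allclose(direct, Qr @ Kr.T)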
§.§ Trees§.§.§ SyntaxIn the previous section, we characterized the structure of relative paths on a sequence as the free group with one generator, and uncovered a (practically) isomorphic interpretation in the subgroup of orthogonal matrices generated by a single matrix. Upon closer inspection, we note that a sequence can be viewed as a special case of the more general structure of κ-ary branching trees, where the branching factor κ just happens to be 1. Denoting the more general case as ρ_κ, we must first extend the set of primitives to include all branching options, ι_1, ι_2, …, ι_κ. Each primitive now denotes a choice of branch (except for e, which is again the empty path). Paths now form a free group with κ distinct generators. The presence of multiple generators means that commutativity no longer holds; ι_1 ∘ ι_2 is distinct from ι_2 ∘ ι_1. The former prescribes a descent down branch ι_1 then branch ι_2, whereas the latter prescribes a descent down branch ι_2 then branch ι_1. Inversion is as before: for every path from each local root to some descendant down the line, there is also an inverse path from this very descendant up to its ancestor. Perhaps more interestingly, upwards and downwards paths can be joined, allowing the precise specification of relative distances between nodes beyond a single line of descent (i.e. nephews, aunts and all other sorts of distant relatives, see Figure <ref> for an example). Adjusting grammar (<ref>) accordingly, we have:

ρ_κ ::= e | ι_1 | ι_2 | … | ι_κ | ρ_κ ∘ ρ_κ | ρ_κ^{-1}

with laws <ref>, <ref> and <ref> still in effect.§.§.§ SemanticsThe interpretation follows along the same lines as before. This time around, however, we cannot make do with a single orthogonal matrix 𝐖 – we need a collection of κ matrices, one for each branch option. As a consequence, the semantic target is now ⟨𝐖_1, 𝐖_2, …, 𝐖_κ⟩. Note that the target is no longer commutative, in alignment with the source.§.§.§ ImplementationFor a tree structure of depth δ and branching factor κ, the number of unique absolute positions scales with κ^δ. Their representations can be computed in δκ steps of parallel matrix-matrix multiplications with a memory cost of κ^δ d^2 as follows. First, we can build up a collection of all unique paths, each represented as a (right-padded) word of length δ from the vocabulary of primitives. The corresponding representations constitute a tensor of size κ^δ × d × d, initialized as κ^δ identity matrices. We can then iterate across these words in parallel, one primitive per step (i.e. depth) t, selecting all words that take the same branching direction at the current depth, and right-multiplying their representations by the corresponding orthogonal generator.
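A minimal sketch of this construction, in our own naming; a sequential loop stands in for the parallel, depth-synchronous batching described above, and relative positions between arbitrary node pairs follow from inverting the ancestor path:

    import numpy as np

    rng = np.random.default_rng(0)
    d, kappa = 8, 2
    gens = [np.linalg.qr(rng.normal(size=(d, d)))[0] for _ in range(kappa)]

    def path_repr(path):
        # interpretation of a path: one orthogonal generator per branch choice
        R = np.eye(d)
        for branch in path:        # e.g. (0, 1) = left child, then right child
            R = R @ gens[branch]
        return R

    # relative position between two nodes: invert the ancestor path, descend
    anc, desc = (0,), (0, 1, 1)
    rel = path_repr(anc).T @ path_repr(desc)
    assert np.allclose(rel, gens[1] @ gens[1])   # what remains: right, right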
§.§ Grids The generalization from sequences to trees rests on the observation that a sequence is a tree with a deficit of choices. An altogether different axis of generalization can be obtained by recalling that the product of groups is also a group. Moreover, if it just so happens that the original groups were abelian, then so is their product (one speaks of a direct sum of groups). This construction provides access to an extension from sequences to multidimensional regular grids.For the sake of simplicity and without loss of generality, we consider a standard instance of a two-dimensional grid: an image. An image is a collection of pixels (or pixel patches) that inhabit a coordinate system (h, w), where each of h and w is the product of grammar (<ref>) (inheriting all path-related notions discussed earlier). Since ℤ is an abelian group, the coordinate system also constitutes an abelian group ℤ^2 := ℤ ⊕ ℤ. The new group and inversion operations are ∘_2 and (·)^{-1}_2, and denote the act of joining and inverting two-dimensional paths, respectively. Both are standardly defined component-wise, on the basis of their one-dimensional counterparts:

(x, y) ∘_2 (z, w) := (x ∘ z, y ∘ w)
(x, y)^{-1}_2 := (x^{-1}, y^{-1})

with e_2 := (e, e) as the new neutral element. Intuitively, ∘_2 corresponds to vector addition, and (·)^{-1}_2 to a reflection about the origin with respect to both axes.§.§.§ SemanticsThe specifications above allow us to reuse the notions from Section <ref> in order to interpret the components and operations of ℤ^2. What is left unspecified is the interpretation of the group elements themselves; that is, we have yet to explicate what an object of ⟦ℤ ⊕ ℤ⟧ looks like. The quest is a short one; the notion of a direct sum carries over to matrices, where it is defined as:

𝐀⊕𝐁 =[ 𝐀 0; 0 𝐁 ].

From this, we get the (rather straightforward) interpretation ⟦(ρ_1, ρ_2)⟧ ↦ ⟦ρ_1⟧ ⊕ ⟦ρ_2⟧.§.§.§ Implementation In practice, we now split the vector space in two independent parts. The first part is modulated by the orthogonal matrices from ⟨𝐇⟩, and the second part by the orthogonal matrices from ⟨𝐖⟩. For a query q and a key k that reside at a relative distance of (h, w), their attention score is computed as q^⊤(𝐇^h ⊕𝐖^w)k – see Figure <ref> for an illustration. Each axis contributes an additive but separable factor to the attention score, forcing the model to learn contextual alignments between token pairs on the basis of their axial offsets. Not much else is different: we can still compute all matrices in parallel, temporally bounded by a logarithmic complexity of log_2(max(h, w)) and a memory footprint of max(h,w)(d/2)^2, for a grid of size (h, w).
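The separable, additive contribution of the two axes is easy to verify numerically; a small sketch of our own, with d split evenly between the axes:

    import numpy as np

    def direct_sum(A, B):
        d1, d2 = A.shape[0], B.shape[0]
        out = np.zeros((d1 + d2, d1 + d2))
        out[:d1, :d1], out[d1:, d1:] = A, B
        return out

    rng = np.random.default_rng(0)
    d = 8
    H = np.linalg.qr(rng.normal(size=(d // 2, d // 2)))[0]  # vertical generator
    V = np.linalg.qr(rng.normal(size=(d // 2, d // 2)))[0]  # horizontal generator

    q, k = rng.normal(size=(2, d))
    h, w = 3, -2                                            # relative offset
    R = direct_sum(np.linalg.matrix_power(H, h), np.linalg.matrix_power(V, w))
    score = q @ R @ k
    # the two axes contribute independent, additive terms:
    split = q[:d//2] @ np.linalg.matrix_power(H, h) @ k[:d//2] \
          + q[d//2:] @ np.linalg.matrix_power(V, w) @ k[d//2:]
    assert np.allclose(score, split)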
§.§ Variants & Extensions The structures that we have seen so far are not the only ones that our methodology can tackle – in fact, many other group-like structures are amenable to similar interpretations. We sketch out some enticing examples in what follows. §.§.§ Absolute Positions Our analysis has so far been focused on relative position encodings, but nothing precludes the treatment of absolute positions. Our framework subsumes absolute positions, noting that absolute positions are in fact relative positions, except just in relation to a fixed point of origin.The simplified structure is that of a monoid, where only laws <ref> and <ref> are in effect. However, the rest of our exposition remains largely unaffected: one can still use subgroups of matrices to represent positions, except this time applying them on either the queries, or the keys (but not both!).§.§.§ Periodic Domains Under additions, the integers form an infinite cyclic group. An interesting twist would be to consider the positional encodings of finite cyclic groups instead. Such structures are not uncommon in chemistry; a benzene molecule, for instance, comprises six carbon atoms arranged in a ring. The semantics of the interpretation for such a structure would need to be of a matching period; in the benzene example, we would need a generator 𝐖 such that 𝐖^6 = 𝐈. Parameterizing such an operator is not unthinkable; we would simply need to fix the orthogonal matrix so as to have it implement rotations at angles which are multiples of π / 3 (see Appendix <ref> for a relevant discussion).§.§.§ Composite Groups The direct sum interpretation of Section <ref> is applicable for arbitrary groups that can be described as products, commutative or otherwise. This allows the representation of positional encodings for several other kinds of composite structures that can be concocted using the same principles, such as sequences of trees, trees of grids, etc.§.§.§ Beyond Dot-Product AttentionThroughout the previous sections, we have adopted a dot-product formulation for the attention weight function score(·,·). Nonetheless, orthogonal positional encodings can be readily integrated into any other attention mechanism, such as linear <cit.>, cluster <cit.> and “softmax-free” <cit.> variants, inter alia.§ EXPERIMENTSTo assess the viability of our approach, we conduct a selection of controlled experiments in setups that allow for easy replication and comparison against alternative schemes. §.§ TasksOn the sequential front, we consider three synthetic tasks: sequence copying, sequence reversal and sequence repetition. We experiment with four positional encoders: the additive sinusoidal encodings of the vanilla transformer, the trainable relative encodings of <cit.>, RoPE <cit.> and ours.When it comes to trees, we consider four synthetic tasks on binary branching trees: tree copying, tree rotation, algebraic expression reduction and self-referential tree manipulation. The tree copy task is morally identical to the sequence copy task – the tree structure (and its positional specification) is practically a confound. In the tree rotation task, the output tree is a mirror image of the input, where all left children have become right children and vice versa. The task is purely structural, in the sense that its resolution requires no deep interaction between content and position. For the algebraic expression reduction, we consider input trees that specify a complex expression from the cyclic group C3, and task the model with producing the result of a single reduction step (i.e. reducing all subtrees of depth 1 into a leaf). This time around, the model has to identify reducible subtrees, match argument nodes to their siblings, and reduce them depending on their content. The tree operations task, finally, combines the aspects of the other three, requiring content-based addressing, structure manipulation and dynamic semantics. Concretely, we generate an input tree consisting of unique nodes, and randomly select one of its subtrees as well as one of four operators. We then construct a deeper tree, with the new root corresponding to the chosen operator, its left child indexing the chosen subtree, and its right child being the original random tree. The model is then tasked with producing the correct output given a combination of an operator, a tree, and an index. The operators we consider are extraction (i.e. return the indexed subtree), flip-extraction (i.e. return the indexed subtree, rotated), truncation (i.e. return the full tree with the indexed subtree removed) and a no-op (i.e. return the full tree as-is, ignoring indexing). We compare our model with four alternatives: the “tree” encodings of <cit.>, the flat version of itself, as well as flat sinusoidal and RoPE encodings.
For the flat baselines, we compute the positional encodings over the same pad-free projection of the input/output trees that serves as the regression order.Finally, as a more practical benchmark, we train and evaluate a Compact Convolutional Transformer <cit.> on the CIFAR-10 <cit.> dataset, comparing our approach against the commonly used additive encoding schemes, either fixed (Sinusoidal) or parametric (Learned), applied on the row-by-row flattened image following established practice.§.§ SetupWe report hyperparameter configurations in Appendix <ref>. We conduct no hyperparameter optimization of any kind. We repeat all experiments three times, varying the seeds used for weight initialization and optimization, but fixing the data across repetitions. When using orthogonal encodings, we apply two practical tricks we have found to be beneficial for training stability and faster convergence. First, we inject a locality bias by scaling the dot-product score between two tokens located at a distance p (i.e., p steps away) by a factor of c^|p|. This is essentially the same as scaling the mediating orthogonal operator, except it allows us to maintain parallelism. <cit.> follow a similar approach; here, we set c := 0.98.Second, we initialize our parameterized orthogonal matrices close to identity (i.e., we have them implement many low-angle rotations, and progressively fewer rotations at progressively larger angles). Orthogonal matrices are procured by the matrix exponentiation of skew-symmetric bases. For the sake of parameter compression, we share the orthogonal matrices between the different encoder/decoder layers, but use a distinct matrix or collection of matrices per head. In all autoregressive experiments, we reuse input/output embeddings, and evaluate using teacher forcing. §.§ ResultsWe report test-set results in Tables <ref>, <ref> and <ref>, underlining all scores that fall within one standard deviation of the best in each respective category. Our approach consistently achieves the best or near-best scores across all the tasks and domains considered, even when pitted against schemes that are parameterized by the shape and size of the input/output structures (i.e., the Relative, Tree, and Learned schemes). [It is worth noting that <cit.> report higher top accuracies with both the Sinusoidal and the Learned variants of the Convolutional Transformer on CIFAR-10, but they use an over-engineered training pipeline.] Unsurprisingly, performance drops drastically when using sequential orthogonal encodings in the image case, as this effectively provides the wrong structural biases. This is less so the case in the flattened tree-to-tree tasks, since the structural collapse is mirrored on both the input and the output ends of the pipeline.§ RELATED WORKThe original Transformer <cit.> is made sequence-conscious by having the raw token embeddings augmented with either trainable positional embeddings <cit.> or a sinusoidal periodic function.
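For concreteness, the fixed sinusoidal scheme admits a short, self-contained implementation; the sketch below is our own rendering of the standard formulation (assuming an even model dimension), not code from any of the cited works:

```python
import numpy as np

def sinusoidal_pe(num_positions: int, d_model: int) -> np.ndarray:
    # Standard fixed sinusoidal table: even channels get sin, odd get cos,
    # with geometrically spaced wavelengths (d_model assumed even).
    positions = np.arange(num_positions)[:, None]          # (P, 1)
    channels = np.arange(0, d_model, 2)[None, :]           # (1, d/2)
    angles = positions / np.power(10000.0, channels / d_model)
    table = np.zeros((num_positions, d_model))
    table[:, 0::2] = np.sin(angles)
    table[:, 1::2] = np.cos(angles)
    return table

# Usage: X + sinusoidal_pe(X.shape[0], X.shape[1]) for token embeddings X.
```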
Following this, positional encodings have garnered significant community attention – too much, in fact, to permit an exhaustive enumeration here. <cit.> provide an extensive survey, where they group modeling approaches according to the following criteria: * injection method – whether the modeling alters the vanilla dot-product attention function 𝐀(·,·), or instead adjusts the content representations of 𝐗 and 𝐘 externally* recurrence – whether the modeling is applied once throughout the model, or once per layer* reference point – whether the modeling captures position information in a pairwise-relative fashion, or individually, on the basis of a fixed reference point* learnability – whether the modeling introduces additional trainable parameters and, if so, how many* unboundedness – whether the modeling theoretically permits the representation of arbitrary positions, irrespective of model instantiation or the targeted data size.While points (<ref>) and (<ref>) are technical bureaucracies, points (<ref>), (<ref>) and (<ref>) are of prime importance. Whether we are interested in absolute or relative positions (or both) is largely problem-dependent; if the data being modeled exhibit translation equivariance, relative positional encodings should make for a natural choice. Ideally, the modeling approach should allow some leeway on how positions are to be interpreted, allowing the user a say on the matter. Learnability also comes with its own trade-off. On the one hand, having a system tunable to the import of positions is crucial in generically handling multiple diverse tasks, where positions may have different effects at different granularity scales and directions <cit.>. On the other hand, learnability comes at the cost of added parameters, which need to be stored and optimized alongside the rest of the model.Unboundedness, finally, requires the model to be able to perceive and represent arbitrary positions during inference, even if unseen during training – a crucial desideratum for problems in size-generic domains. At first glance, learnability and unboundedness seem to be at odds: how can one “learn” representations for positions prior to having even seen them? To bypass the problem, several approaches resort to practical tricks, such as clipping relative distances to a certain range of values <cit.> or zeroing out the corresponding attention matrix coefficients <cit.>. In effect, doing so amounts to softening size constraints by either “blinding” the model to the differences between sufficiently distant positions, or confining attention within a localized sliding window. A more principled alternative is to instead learn functions that generalize smoothly to arbitrary sizes, as explored by <cit.>, <cit.>.In addition to the factors above, there are also practical considerations worth taking into account: * performance – does the modeling offer any tangible benefits in the concrete performance of the resulting end-model?* computational cost – what is the added memory footprint and temporal cost of the modeling, if any?* content-dependence – does positional information have a static effect on the output of 𝐀(·,·) regardless of what 𝐗 and 𝐘 are, or does it vary with its input?* extensibility – is the modeling naturally extensible to data structures other than the ones considered/experimented on?Points (<ref>) and (<ref>) have been the focal points of most proposals to date.
Ironically, and as <cit.> note, drawing an extensive quantitative comparison between existing approaches is practically impossible, and not just due to their sheer number. Positional encoding schemes are often presented intermixed with other ad hoc modifications to the Transformer architecture, with no established evaluation suites or hyperparameter setups to standardize their systematic evaluation. Instead, the tasks commonly employed usually involve heavy-duty self-supervised models, the training of which is costly to replicate, and the perceived value of which is reduced if no significant performance improvements are reported. This often leads to intensive hyperparameter tuning, while simultaneously discouraging the “waste” of precious compute budget on statistical averaging. In turn, this reduces the epistemic value of any conclusions drawn. Mindful of this issue, we have consciously tried to conduct our experiments on limited-scale models and datasets in order to facilitate replication and evaluation.With respect to content-dependence, the majority of positional encoding approaches so far adopt an additive formulation. That is, positional encodings are computed independently of the content of 𝐐 and 𝐊, and are added on top of them prior to the computation of the attention matrix 𝐀. The corresponding expansion of 𝐀 boils down to an addition of pairwise multiplications that do not involve any simultaneous combinations of 𝐐, 𝐊 and their positions. The (presumably unwanted) side effect is an inability to model complex interactions that require higher-degree polynomials to resolve. [Conversely, <cit.> claim that the interactions captured by additive encoding schemes are in fact too strong, and opt for maximally untying content and structure instead.]Apropos of the above criteria, we can frame our proposal as one that supports learnable, unbounded, performant, computationally tractable, content-dependent and extensible representations, for positions upon many variations of infinite but enumerable inductive structures. Related Approaches A versed reader will note semblances between our proposal and the works of <cit.> and <cit.>; hints of the concept of positional encodings as sequence homomorphisms can already be found in the former, even if not explicitly formulated as such. Despite being worded differently, all three approaches account for positions by interpreting them as multiplicative, norm-preserving (rotation-like) operations. Our system expands upon both works in applying arbitrary orthogonal operations, i.e. rotations and reflections about arbitrary planes, as opposed to rotations about planes aligned with the axes. In the case of a single generator matrix (i.e., sequences), this difference turns out to be non-essential. Since attention mechanisms start by applying trainable linear operators f_q and f_k, an implicit change of basis is already in effect. Whether the rotation planes can be trained or not is therefore irrelevant – see Appendix <ref> for an analysis. This no longer holds, however, in the case of multiple generator matrices (i.e., trees), where each generator should be able to rotate and reflect different sets of planes. Moreover, in using trainable orthogonal operators, our methodology allows the seamless training of not just the rotation planes, but also the rotation angles.
Even though this might not be universally desirable (see the discussion of Section <ref>), it is definitely a property that we want control over.In parallel to this work, <cit.> similarly advocate for positional encodings as group homomorphisms, there framed as irreducible group representations. Modulo presentation, the two approaches are variations on a common theme; theirs is technically concerned with the post-hoc representation of symmetries and equivariances, whereas ours focuses on the interpretation of abstract structures. § CONCLUSIONWe have presented a theoretically motivated approach towards constructing positional encodings for a variety of structures. Without any significant modification or overhead, our methodology can capture sequences and their (multi-dimensional as well as multi-branching) generalizations. In doing so, it reconciles powerful but structurally oblivious models with their missing inductive biases. Beyond that, it grants full control over how these biases are to be implemented (i.e. whether they are trainable or parametric, absolute or relative, shared or distinct, etc.), while also being amenable to adjustments and extensions (e.g. to periodic or composite structures). Our work indicates that generality and extensibility are achieved not in spite of, but rather because of, structural discipline and abstraction. Initial results show promise; we hope that practitioners and researchers will explore and build upon our findings.§ LIMITATIONSThere are three axes upon which our work exhibits weaknesses and limitations: theoretical, empirical and epistemic.On the theoretical front, we have only explored regular structures that can be described by simple inductive grammars. This presents a rather simplified view of the machine learning world, where one frequently encounters vastly more complex structures, such as arbitrary graphs, as well as inductions and structural specifications that fall beyond the scope of abstract groups. Exploring the homomorphic perspective for such structures is an open problem, and one we have consciously chosen to avoid here in favor of establishing concrete foundations for more precise and tractable settings. Nonetheless, we believe that the same principles can be of merit even there, even if not under the exact same interpretation guidelines. We leave this to future work.On the empirical front, we must remark that our approach increases a model's parameter count (even if negligibly), as well as its temporal compute complexity, especially during training. This is barely noticeable in the sequential and grid constructions, which scale logarithmically, but is particularly felt in the case of trees, which scale linearly and require explicit for-loops and costly indexing operations.On the epistemic front, one can argue that our experiments are too narrow to draw indefeasible or definitive conclusions. Our view is that dealing with the complexity of benchmarking detracts from the message we want to convey here, as the necessary hyperparameter optimizations, highly engineered training routines and data biases can all act as confounding factors. That said, we leave task-based adaptations of our work as an open question that warrants further exploration. On a related note, using teacher forcing during testing allows faster and easier evaluation across structures and models, but paints an overly optimistic picture of “real-life” autoregressive inference.
Even if quality assessment is unrealistic, it is not unfair – all model variations we consider merit from it just the same.Finally, our work carries no ethical risks that we can perceive. Moreover, we believe it is an important step towards equipping large language models with the necessary inductive biases, making them transparent and interpretable, and therefore unraveling their “black box” nature.§ MATHEMATICAL DECOMPOSITION OF ORTHOGONAL OPERATORS The rotations applied by RoPE <cit.> can be represented as either a real-valued block-diagonal matrix of size d, or a complex-valued diagonal matrix of size d/2. Here, we adopt the latter representation for the ease of analysis, but the geometric interpretation is the same in the real-valued case. The rotation matrix 𝐃 can be specified as 𝐃 = diag(e^jΘ), where Θ = (θ_1, …θ_d/2) is the set of rotation angles utilized. The matrix is raised to the power p (or, equivalently, the angles are multiplied by p) in order to mediate the dot-product between a query/key pair at a relative distance of p.By the spectral theorem, the eigendecomposition of a unitary matrix 𝐖 has the form 𝐔Σ𝐔^†, where 𝐔 is itself unitary.The matrix Σ contains a collection of eigenvalues of the form λ_i = e^jθ_i, where each θ_i encodes an angle of rotation along the planes specified by 𝐔. Furthermore, raising 𝐖 to its pth power is equal to 𝐔Σ^p𝐔^† (i.e., it leaves the left and right unitaries 𝐔 and 𝐔^† unaffected). Assuming f_q and f_k are linear functions, they can be composed with 𝐔 and 𝐔^† respectively. The above implies that our model and RoPE coincide, but only for some collection of eigenvalues Σ = 𝐃 (and corresponding angles), where we may substitute f_q,RoPE for the composition f_q ∘𝐔, and f_k,RoPE for f_k ∘𝐔^†. In other words, our sequential model corresponds to a version of RoPE where the rotation matrix 𝐃 may vary and be optimized during training.In this light, our scheme offers the appealing unifying perspective of orthogonal operators implementing rotations that can either be fixed to account for the structure at hand, or left to vary and be learned from the data. § HYPERPARAMETERSIn all sequence and tree tasks, we use a simple Transformer with the hyperparameters of Table <ref>. For the sequence tasks, we sample words of random lengths from the discretized normal 𝒩(100, 10) and a vocabulary size of 20 (to ensure token repetition and diffuse the possibility for leaning on content-based addressing). For the tree Copy, Reorder and C3 tasks, we sample random trees of maximum depths from the discretized normal 𝒩(7, 1). For the former two, we use an evenly distributed vocabulary of size of 20 (10 operators, 10 leaves). For the tree operations task, we use a larger vocabulary size of 128, as mandated by the need for content-based addressing of unique nodes. We train with AdamW <cit.> using a 5% linear warmup – 95% cosine decay schedule, with an initial learning of 10^-7, peaking at 5· 10^-4 and valleying at 10^-9.For the Compact Convolutional Transformer, we largely rely on the setup of <cit.>. Concretely, we apply a small-step “tokenizing” convolution on the input image, apply max pooling to downsample the result and treat the resulting image as a flat sequence.After passing it through the encoder, we apply a global soft attention <cit.> (rediscovered by <cit.>, now dubbed “sequence pooling”) to aggregate into a single vector prior to applying the classifier. 
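In schematic form, this aggregation step can be rendered in a few lines; the sketch below is our own simplification (single example, single head; in the actual model the scoring map is a trainable linear layer and everything is batched):

```python
import numpy as np

def sequence_pool(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    # X: (seq_len, d) encoder outputs; w: (d,) trainable scoring vector.
    # Softmax over positions, then a weighted sum of the token vectors.
    scores = X @ w
    scores -= scores.max()                       # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()
    return attn @ X                              # single (d,) pooled vector
```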
To attain competitive scores, we apply standard CIFAR-10 data augmentations and more aggressive regularization: a 10% attention weight dropout, a stochastic depth of 10% for each consecutive layer, and a weight decay of 3· 10^-2. All the above settings are taken without modification from <cit.>.When using a scheme that requires fixing the size of the structure being modeled (i.e. the Relative, Tree and Learned schemes), we fix it at exactly the distribution's mean to ensure a fair comparison.
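As a closing numerical companion to the argument of Appendix <ref> (our own check, not part of the reported experiments), one can verify that an arbitrary orthogonal generator reduces to a RoPE-style diagonal rotation in a suitable unitary basis:

```python
import numpy as np
from scipy.linalg import expm, schur

rng = np.random.default_rng(0)
d, p = 8, 5
S = rng.normal(size=(d, d))
S = S - S.T                          # skew-symmetric basis
W = expm(S)                          # orthogonal generator, as in our setup

# Complex Schur form of a normal matrix: W = Z diag(lam) Z^H with Z unitary
T, Z = schur(W, output="complex")
lam = np.diag(T)
assert np.allclose(W, Z @ np.diag(lam) @ Z.conj().T)

# The score q^T W^p k equals a diagonal (RoPE-style) score in the rotated
# basis, i.e., the basis change can be absorbed into the maps f_q and f_k.
q, k = rng.normal(size=d), rng.normal(size=d)
lhs = q @ np.linalg.matrix_power(W, p) @ k
rhs = np.conj(Z.conj().T @ q) @ (lam**p * (Z.conj().T @ k))
assert np.allclose(lhs, rhs.real)
```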
http://arxiv.org/abs/2312.16045v1
{ "authors": [ "Konstantinos Kogkalidis", "Jean-Philippe Bernardy", "Vikas Garg" ], "categories": [ "cs.LG", "cs.AI" ], "primary_category": "cs.LG", "published": "20231226131725", "title": "Algebraic Positional Encodings" }
[email protected] Centro de Estudios Científicos (CECS), Casilla 1469, Valdivia, Chile Departamento de Física, Universidad de Concepción, Casilla 160-C, Concepción, [email protected] Dipartimento di Fisica “E. Pancini", Università di Napoli Federico II - INFN sezione di Napoli, Complesso Universitario di Monte S. Angelo Edificio 6, via Cintia, 80126 [email protected] Instituto de Ciencias Exactas y Naturales,Universidad Arturo Prat, Playa Brava 3256, 1111346, Iquique, Chile Facultad de Ciencias, Universidad Arturo Prat, Avenida Arturo Prat Chacón 2120, 1110939, Iquique, Chile We construct topological soliton solutions describing baryonic tubes and layers with modulation in the SU(2) non-linear sigma model coupled with ω-mesons in 3+1 dimensions. Using appropriate Asäntze for the pionic matter field and the ω-mesons vector potential, the complete set of seven coupled partial differential equations can be solved analytically. These solutions represent modulated tubes and layers at finite volume with arbitrary baryon number, where the modulation of the solitons in one direction is determined by one of the three degrees of freedom of the pionic field, satisfying the equation of a two-dimensional free massless chiral scalar field. As expected, the inclusion of the ω-mesons to the Non-linear sigma model allows to reduce the repulsion energy between baryons, which leads to a flattening of the tubes and layers in one direction, forming a kind of “nuclear linguine phase”. Also, we show that this construction can be carried out even when higher order terms in the large N_c expansion are included -in particular the Skyrme term- without spoiling the integrability of the field equations. Exact modulated hadronic tubes and layers at finite volume in a cloud of π and ω mesons Aldo Vera========================================================================================= § INTRODUCTION Quantum Chromodynamics (QCD) constitutes one of the pillars of the standard model of particle physics. Featuring three colors and several flavors, QCD dynamics become strongly coupled at low energies while weak at high energies. It is in the strong sector where the non-perturbative nature of QCD remains inaccessible to standard analytical techniques, being the numerical methods the most used <cit.>, <cit.>. In this context, the development of innovative analytical techniques is of crucial importance, allowing a controlled study of the strong dynamics. One of the most important models to address the strong sector of QCD is known as the non-linear sigma model (NLSM), which manifests spontaneous chiral symmetry breaking and provides an accurate description of pions at low energies <cit.>. Additionally, the NLSM admits the existence of topological soliton solutions, which are interpreted as baryons <cit.>, <cit.>, <cit.>, <cit.>, with the topological charge equal to the baryonic number. In this context, baryons emerge from the non-linear interactions between mesons. However, these configurations are not energetically stable due to Derrick's theorem <cit.>. This problem can be solved including higher derivative corrections in the Lagrangian that come from the large N_c expansion, being the Skyrme term the simplest one <cit.> (see also <cit.>, <cit.>). Nevertheless, it is important to highlight that stable topological solitons describing baryons can be constructed even without including the Skyrme term in the NLSM. In fact, this can be carried out by circumventing Derrick's theorem in different ways. 
For example, by working in a finite space or coupling the theory to spin-1 matter fields.[Here we consider two ways to avoid Derrick's theorem. First, we consider a system without spherical symmetry. In particular, we construct solitons confined to a finite volume in regular patterns. Second, the matter field depends on a light-like degree of freedom, which constitutes one of the main ingredients in the construction of analytical solutions.]On the other hand, although the NLSM and Skyrme models provide a good description of hadrons at low energies, both in their static properties and as interacting states <cit.>, <cit.>, <cit.>, <cit.>, some predictions differ significantly from the experimental data. One of these discrepancies is the nuclear binding energy. The predictions from the Skyrme model point to a repulsion energy between baryons greater than what has been measured in experiments.Now, it is possible to stabilize the solitons and, at the same time, reduce the expected value of the binding energy coming from the NLSM by introducing vector mesons into the theory. In fact, in Refs. <cit.>, <cit.>, <cit.> (see also <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and references therein) it has been shown that the inclusion of ω-mesons accomplishes this task. In this paper, we construct analytical solutions in the NLSM coupled to ω-mesons with non-trivial topological charge. These solutions represent ordered arrays of baryonic tubes and layers at finite volume. Then, we generalize our results to the case where the Skyrme term is included, and even when higher order corrections in the 't Hooft expansion are considered <cit.>, that is, the generalized Skyrme model <cit.>, <cit.>, <cit.>, <cit.> (see also <cit.>, <cit.>, <cit.>).It is well known that, when baryonic matter is under extreme conditions, ordered arrays are expected to appear as a result of the non-linear interactions between the constituents. This has been explored using numerical methods in the Skyrme model in Refs. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Recently, in Refs. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> (see also <cit.>, <cit.>, <cit.>, <cit.>), the first analytical solutions describing crystals of topological solitons at finite volume in the NLSM and Skyrme models were constructed (see <cit.> for a review). One of the main achievements of this construction is that the configurations obtained are very similar to the nuclear pasta phases <cit.> (see also <cit.>, <cit.>, <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, <cit.>, <cit.>,<cit.>, <cit.> and references therein), in particular, nuclear lasagna (in the form of layers) and nuclear spaghetti (in the form of tubes). This is of particular interest as nuclear pasta states are expected to emerge when baryonic matter is subjected to extreme conditions, for example, in supernovae cores and in the crust of neutron stars, where densities exceed the normal nuclear density <cit.>, <cit.>.[In addition to the Skyrme model, in the study of compact stars, the Walecka model is a very useful theory <cit.>, <cit.>, <cit.>, which also describes nucleons and mesons. A discussion about the relation between these models can be found in Ref.
<cit.>.]Although the study of the formation of nuclear pasta phases has been approached using simulations such as molecular dynamics <cit.> and numerical methods (see, for instance, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), until now there has been no analytical approach to this problem, which would open an important window in the understanding of baryonic matter at extreme conditions. In this work, we aim in that direction. We generalize the solutions constructed in Ref. <cit.>, showing that baryonic tubes and layers made of π and ω mesons can be promoted to solutions of the generalized Skyrme model. The inclusion of an arbitrary light-like degree of freedom in the matter field allows for modulation of the solitons in one direction, and the time evolution of these configurations can be shown explicitly.The paper is organized as follows: In Section II, we introduce the NLSM coupled to ω-mesons. In Section III, we construct analytical solutions describing baryonic tubes and layers at finite volume, and discuss their main physical properties. In Section IV, we show how the inclusion of the ω-mesons allows for a reduction in the binding energy between baryons. Also, we discuss the differences and similarities between our solutions and the crystals of gauged skyrmions.In Section V, we show that the inclusion of higher-order corrections to the theory does not spoil the integrability of the equations. Section VI is devoted to the conclusions. § THE MODELThe NLSM coupled to vector ω-mesons is described by the actionI[U, ω]=∫ d^4 x √(-g)(K/4Tr[L^μ L_μ] -1/4 S_μν S^μν-1/2 M_ω^2ω_μω^μ-γρ_μω^μ), L_μ=U^-1∇_μ U= L_μ^j t_j ,S_μν= ∇_μω_ν - ∇_νω_μ ,t_j=i σ_j ,where U(x) ∈ SU(2) is the pionic field, σ_i are the Pauli matrices, ω_μ is the 4th-vector potential describing the ω-mesons, ∇_μ is the covariant derivative, K and γ are positive coupling constants fixed experimentally, and M_ω corresponds to the ω-mesons mass. In our convention c=ħ=1,Greek indices run over the four dimensional space-time with a mostly plus signature, and Latin indices are reserved for those of the internal space. The field equations of the system are a set of seven coupled partial differential equations given as follows: First, the three equations obtained varying the actions with respect to the pionic matter field U are ∇_μL^μ-6 γ/K∇_ν(ϵ^μνλρω_μL_λL_ρ)=0.Second, the field equations that come from the coupling with the ω-mesons, obtained through the variation with respect to the field ω_μ, are∇_μ S^μν-M_ω^2ω^ν=γρ^ν .The ω-mesons interact with the π-mesons through the topological current, ρ^μ, present in Eq. (<ref>), which is defined asρ^μ=ϵ^μνλρTr[(U^-1∂_ν U)(U^-1∂_λ U)(U^-1∂_ρ U) .].where ϵ^μνλρ is the Levi-Civita tensor. The integral over a space-like hypersurface Σ of the ρ^0 component of the topological current leads to the topological chargeB=1/24 π^2∫_Σρ^0 , which determines the baryonic number of a given matter configuration.On the other hand, the energy-momentum tensor of the theory is given byT_μν=-K/2 Tr[L_μ L_ν-1/2 g_μν L^α L_α]+S_μα S_ν^α-1/4 S_αβ S^αβ g_μν +M_ω^2(ω_μω_ν-1/2 g_μνω^αω_α)+γ(ρ_μω_ν+ρ_νω_μ-g_μνρ_αω^α) . § THE SOLUTIONSIn this section, we construct two types of analytical solutions of the NLSM coupled to ω-mesons using two different Ansätze for the pionic field U(x)∈ SU(2). 
The first one, which describes baryonic layers (the so-called “lasagna phase"),is built via the Euler angles representation, while the second one, which describes baryonic tubes (the so-called “spaghetti phase"), is constructed via the exponential representation. Both configurations possess a non-vanishing topological charge.[ As we will see below, the inclusion of the ω-mesonsinduces a change in the geometry of the nuclear pasta states; in particular, the baryonic tubes flatten in one direction. This is the reason for calling these novel solutions “nuclear linguine phase".] For the ω-mesons field, we will use a convenient choice that allows decoupling the degrees of freedom corresponding to each type of meson. This desirable characteristic is achieved by Ansätze that satisfy the following relation: ∇_ν(ϵ^μνλρω_μL_λL_ρ)=0,which is clear from Eq. (<ref>). We will see that, although this condition may at first seem restrictive, the solutions that emerge from it exhibit particular characteristics that come from the coupling of the π-mesons and ω-mesons.As we are interested in the construction of analytical solutions at finite volume, we will consider the metric of a boxds^2= - dt^2 + L_x^2 dx^2 + L_y^2 dy^2 + L_z^2 dz^2,where the adimensional spatial coordinates {x,y,z} have the ranges 0 ≤ x ≤ 2π , 0 ≤ y ≤π ,0 ≤ z ≤ 2π ,and the coefficients L_i fix the size of the box in which the solitons are confined.§.§ Modulated baryonic layers in a cloud of π and ω mesons For the construction of analytical solutions describing baryonic layers at finite volume in the NLSM coupled to ω-mesons, we will use an Ansatz inspired in the case of Yang-Mills theory and the Skyrme model introduced in Refs. <cit.>, <cit.>, <cit.>, <cit.> (see also <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). In those references, it has been shown that the parameterization in Euler angles is convenient to describe finite volume configurations homogeneous in two spatial dimensions. An element of SU(2) can be written in the Euler angles representation as follows U=e^F(x^μ) t_3e^H(x^μ) t_2e^G(x^μ)t_3 ,where F(x^μ), G(x^μ), H(x^μ) are the three degrees of freedom of the pionic field, and they are, in principle, arbitrary functions of the coordinates. A choice for the degrees of freedom that allows to reduce the NLSM equations to a decoupled system at finite volume is the following (see <cit.> for details)F(x^μ)= q y, H(x^μ)=H(x),G(x^μ)=G(t,z).In fact, the above Ansatz, in absence of the ω-mesons field, reduces the NLSM equations to the following solvable system∂_x^2 H=0, G= ∂_t^2 G - 1/L_z^2∂_z^2 G = 0.Here below we point out some important comments about these equations. First, being Eq. (<ref>) a simple ODE and Eq. (<ref>) the wave equation, the solutions of these equations are, respectively H(x)=(1+2n)/4x,where n is an integer number fixed by the boundary conditions, and G=G_-+G_+ ,whereG_+ =z_0^++v_+(t/L_z+z)+∑_n ≠ 0(a_n^+sin[n(t/L_z+z)]+b_n^+cos[n(t/L_z+z)]), G_- =z_0^-+v_-(t/L_z-z)+∑_n ≠ 0(a_n^-sin[n(t/L_z-z)]+b_n^-cos[n(t/L_z-z)]).Now, once the interaction with the ω-field is taken into account, in order to decouple the contribution of the ω-mesons from the pionic degrees of freedom, we have to impose(∂_t G)^2-1/L_z^2(∂_z G)^2=(∂_t G+1/L_z∂_z G)(∂_t G-1/L_z∂_z G)=0.The above condition along with the following Ansatz for the ω-mesons ω_μ=-u/p(∂_zG, 0,0,L_z^2∂_tG), u=u(x),(with p an integer number) guarantees that the full system of equations remains integrable. In fact, the potential in Eq. 
(<ref>) satisfies the constraint in Eq. (<ref>). It must be highlighted that Eq. (<ref>) is not inconsistent with the wave equation in Eq. (<ref>). Instead of that, Eq. (<ref>) is a particular case of Eq. (<ref>); it projects one of the modes G_- or G_+ to zero. Therefore, the main difference with respect to Ref. <cit.> is that the degree of freedom G now describes a two-dimensional free massless chiral scalar field.Note that this vector field is very similar to the Ansatz introduced in Refs. <cit.> and <cit.> for the Maxwell potential. However, there are relevant differences between these two cases. We will discuss this point in the next section. From the above, the four equations related to the ω-mesons are reduced to just one differential equation, namelyu”-L_x^2 M_ω^2 u =12 γ p q L_x /L_y L_zsin (2 H) ∂_x H .This equation can be easily solved due to the profile H is a linear function. In fact, the ω-mesons profile turns out to beu(x)= -12 γ L_x (2 n+1) p q /L_y L_z(4 L_x^4 M_ω^2+(2 n+1)^2)sin((n+1/2) x).where we have used periodic boundary conditions for the u function; u(0)=u(2π)=0. The boundary conditions for the fields H and G come from imposing the topological charge in Eq. (<ref>) will be an integer number. In fact, using the previous parametrization in Eqs. (<ref>) and (<ref>), the topological charge density ρ^0 becomes ρ^0=12 q /L_xL_yL_zsin (2 H) ∂_x H ∂_z G.It follows that, choosing the following boundary conditionsH(2π)=(1+2n)/2π ,H(0)=0,G(t, z=2π)-G(t, z= 0)=(2 π) p,the topological charge turns out to be B= p q. Therefore, the baryon number for these kinds of solutions can be an arbitrary integer number. Although the integer parameter n does not contribute to the topological charge, it determines the number of layers in the lattice along the x direction. Note that the boundary conditions in Eq. (<ref>) implies that the coefficients v_± in Eq. (<ref>) satisfy p=v_+-v_-, while the coefficients {a^±,b^±} do not contribute to the topological charge.The energy density, ε=T_00, for the solutions presented above is given byε= K/2(H'^2/L_x^2+q^2/L_y^2+Δ G )+L_z^2/2L_x^2p^2 u'^2Δ G+1/2p^2 M_ω^2 u^2 L_z^2Δ G+12L_zγq /L_xL_ypu H' sin (2 H)Δ G,where Δ G =∂_tG^2+1/L_z^2∂_zG^2, is the energy density of a free massless scalar field, and the prime denotes ∂_x. Fig. <ref> shows the energy density of these configurations. One can see that the solution describes an array of baryonic layers at finite volume; the number of layers is determined by the number n in the boundary conditions, while the number of baryons is fixed by p and q. The modulation of the tubes is controlled by the modes associated with the G function in Eq. (<ref>), as well as the evolution in time.§.§ Modulated baryonic tubes in a cloud of π and ω mesonsCrystals of baryonic tubes as solutions of the NLSM and Skyrme model can be constructed using the exponential representation, as have been shown in Refs. <cit.>, <cit.>, <cit.> and <cit.> (see also <cit.>, <cit.>, <cit.>).An element of SU(2) in the exponential representation is written asU^± 1(x^μ)=cos( α) 1_2±sin( α) n^it_i ,n^in_i=1 ,n^1=sinΘcosΦ , n^2=sinΘsinΦ ,n^3=cosΘ ,where α, Θ and Φ are the three degrees of freedom of SU(2). Following Refs. <cit.> and <cit.>, we will choose these functions asα =α (x) ,Θ = Q y ,Φ =G(t,z).As in the case of the baryonic layers presented above, one can check that there is an Ansatz that satisfies the constraint in Eq. 
(<ref>) and, therefore, allows to decouple the NLSM equations from the ω-mesons equations; that is, ω_μ=-v/p(∂_zG, 0,0,L_z^2∂_tG), v=v(x,y),with p a constant. This potential is very similar to Eq. (<ref>), but in this case the function v must depends on two spatial coordinates instead of just one.Replacing Eqs. (<ref>), (<ref>) and (<ref>) into the NLSM equations, we obtain the following decoupled systemα” -Q^2 L_x^2/L_y^2sin(α) cos(α) = 0, G = ∂_t^2 G - 1/L_z^2∂_z^2 G = 0, (∂_t G)^2-1/L_z^2 (∂_z G) ^2 = (∂_t G + 1/L_z∂_z G)(∂_t G - 1/L_z∂_z G) = 0.Again, Eqs. (<ref>) and (<ref>) are solved by one of the modes expansion in Eq. (<ref>), defining a free massless chiral field theory in 1+1 dimensions for the G field. The simplest solution of this system is a linear function G=t/L_z-z, which has been explored in Ref. <cit.>; these are tubes without modulation. On the other hand, Eq. (<ref>) can be solved analytically in terms of Elliptic functions.Even more, the explicit solution of this equation is not necessary since all the relevant quantities that characterize the solution (such as the energy density and the topological charge density) only depend on α and its derivatives, not on the x coordinate explicitly. Indeed, Eq. (<ref>) can be reduced to the following quadrature: α'^2+ L_x^2 Q^2 /2 L_y^2cos (2 α )=E_0,where one can read α' in terms of α (here E_0 is an integration constant fixed by the boundary conditions). On the other hand, the ω-mesons equations are reduced to the following partial differential equation(-Δ + M_ω^2) v = f(x,y), f(x,y)=12 γ Qp/L_x L_y L_zsin^2(α)sin(Qy) α', Δ = 1/L_x^2∂_x^2 +1/L_y^2∂_y^2. Although this equation is not as simple as that of the nuclear lasagna phase, Eq. (<ref>) is a Poisson equation, and its general solution can be written asv(r⃗)=∫ d r⃗^'G(r⃗, r⃗^') f(r⃗^'), (-∇_r⃗^2+M_ω^2) G(r⃗, r⃗^')=δ(r⃗-r⃗^'),where r⃗=(x,y) and G(r⃗, r⃗^') is the corresponding Green function.The topological charge density of this configuration is given by ρ^0=-12 Q/L_x L_y L_zsin (Q y) sin ^2(α) α^'∂_z G .In order to have an integer value for the baryon number, we must impose the following boundary conditions:α(2 π)-α(0)=n π,G(t, z=0)-G(t, z=2 π)=(2 π) p,so that, the baryon number for these configurations turns out to be B=np.Note that the parameter Q in Eq. (<ref>) must be an odd number to ensure that the baryon number does not vanishes.The respective energy density reads ε= K/2(sin ^2(α ) (Δ G L_y^2 sin ^2(Q y)+Q^2)/L_y^2+2 α '^2/L_x^2) +Δ G L_z^2 (L_y^2( ∂_x v)^2+L_x^2 (∂_y v)^2)/2 L_y^2 L_x^2 p^2 +Δ G L_z^2 M_ω^2 v^2/2 p^2-12 L_z γ Q sin (Q y) α ' sin ^2(α ) vΔ G/ L_x L_yp ,where Δ G has been defined below Eq. (<ref>).Fig. <ref> shows the energy density of these configurations. One can see that the system describes a lattice of baryonic tubes, where the numbers n and p define the baryon number, that is, the number of tubes in the x direction. On the other hand, the parameter Q repeats the pattern in the y direction. An interesting issue arises from the coupling with the ω-mesons. It is expected that nuclear spaghetti-like solutions are tubes extended in the z direction (with or without modulation) whose cross sections are concentric circles <cit.>, <cit.>. In fact, this is what is expected from nuclear spaghetti phases, as can be seen from the simulations obtained in Refs. <cit.>, <cit.>, <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, <cit.>, <cit.>,<cit.>, <cit.>. 
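As a numerical sanity check (ours, not part of the original derivation), the profile equation above and its quadrature are straightforward to integrate; the sketch below shoots for α'(0) so that the boundary condition α(2π) - α(0) = nπ is met, using illustrative parameter values of our own choosing:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Lx, Ly, Q, n = 1.0, 1.0, 1.0, 1        # illustrative values (our choice)
c = (Q * Lx / Ly) ** 2                 # coefficient of sin(alpha)*cos(alpha)

def rhs(x, y):
    alpha, dalpha = y
    return [dalpha, c * np.sin(alpha) * np.cos(alpha)]

def alpha_end(v0):
    sol = solve_ivp(rhs, (0.0, 2 * np.pi), [0.0, v0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Shoot for alpha'(0) such that alpha(2*pi) - alpha(0) = n*pi
v0 = brentq(lambda v: alpha_end(v) - n * np.pi, 1e-3, 5.0)
E0 = v0**2 + c / 2                     # integration constant of the quadrature
```

With α(x) in hand, the topological charge density and energy density follow directly from the closed-form expressions above.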
Here, however, as the configurations extend in the z direction, the cross sections no longer remain spherical but take on oval shapes. This phenomenon appears precisely because the coupling of the ω-mesons reduces the repulsion energy between the baryons. Indeed, in Fig. <ref> we can see that, for a fixed baryon number (the number of tubes in the x direction) the tubes in the y direction move closer together due to the lower repulsion which generates the ω-mesons in addition to the π-mesons.§ COMPARING CRYSTALSAs was proposed in Refs. <cit.>, <cit.>, <cit.>, the inclusion of vector mesons in the NLSM allows for a reduction in the predicted binding energy for nucleons, which makes it more compatible with the experimental data. Even more, this can be clearly seen from the analytical solutions presented in the previous section, where in the case of the baryonic tubes (see Fig. <ref>), a flattening emerges due to the presence of the ω-mesons. Another way to see it is by introducing the quantity Δ(B), which measures the interaction energy between baryons <cit.>. This quantity is defined asΔ(B) = E_(B+1)-(E_(B)+E_(1))/(B+1) E_(1) ,where B, in our cases, is the baryon number for the baryonic tubes and layers and E_(i) is the total energy of the system containing (i) baryons. This quantity is an increasing function of B, due to the strong short-range repulsion between baryons. In fact, from Fig. <ref> one can see that the inclusion of the ω-mesons reduces the interaction energy since these curves associated with the solutions with ω-mesons are below the one that only contains π-mesons.Another interesting fact comes from the comparison between the baryonic tubes and layers. From Fig. <ref> (below), one can see that, when comparing the Δ(B) function for both configurations for a fixed baryon number, the layer pattern is the one that most reduces the repulsion energy between the baryons that constitute the system. Tube-like configurations are more repulsive, at least in this sector.At this point, it is important to highlight the difference between the baryonic crystals coupled to ω-mesons presented here versus the gauged crystals shown in Refs. <cit.>, <cit.>, constructed using similar methods. First, let us remember that gauged skyrmions come from the minimal coupling of the pionic field with photons through the covariant derivative, which is defined asD_μU=∇ _μU+A_μUÔ ,Ô=U^-1[ t_3,U].An appropriate Ansatz for the Maxwell potential allows to decouple the Skyrme equations from the Maxwell equations (just as in the case of the ω-mesons that we have shown here), allowing the construction of crystalline structures of baryonic tubes and layers (see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). However, the fact that the coupling with vector mesons comes from an interaction term in the action instead of the minimal coupling, implies the following relevant differences:* For crystals of gauged skyrmions, the Skyrme equations are affected by the electromagnetic coupling (even when the three equations reduce to a single equation for the skyrmion profile), while in the case of the crystals with ω-mesons the equation for the profile is exactly the same with and without ω-mesons. This can be seen in Refs. <cit.>, <cit.>, <cit.>, <cit.> for the nuclear lasagna phase.* In the case of gauged skyrmions, the Maxwell's equations reduce to a Schrödinger-like equation, while for the ω-mesons, the equations are reduced to a Poisson equation.This can be seen in Refs. 
<cit.>, <cit.>, <cit.>, <cit.>, <cit.> for the nuclear spaghetti phase.* While the coupling with the ω-mesons generates a flattering in the energy density of the baryonic tubes and layers, the coupling with the electromagnetic field changes the intensity of the energy inside the tubes or layers, but the geometry remains the same.§ HIGHER ORDER CORRECTIONSIn this section, we will show that the previous set of solutions can be constructed even when higher-order derivative terms are included in the action. For this purpose, we consider the generalized Skyrme model <cit.>, <cit.>, <cit.>, <cit.>, <cit.> (this is, the NLSM model plus the Skyrme term and higher order corrections) coupled to ω-mesons, described by the actionI_gen[U, ω]=∫ d^4 x √(-g)[K Tr(L^μ L_μ+λ/8 F_μν F^μν)-S_μν S^μν-1/2 M_ω^2ω_μω^μ-γρ_μω^μ+ℒ_corr ], where F_μν=[L_μ, L_ν].The term ℒ_corr represents the subleading corrections to the Skyrme model, which can be obtained via chiral perturbation theory (see <cit.> and references therein) or by the large N_c expansion of QCD <cit.>, <cit.>. To make the calculations clearer, we will consider only the first correction, which isℒ_corr=c_6/96Tr[F_μ^ν F_ν^ρ F_ρ^μ],(with c_6 a coupling constant) although our results are still valid even including the next higher order terms.The field equations obtained by varying the action in Eq. (<ref>) with respect to the U field, are ∇^μ L_μ+λ/4∇^μ[L^ν, F_μν]-6γ/K∇_ν(ϵ^μνλρω_μ L_λ L_ρ)+6 c_6/K[L_μ, ∂_ν[F^ρν, F_ρ^μ]] =0, while the ω-mesons equations are the same as in Eq. (<ref>).First, using the same Ansatz for the baryonic layers defined in Eqs. (<ref>) and (<ref>), one can check that the coupled system in Eq. (<ref>) reduces to the same relations in Eqs. (<ref>), (<ref>) and (<ref>). The contribution that comes from the Skyrme term is encoded in a global factor, namelyK (L_y^2 -λp^2) ∂_x^2 H=0.Note that the contribution in Eq. (<ref>) does not appear at all in the field equations. This is because the correction in Eq. (<ref>) is the topological charge density to the square, and it vanishes when (at least) one of the pionic degrees of freedom is light-like, as is indeed the case for our solutions. Something similar happens with the baryonic tubes. Indeed, for the Ansatz in Eqs. (<ref>) and (<ref>), the system in Eq. (<ref>)is reduced to the same relations in Eqs. (<ref>) and (<ref>) for the G function, together with a singleODE for theprofile α:α”(λQ^2 sin ^2(α )-L_y^2)+λQ^2sin (α ) cos (α )α '^2+L_x^2 Q^2 sin (α) cos (α )=0.This last equation can be solved in terms of generalized Elliptic Integrals, but again, it can lead to a first order ODE:∂_x(Y(α)(α^')^2+W(α)+E_0)=0,Y(α)=2 L_y^2-λQ^2+λQ^2 cos (2 α ),W(α)= L_x^2 Q^2 cos (2 α ),(where E_0 is an integration constant fixed by the boundary conditions), allowing all the relevant quantities to be written only in terms of α, and not explicitly on the coordinates. § CONCLUSIONSWe have shown that exact solutions describing crystals of baryonic tubes and layers can be constructed in a NLSM that includes π-mesons and ω-mesons. These configurations have an arbitrary topological charge (written as the product of two integer numbers) and can be modulated through a light-like degree of freedom present in both mesonic fields. 
The inclusion of the ω-mesons to the NLSM allows for a reduction in the binding energy between baryons, making the predictions of the model more compatible with the experimental data.Interestingly, with the solutions constructed here, this can be seen very clearly: the energy density plot of the tube-like solutions shows that the tubes flatten in one direction by reducing the repulsion between baryons and forming a sort of “linguine phase". Also, by looking at the same plot, it could be possible tobreak the tubes into small pieces and form a “gnocchi phase" by choosing a fine-tuning of the mode's coefficients. In this way, the inclusion of the ω-mesons could be important for an analytical description of the gnocchi phase. Finally, we have shown that these exact solutions can be constructed in the generalized Skyrme model, where higher-order derivative terms are included in the action. G.B. is funded by the National Agency for Research and Development ANID grant 21222098.M. T. is funded by Agencia Nacional de Investigación y Desarrollo (ANID) grant 72210390. A.V. has been partially funded by National Agency for Research and Development ANID SIA SA77210097. A.V thanks the hospitality of ICEN (UNAP) where part of this project was done. 999Shifman1 M. Shifman, “Advanced Topics in Quantum Field Theory: A Lecture Course" Cambridge University Press, (2012).Wilczek K. Rajagopal and F. Wilczek, in At the Frontier of Particle Physics/Handbook of QCD, edited by M. Shifman (World Scientific, Singapore, 2001); arXiv:hep-ph/0011333.MantonBook N. Manton, P. Sutcliffe, Topological Solitons Cambridge University Press, Cambridge, 2007.Lizzi A.P. Balachandran, A. Barducci, F. Lizzi, C.G.J. Rodgers, A. Stern, Phys. Rev. Lett. 52 (1984), 887.Shifman2 M. Shifman, A. Yung, “ Supersymmetric Solitons" Cambridge University Press, (2009).Witten0 E. Witten,Nucl. Phys. B 223, 422 (1983); Nucl. Phys. B 223, 433 (1983).ANW G. S. Adkins, C. R. Nappi, E. Witten, Nucl. Phys. B 228, 552 (1983).Derrick G. H. Derrick,J. Math. Phys. 5 (1964), 1252-1254 doi:10.1063/1.1704233 Skyrme T. Skyrme, Proc. R. Soc. London A 260, 127 (1961); Proc. R. Soc. London A 262, 237 (1961);Nucl. Phys. 31, 556 (1962).Zahed I. Zahed and G. E. Brown,Phys. Rept. 142, 1-102 (1986)Jackson1 A. Jackson, A. D. Jackson and V. Pasquier,Nucl. Phys. A 432, 567-609 (1985).Ordonez C. Ordonez, L. Ray and U. van Kolck,Phys. Rev. C 53, 2086-2105 (1996).Yan M. L. Yan, S. Li, B. Wu and B. Q. Ma,Phys. Rev. D 72, 034027 (2005).Adkins1 G. S. Adkins and C. R. Nappi,Phys. Lett. B 137, 251-256 (1984).Jackson2 A. Jackson, A. D. Jackson, A. S. Goldhaber, G. E. Brown and L. C. Castillejo,Phys. Lett. B 154, 101-106 (1985).Meissner U. G. Meißner and I. Zahed, Phys. Rev. Lett. 56, 1035 (1986).[8]p P. Jain, R. Johnson, N.W. Park, J. Schechter, H. Weigel, Phys. Rev. D 40 (1989) 855.[9]p B. Schwesinger, H. Weigel, G. Holzwarth, A. Hayashi, Phys. Rep. 173 (1989) 173.[10]p U.-G. Meißner, Phys. Rep. 161 (1988) 213.[11]p R. Johnson, N.W. Park, J. Schechter, V. Soni, H. Weigel, Phys. Rev. D 42 (1990) 2998.[12]p D. Masak, Phys. Rev. D 39 (1989). 305.tHooft G. 't Hooft,Nucl. Phys. B 72, 461 (1974).Witten1 E. Witten,Nucl. Phys. B 160, 57-115 (1979).Gudnason S. B. Gudnason and M. Nitta,JHEP 09, 028 (2017).Scherer S. Scherer,Adv. Nucl. Phys. 27, 277 (2003).Adam1 C. Adam, J. Sanchez-Guillen and A. Wereszczynski,Phys. Rev. D 82, 085015 (2010)Marleau1 L. Marleau,Phys. Rev. D 43, 885-890 (1991).Marleau2 L. Marleau,Phys. Rev. D 45, 1776-1781 (1992).Marleau3 L. Marleau,Phys. Lett. 
B 235, 141 (1990) [erratum: Phys. Lett. B 244, 580 (1990)].Klebanov I. R. Klebanov,Nucl. Phys. B 262, 133-143 (1985).Goldhaber A. S. Goldhaber and N. S. Manton,Phys. Lett. B 198, 231-234 (1987).Kugler1 M. Kugler and S. Shtrikman,Phys. Lett. B 208, 491-494 (1988).Braaten E. Braaten, S. Townsend and L. Carson,Phys. Lett. B 235, 147-152 (1990).Kugler2 M. Kugler and S. Shtrikman,Phys. Rev. D 40, 3421 (1989).Adam2 C. Adam, A. G. Martin-Caro, M. Huidobro, R. Vazquez and A. Wereszczynski,Phys. Rev. D 105, no.7, 074019 (2022).Adam3 C. Adam, A. Garcia Martin-Caro, M. Huidobro and A. Wereszczynski,Symmetry 15, no.4, 899 (2023)crystal1 F. Canfora, Eur. Phys. J. C 78, no. 11, 929 (2018).crystal2 F. Canfora, S.-H. Oh, A. Vera, Eur.Phys. J.C 79 (2019) no.6, 485.crystal3 F. Canfora, M. Lagos and A. Vera, Eur. Phys. J. C 80, no. 8, 697 (2020).crystal4 F. Canfora, S. Carignano, M. Lagos, M. Mannarelli and A. Vera, Phys. Rev. D 103 (2021) 7, 076003.crystal5 F. Canfora,JHEP 11, 007 (2023).Alvarez P. D. Alvarez, F. Canfora, N. Dimakis and A. Paliathanasis,Phys. Lett. B 773, (2017) 401-407.Hidalgo F. Canfora, D. Hidalgo, M. Lagos, E. Meneses and A. Vera,Phys. Rev. D 106, no.10, 105016 (2022).Barriga1 G. Barriga, F. Canfora, M. Torres, A. Vera,Phys. Rev. D 103 (2021) 9, 096023.SU(N)1 P. D. Alvarez, S. L. Cacciatori, F. Canfora and B. L. Cerchiai,Phys. Rev. D 101, no. 12, 125011 (2020).SU(N)2 S. L. Cacciatori, F. Canfora, M. Lagos, F. Muscolino and A. Vera,JHEP 12, 150 (2021); Nucl. Phys. B 976, 115693 (2022).Torres J. Bersini, A. D'Alise, M. Torres and F. Sannino,[arXiv:2310.04083 [hep-ph]].Rebolledo F. Canfora and S. C. Rebolledo-Caceres,Mod. Phys. Lett. A 38, no.12n13, 2330002 (2023).Dorso J. A. Lopez, C. O. Dorso, G. A. Frank, Front.Phys. (Beijing) 16 (2021) 2, 24301. pasta1 D.G. Ravenhall, C.J. Pethick, J.R.Wilson, Phys. Rev. Lett. 50, 2066 (1983).pasta2 M. Hashimoto, H. Seki, M. Yamada, Prog. Theor. Phys. 71, 320 (1984).pasta2a C. J. Horowitz, D. K. Berry, C.M. Briggs, M. E. Caplan, A. Cumming, A. S. Schneider, Phys. Rev. Lett. 114, 031102 (2015).pasta2b D. K. Berry, M. E. Caplan, C. J. Horowitz, G. Huber, A. S. Schneider, Phys. Rev. C 94, 055801 (2016).pasta3 C. O. Dorso, G. A. Frank, J. A. López, Nucl. Phys. A978, 35 (2018).pasta4 A. da Silva Schneider, M. E. Caplan, D. K. Berry, C. J. Horowitz, Phys. Rev. C 98, 055801 (2018).pasta5 M. E. Caplan, A. S. Schneider, and C. J. Horowitz, Phys. Rev. Lett. 121, 132701 (2018).pasta6 R. Nandi and S. Schramm, J. Astrophys. Astron. 39, 40 (2018).pasta7 Z. Lin, M. E. Caplan, C. J. Horowitz, C. Lunardini, Phys. Rev. C 102 (2020) 4, 045801.pasta8 C.O. Dorso, A. Strachan, G.A. Frank, Nucl. Phys. A 1002 (2020) 122004.pasta9 C.J. Pethick, Z. Zhang, D.N. Kobyakov, Phys. Rev. C 101 (2020) 5, 055802.Watanabe G. Watanabe and T. Maruyama,[arXiv:1109.3511 [nucl-th]].Liebling S. L. Liebling and C. Palenzuela,Living Rev. Rel. 26, no.1, 1 (2023).Walecka1 R̈. S. Costa, M. R. Cortes, D. R. Nunes, and A. S. A. Batista,AIP Conference Proceedings 1625, 212 (2014).Walecka2 J. D. Walecka,Annals Phys. 83, 491-529 (1974)Walecka3 Marco Schramm, Study of Inhomogeneous Phases in the Walecka Model Bachelor-Thesis Technische Universitat Darmstadt. Barriga2 G. Barriga, F. Canfora, M. Lagos, M. Torres and A. Vera,Nucl. Phys. B 983, 115913 (2022).aprox0 L. Brey, H. A. Fertig, R. Cote, A. H. MacDonald,Phys. Rev. Lett. 75, 2562 (1995).aprox1 I. Klebanov, Nucl. Phys. B 262 (1985) 133.aprox2 E. Wrist, G.E. Brown, A.D. Jackson, Nucl. Phys.A 468 (1987) 450.aprox3 N. Manton, Phys Lett. 
B 192 (1987) 177.aprox4 A. Goldhaber, N. Manton, Phys Lett. B 198 (1987), 231.aprox5 N. Manton, P. Sutcliffe, Phys. Lett. B 342 (1995) 196.aprox6 D. Harland, N. Manton, Nucl. Phys. B 935 (2018) 210.aprox7 W. K. Baskerville, Phys. Lett. B 380 (1996) 106.aprox8 M. Loewe, C. Villavicencio, Phys. Rev. B 71 (2005) 094001.aprox9 M. Loewe, S. Mendizabal, J.C. Rojas, Physics Letters B 632 (2006) 512–516.aprox10 J. A. Ponciano, N. N. Scoccola, Phys. Lett. B 659, 551 (2008).Aviles L. Aviles, F. Canfora, N. Dimakis, D. Hidalgo, Phys. Rev. D 96 (2017), 125005.Oh F. Canfora, M. Lagos, S. H. Oh, J. Oliva and A. Vera, Phys.Rev. D 98, no. 8, 085003 (2018).Ayon1 E. Ayon-Beato, F. Canfora and J. Zanelli,Phys. Lett. B 752, 201-205 (2016).Ayon2 E. Ayón-Beato, F. Canfora, M. Lagos, J. Oliva and A. Vera,Eur. Phys. J. C 80, no.5, 384 (2020).Flores F. Canfora, D. Flores-Alfonso, M. Lagos and A. Vera,Phys. Rev. D 104, no.12, 125002 (2021).Pais F. Canfora, M. Lagos, P. Pais and A. Vera,Phys. Rev. D 108, no.11, 114027 (2023).
http://arxiv.org/abs/2312.16131v1
{ "authors": [ "Gonzalo Barriga", "Matías Torres", "Aldo Vera" ], "categories": [ "hep-th", "nucl-th" ], "primary_category": "hep-th", "published": "20231226173402", "title": "Exact modulated hadronic tubes and layers at finite volume in a cloud of $π$ and $ω$ mesons" }
Quantum reservoir computing (QRC) has been proposed as a paradigm for performing machine learning with quantum processors where the training is efficient in the number of required runs of the quantum processor and takes place entirely in the classical domain, avoiding the issue of barren plateaus in parameterized-circuit quantum neural networks. It is very natural to consider using a quantum processor based on microwave-frequency superconducting circuits to classify microwave signals that are analog—continuous in time. However, while theoretical proposals of analog QRC exist, to date QRC has been implemented using circuit-model quantum systems—artificially imposing a discretization of the incoming signal in time, with each discrete time point input by executing a gate operation. In this paper we show how a quantum superconducting circuit comprising a linear oscillator coupled to a single qubit can be used as an analog quantum reservoir for a variety of classification tasks, achieving high accuracy on all of them. Our quantum system was operated without artificially discretizing the input data, directly taking in microwave signals (centered at ∼6 GHz). Our work does not attempt to address the question of whether or when QRCs could provide a quantum computational advantage in classifying pre-recorded classical signals. However, beyond illustrating that sophisticated tasks can be performed with a very modest-size quantum system and inexpensive training, our work opens up the possibility of achieving a different kind of quantum advantage than a purely computational advantage: superconducting circuits can act as extremely sensitive detectors of microwave photons; our work demonstrates processing of ultra-low-power microwave signals in our superconducting circuit, and by combining sensitive detection with QRC processing within the same system, one could achieve a quantum sensing-computational advantage, i.e., an advantage in the overall detection and analysis of microwave signals comprising just a few photons. § INTRODUCTION Over the last decade, researchers in quantum information processing have broadly divided their efforts into two distinct but complementary themes. In one, the focus has been on realizing the building blocks for large-scale, fault-tolerant quantum processors <cit.>, which would enable running algorithms such as Shor's or Grover's at meaningful scale. In another, there has been a push to realize quantum systems comprising tens to hundreds of qubits or qumodes, but without error correction, and to explore what can be done with such noisy, pre-fault-tolerance systems—often denoted as noisy, intermediate-scale, quantum (NISQ) devices <cit.>. Quantum computational supremacy with such NISQ devices has been demonstrated <cit.>, but there has been much less progress on achieving quantum advantage in practically relevant applications than had been hoped for as NISQ machines began to be created <cit.>. There have been many NISQ studies on quantum machine learning <cit.>, and in this area too, quantum advantage for problems of broad practical interest has remained elusive <cit.>.
A major open question is whether one can achieve any practically relevant advantage for machine learning with NISQ systems.One of the main approaches to performing quantum machine learning with NISQ machines is to use parameterized quantum circuits as quantum neural networks <cit.>, which are a subclass of variational quantum algorithms, in which parameters of a quantum circuit are adjusted, usually by a classical co-processor, so that the quantum circuit incrementally approaches carrying out a desired computation. This approach, however, typically suffers from barren plateaus <cit.>, which mean that, in practice, it is difficult or impossible to perform the optimization required to set circuit parameters <cit.>. Inspired by the framework of reservoir computing <cit.> in classical machine learning, quantum reservoir computing (QRC) <cit.> has emerged as an approach to quantum machine learning that entirely avoids barren plateaus by performing all the learning in the classical domain. The key idea of a QRC is that a quantum system (called a quantum reservoir) can generate nonlinear, high-dimensional features of inputs to it, and that these features can be used to perform machine-learning tasks purely by training a classical linear transformation. QRC can be implemented both in the circuit model of quantum computation <cit.> and with analog quantum dynamical systems <cit.>. However, experimental demonstrations to date have been performed with digital quantum circuits <cit.>, which have limited the complexity of tasks that can be performed, in part due to an input bottleneck imposed by the need to input temporal data through a series of separate, imperfect gate operations.The aim of our work is to demonstrate a proof-of-principle for a new application of and approach to quantum machine learning with NISQ devices that overcomes or sidesteps the challenges in training and inputs noted above. We use the driven, continuous-time analog quantum nonlinear dynamics of a superconducting microwave circuit as a quantum reservoir to generate features for classifying weak, analog microwave signals (Fig. <ref>a). We use repeated measurements of the reservoir both to extract features that contain information about temporal correlations in the input data, as well as to induce non-unitary dynamics. Our use of a continuous-variable system in our quantum reservoir grants us access to a substantially larger Hilbert space than would be the case with a qubit-only system with equally many hardware components. In relying on continuous-time dynamics, our approach is similar to other proposals for analog NISQ processors and simulators <cit.>, which aim to avoid the overhead caused by imposing a discrete-time (circuit-model, gate-based) abstraction. Analog operation, however, grants us an even more important ability, one that fundamentally distinguishes our work from prior experimental demonstrations of quantum machine learning on circuit-model quantum processors: it allows our device to directly, natively receive weak analog microwave signals, and to immediately leverage analog quantum information processing to extract relevant features of the signals for classification.This small shift in context has important implications, offering a new path to practical quantum advantage with NISQ hardware.
Rather than focusing on using NISQ hardware to perform computation on pre-recorded, digital data, we instead use quantum hardware to perform computation on real-time analog signals that interface directly with our microwave superconducting device. Our experiments do not address the question of whether a QRC can achieve a quantum computational advantage, since our experimental device is small enough to be easily classically simulable. However, our demonstrations suggest a route to achieving a quantum advantage of a different kind: an advantage in the quantum detection and processing of weak microwave signals, allowing quantum hardware to extract complex information of interest from dim, analog signals in ways that would be noisier with a conventional classical approach. This type of quantum advantage, arising from a combination of quantum sensing with extraction of complex features about the sensed signal, is discussed in general terms as a route to quantum advantage with quantum machine learning in Ref. <cit.>. Our work shows that when classical signals comprising just a few photons have entered an analog quantum reservoir, they can be classified using our QRC approach. If one combines this analog quantum processing with a sensitive quantum detector of microwave radiation, as has already been demonstrated using superconducting circuits <cit.>, then one can construct a system that achieves a quantum advantage in the task of combined sensing and signal processing.

§.§ Experimental setup and protocols

Our quantum reservoir, composed of a long-lived cavity mode coupled to a transmon qubit (Fig. <ref>b), can be modeled with the rotating-frame Hamiltonian
H = -χ a^†a σ_z/2 + ϵ^*(t) a + ϵ(t) a^† + Ω_x(t) σ_x + Ω_y(t) σ_y,
where σ_z is the Pauli operator on the qubit subspace of the transmon, a is the photon annihilation operator of the cavity mode, and χ is the nonlinear interaction strength (see Appendix <ref> for details). The two right-most terms of Eq. <ref> describe the unitary control of the qubit, and the drive terms ϵ^*(t) a + ϵ(t) a^† describe both the encoding of the input data ϵ_in(t) and the unitary control of the oscillator mode, i.e., ϵ(t) = ϵ_in(t) + ϵ_control(t). Equation <ref> describes the unitary dynamics, which is complemented by non-unitary dynamics generated by the back-action from qubit measurements interspersed throughout the evolution.
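As a concrete illustration, the Hamiltonian of Eq. <ref> can be assembled in a few lines with the QuTiP library. The sketch below is ours, not the experimental control code; the Fock truncation, coupling strength, and constant drive envelope are illustrative assumptions.

import numpy as np
import qutip as qt

N = 30                                    # Fock-space truncation (assumption)
chi = 2 * np.pi * 1.0                     # dispersive shift, rad/us (illustrative)

a  = qt.tensor(qt.destroy(N), qt.qeye(2))         # cavity annihilation operator
sz = qt.tensor(qt.qeye(N), qt.sigmaz())           # qubit Pauli operators
sx = qt.tensor(qt.qeye(N), qt.sigmax())
sy = qt.tensor(qt.qeye(N), qt.sigmay())

def eps(t, args):
    return args["eps0"]                   # placeholder drive: eps_in + eps_control

H = [
    -chi / 2 * a.dag() * a * sz,                          # -chi a†a sigma_z / 2
    [a,       lambda t, args: np.conj(eps(t, args))],     # eps*(t) a
    [a.dag(), lambda t, args: eps(t, args)],              # eps(t) a†
    [sx,      lambda t, args: 0.0],                       # Omega_x(t), idle here
    [sy,      lambda t, args: 0.0],                       # Omega_y(t), idle here
]

psi0 = qt.tensor(qt.basis(N, 0), qt.basis(2, 0))          # vacuum, qubit in |g>
tlist = np.linspace(0.0, 1.0, 201)                        # us
res = qt.sesolve(H, psi0, tlist, e_ops=[a.dag() * a], args={"eps0": 0.5})
print("final mean photon number:", res.expect[0][-1])

This reproduces only the unitary part of the dynamics; the measurement back-action discussed above is not included.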
The oscillator and qubit control drives used in our work realize a reservoir that consists of a series of entangling unitaries interleaved with qubit and oscillator measurements (Fig. <ref>c). The analog input results in a time-varying displacement of the cavity, which streams in concurrently with control drives implementing an entangling unitary. Following the unitary, we perform a qubit measurement, and then the parity of the oscillator state is measured <cit.> (see Appendix <ref>). The parity measurement projects the oscillator state into either even or odd superpositions of Fock states, giving us sensitive information about photon-number changes of the oscillator while inducing non-classical features in the state via measurement back-action. In effect, our construction implements a sequence of non-commuting measurements (see Appendix <ref>), generating correlated measurement distributions that can then be used as complex output features. The measurement outcomes are used to construct output feature vectors to be fed into the linear layer (Fig. <ref>a), but this can be done in a few different ways. In principle, with repeated applications of the unitary, we generate a sample of bitstrings with 2^M possible outcomes, where M is the number of measurements. The outcomes can be counted to directly form a sample probability distribution over measurement trajectories, which can then be used as a high-dimensional output feature vector after obtaining a sufficient number of samples N. While this approach has the benefit of capturing all information in the measurement distribution <cit.>, it can generally suffer from poor scaling in sampling noise, requiring N ∼ 2^M shots in the worst case <cit.>. On the other hand, one could average over the measurements directly <cit.>; however, this has the unwanted effect of averaging over and removing quantum correlations. Here, we construct an output feature vector from estimates of successive central moments μ_1, μ_2, μ_3, … of the underlying distribution over measurement trajectories (Fig. <ref>d). Additionally, given the finite memory of our reservoir, we choose to use only correlations between measurements at most 3 measurements apart. This approach, inspired by Ref. <cit.>, has the benefit of leveraging the hierarchy of noise in the central moments while capturing the essential correlations in the dynamics, achieving high accuracy even in the few-sample regime. For a detailed analysis of the construction of our reservoir output features, with comparisons, see Appendix <ref>.
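To make the construction concrete, the sketch below shows one way such a truncated central-moment feature vector could be assembled from raw measurement bitstrings. The array shapes, the cutoff d_max, and the loop-based construction are our illustrative choices, not the authors' analysis code.

import numpy as np

def moment_features(x, d_max=3):
    # x: (n_shots, M) array of binary measurement outcomes.
    # Returns the mean plus the 2nd and 3rd central moments, keeping only
    # correlators between measurements at most d_max apart.
    n_shots, M = x.shape
    dx = x - x.mean(axis=0)                       # centered outcomes
    feats = [x.mean(axis=0)]                      # first moment, mu_1
    for i in range(M):                            # second central moments
        for j in range(i, min(i + d_max + 1, M)):
            feats.append([(dx[:, i] * dx[:, j]).mean()])
    for i in range(M):                            # third central moments
        for j in range(i, min(i + d_max + 1, M)):
            for k in range(j, min(i + d_max + 1, M)):
                feats.append([(dx[:, i] * dx[:, j] * dx[:, k]).mean()])
    return np.concatenate(feats)

x = (np.random.default_rng(0).random((1000, 8)) < 0.4).astype(float)
print(moment_features(x).shape)                   # far smaller than the 2^8 histogram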
§ RESULTS

§.§ Classification of time-independent signals

To illustrate the scheme proposed in this work, we begin with a binary classification task on time-independent signals. Figure <ref>a describes the control drives in more detail. For time-independent input data, the two-dimensional input is encoded as the I and Q quadratures of an analog signal resonant with the oscillator frequency, which displaces the cavity. Here, time-independence refers to the fact that the signal is on resonance with the oscillator mode and thus has no time dependence in the oscillator's rotating frame (such that ϵ_in(t) = ϵ_in in Eq. <ref>). For time-independent tasks, the signal bandwidth is set by its duration, and the resultant displacement is therefore effectively conditioned on the qubit being in the ground state due to the cross-Kerr interaction. The unitary encoding the input displacement is complemented by control drives that entangle the qubit and cavity via a series of conditional displacements <cit.> and qubit rotations. For time-independent tasks, this set of unitaries effectively imparts the geometric area enclosed by the cavity trajectory onto the qubit, such that information about the phase of the unknown input signal can be extracted via a qubit measurement (see Appendix <ref> for details of this unitary). In Appendix <ref>, we show that the set of unitaries implemented here (Fig. <ref>a) can approximate any scalar function of the input signal when the signal is time-independent. We implement our reservoir unitary with these control drives across all tasks, with 4 applications of the unitary interleaved with qubit and oscillator-parity measurements. The binary classification task we perform here is as follows: two distributions of time-independent signals, completely characterized by the signal's in-phase (I) and quadrature (Q) components, are distributed along two separate "arms of a spiral" in the I-Q plane (Fig. <ref>b). Given a displacement described by a point (I, Q) sampled from either signal distribution, the task is to determine which distribution the signal came from. This simple task has the feature that, if one feeds the inputs directly into a linear layer, the classification accuracy is no more than 67%, just above the 50% of random guessing (Fig. <ref>b). As a point of comparison with non-linear digital networks, we found that a 64-dimensional, two-layer digital reservoir was needed to achieve the same performance as our quantum reservoir on this task (see Appendix <ref> for details of this comparison). To probe the role of quantumness in our reservoir, we performed the same classification task but with a reduced qubit coherence time during the reservoir execution. This is achieved by populating the lossy readout resonator with photons, which send the qubit to the center of the Bloch sphere when the readout resonator is traced out (see Appendix <ref>). With T_2 → 0, we effectively removed all entanglement with the cavity and observed two things: first, a dramatic reduction in classification performance; and second, importantly, that T_2 only began affecting the performance once it was on the order of the reservoir duration, after which the qubit was projected. This latter point highlights an important benefit of repeated measurements in our reservoir construction: while entanglement is important for generating complex distributions in our setup, we are able to classify and capture information over timescales much longer than the qubit decoherence time, requiring only that the oscillator state remains coherent at long times.

§.§ Classification of radio-frequency (RF) communication modulation schemes

We showcase our reservoir in a real-world setting by discriminating time-dependent radio-frequency (RF) signals from 10 different digital modulation schemes. Digital modulation schemes encode binary information in discrete `symbols' occupying sequential time bins. For example, Binary Phase-Shift Keying (BPSK) encodes binary data in discrete phase jumps of a signal, such that a symbol 0 (1) maps to a phase flip of 0 (π). While BPSK only contains one bit of information per symbol, other encoding schemes such as 32 Quadrature Amplitude Modulation (32QAM) can encode 5 bits per symbol. These and other encodings can be represented in a constellation diagram (Fig. <ref>a), which denotes the potential (I, Q) values a signal can take for each symbol. A given string of digital data can then be encoded in a time-domain signal by sequentially choosing points in the constellation diagram at a given symbol rate, denoting the rate at which the symbol changes. For typical WiFi signals this is around 250 kHz per subchannel <cit.>. For this task, we generated RF signals by encoding random digital strings into the 10 different modulation schemes with a fixed symbol rate of around 2 symbols per μs. These signals can last much longer than the reset period of our system. Importantly, we did not repeat the same signal to artificially reduce the sampling noise associated with each input, as this would not typically be applicable in a real-world setting. Instead, the measurement statistics were generated by sampling the signal in real time. Consequently, what we refer to as `shots' in a real-time task do not correspond to identical repetitions of the experiment but instead count the number of resets we performed while acquiring the signal, which changed from shot to shot.
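To make the encoding concrete, a minimal sketch of how such digitally modulated baseband signals can be generated is shown below. The constellations, symbol count, and sampling are illustrative assumptions, not the exact dataset-generation code used for the experiment.

import numpy as np

rng = np.random.default_rng(1)

# Standard example constellations; the experiment's exact set of 10 schemes
# is not reproduced here.
constellations = {
    "BPSK": np.array([1, -1], dtype=complex),
    "QPSK": np.exp(1j * np.pi / 4) * np.array([1, 1j, -1, -1j]),
    "16QAM": np.array([x + 1j * y for x in (-3, -1, 1, 3)
                       for y in (-3, -1, 1, 3)]) / 3,
}

def modulated_baseband(scheme, n_symbols, samples_per_symbol):
    # Random symbol string -> complex baseband (I + iQ) waveform.
    symbols = rng.choice(constellations[scheme], size=n_symbols)
    return np.repeat(symbols, samples_per_symbol)

# ~2 symbols per microsecond at 1 ns sampling, as quoted in the text
sig = modulated_baseband("QPSK", n_symbols=2000, samples_per_symbol=500)
print(sig.shape, sig[:3])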
In effect, each encoding scheme produces a unique "fingerprint" distribution over measurement outcomes, and the goal of the linear layer is to separate these distributions with as high accuracy as possible. Figure <ref>c shows the accuracy in classifying digitally modulated RF signals with an increasing number of shots, compared with the performance of a linear classifier. We note that in less than a millisecond, or with fewer than 2000 symbols, the reservoir was able to classify which of the 10 classes a given signal belongs to with > 90% accuracy when using 8 qubit-cavity measurements. A linear classifier can only achieve 20% classification accuracy on this task, even with infinite symbols. The confusion matrix between the different classes at 32, 512, and 10^4 shots, displayed in Fig. <ref>d, is nearly diagonal.

§.§ Classification of filtered noise

Next, to demonstrate the performance of our QRC on continuous-time data[The previous time-dependent task, RF-modulation-scheme classification, concerns discrete-time data.], and with a task that requires both long-term and short-term memory in the quantum reservoir, we performed the following classification task: input data assumed to have come from a source of white noise is filtered using a moving-average filter having one of three filter shapes (Gaussian, Lorentzian, and inverse-power-law) and one of two window widths (50 ns and 600 ns), and the task is to identify both the filter shape and the window width (Fig. <ref>a), leading to six possible output classes. The filter functions were normalized so that the photon-number distributions generated by the time-dependent displacements are identical up to the filter width. This normalization was applied to ensure that the task is not trivially solvable by just measuring the mean photon number (see Appendix <ref>). Because all the signals used in this dataset are noise with zero mean, a linear classifier would do no better than random guessing. On the other hand, Figure <ref>b visually shows (using a singular-value decomposition of the output feature space) that the quantum reservoir was able to peel apart the different noise distributions. In this space, we see that the different classes are nearly all linearly separable, with some overlap between the long-tailed but fast 50-ns inverse-power-law noise class and the slow 600-ns Gaussian noise class. On the task of classifying the six different sources of noise, we achieved 93% accuracy (Fig. <ref>c) in only 2000 shots. As seen in the confusion matrix in Fig. <ref>d, the primary confusion at 2000 shots was distinguishing between the 50-ns inverse-power-law noise class and the 600-ns Gaussian noise class, as expected from the overlap in the SVD of the feature space. Finally, we compared the ability of our reservoir to capture long versus short correlations in input signals. For this, we deconstructed the full 6-class task into two sets of 3-class classification tasks, where each set has the same correlation length and is distinguishable only by the filter window type (see Fig. <ref>d and e). The class of signals with a coherence length of 50 ns highlights the convenience of our input encoding scheme, i.e., feeding signals directly into the cavity mode without the need to sample the signal discretely in time. In contrast, classification of the class of signals with coherence lengths of 600 ns requires correlations in the reservoir dynamics on timescales beyond the measurement rate.
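A minimal sketch of how such filtered-noise inputs can be produced classically is given below; the filter definitions, the power-law exponent, and the equal-energy normalization are our assumptions, standing in for the photon-number-distribution normalization described above.

import numpy as np

rng = np.random.default_rng(2)
dt = 1.0            # ns per sample (assumption)
T = 20000           # total samples per signal

def window(shape, width, t):
    if shape == "gaussian":
        w = np.exp(-0.5 * (t / width) ** 2)
    elif shape == "lorentzian":
        w = 1.0 / (1.0 + (t / width) ** 2)
    else:  # inverse power law; the exponent is illustrative
        w = 1.0 / (1.0 + np.abs(t) / width) ** 1.5
    return w / np.sqrt(np.sum(w ** 2))      # equal filter energy across classes

def filtered_noise(shape, width_ns):
    t = np.arange(-5 * width_ns, 5 * width_ns + dt, dt)
    white = rng.normal(size=T) + 1j * rng.normal(size=T)
    return np.convolve(white, window(shape, width_ns, t), mode="same")

for shape in ("gaussian", "lorentzian", "powerlaw"):
    for width in (50, 600):
        sig = filtered_noise(shape, width)
        print(shape, width, np.round(np.var(sig), 3))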
To highlight the advantage of our scheme, we compare in simulation the performance of our reservoir with that of a recent gate-based protocol in which the input is sampled discretely in time <cit.>. Our simulation results, in Appendix <ref>, highlight the advantage of our protocol when the sampling rate of the input is slow, as can arise in experiment due to, e.g., finite pulse durations and latency introduced by classical FPGA processing. Figure <ref>e examines the contribution of the different moments μ_k of the measurements to the classification accuracy of the 50-ns subtask (top) and the 600-ns subtask (bottom). Here, the output features are constructed from the mean μ_1, or from the off-diagonal elements of the moments μ_2 and μ_3 as a function of the Hamming distance, allowing us to probe the contribution of the moments as a function of the locality of the correlations. For the 50-ns subtask, we see that the most important contribution is the mean, with the second-order moment being the next-most important contribution and the third-order moment being relatively unimportant. In stark contrast, the third-order moment is the most important for the 600-ns subtask, surprisingly yielding nearly 90% accuracy using non-local third-order correlations alone. The ability to distinguish stochastic signals among the combined six classes demonstrates the ability of our reservoir to capture both slow and fast features of microwave signals. To understand the role of the Hilbert-space dimension in the performance of the reservoir, in Appendix <ref> we simulate an extension of our quantum reservoir with multiple qubits. Our results point to an increase in classification accuracy with each additional qubit included in the reservoir, for the same duration of input signal received.

§ DISCUSSION

In summary, we have experimentally realized an analog quantum reservoir computer (QRC) and demonstrated its ability to directly process analog microwave input signals without discretization, achieving high classification accuracy on three different tasks. Previous demonstrations of quantum reservoir computing have used multi-qubit, gate-based quantum reservoirs <cit.>. In contrast, we perform machine learning directly on analog signals fed into a single oscillator coupled to a transmon qubit. Intuitively, an analog (continuous-time, partially continuous-variable) quantum reservoir should be well matched to processing microwave signals that may be continuous in time as well as in amplitude. In addition to demonstrating accurate classification of microwave signals in our experiments, we also performed a direct comparison with a state-of-the-art discrete-time, gate-based QRC approach in simulation, and found that a continuous-time reservoir outperforms a discrete-time reservoir when the input signals contain temporal variations that are fast relative to the discretization time. While our quantum reservoir has only two constituents (an oscillator and a qubit), we are nevertheless able to construct high-dimensional output features from the reservoir—which is essential in reservoir computing <cit.>—by performing multiple (M) projective measurements of the qubit during the dynamics between resets of the reservoir. We have proposed and demonstrated using central moments to construct output feature vectors from correlations between the measurement results. This approach has two key benefits.
First, it allows one to control the dimension of the feature vector through the choice of the maximum order of correlators to include—this matters because, while high dimensionality is desirable, it is also possible for the dimensionality to be too high[Two examples of disadvantages of the feature-vector dimensionality being too high are: the classical post-processing and linear-layer computations may become overly costly, and the required number of shots may become too large.]. This is in contrast to, for example, constructing a feature vector from the histogram of all 2^M possible bitstring outcomes of performing M qubit measurements—in which case the feature vector has fixed size 2^M. Second, central moments provide a natural way to extract non-trivial correlations in the measurement results, which is best explained with an example: a correlation ⟨x_1 x_2⟩ may be dominated by the product ⟨x_1⟩⟨x_2⟩, and we use the approach of central moments to subtract this trivial component. We performed experiments that compared the central-moment-correlators feature-vector construction with the histogram feature-vector construction and found that the former approach yielded better accuracy. For any quantum neural network, including QRC approaches, a central concern is to what extent one can achieve high accuracy on a particular task without needing an impractical number of shots <cit.>. Ref. <cit.> recently reported that certain functions—termed eigentasks—can be constructed with low error from quantum reservoirs even when the number of shots is modest, giving evidence that for some tasks, sampling noise need not be overwhelming. In our experiments, we found that it was possible to achieve high accuracy for all the tasks we attempted while needing only 10^3–10^4 shots (depending on the task). There is important future work to be done in exploring the tradeoffs between reservoir size (e.g., number of oscillators or qubits), the number of measurements M between reservoir resets, the feature-vector dimension (dependent both on M and the choice of the order of correlators to include), and the number of shots required for both training and inference. Because in our construction the feature-vector dimension can be adjusted without changing M, it is possible to, for example, explore the impact of feature-vector dimension and content on task accuracy while using the same number of measurements and a fixed number of shots. Our quantum reservoir is small enough that it is easy to simulate classically, so it does not—at its present size of just one cavity and one qubit, at least—provide a quantum computational advantage. We nevertheless performed two studies to try to understand what role quantumness plays in our reservoir in achieving the classification accuracies that we experimentally observed. First, we showed that by artificially decreasing the coherence time of the qubit through injection of noise, the classification accuracy decreased. Second, we performed simulations of our QRC with a classicalized model of the quantum reservoir, in which no entanglement could be present, and found that this classicalized simulation of our QRC achieved worse accuracy than our quantum experimental results. These studies provide strong evidence that quantumness plays an important role in the operation of our quantum reservoir. With improved quantum hardware, we anticipate that it will be possible to carry out even more sophisticated tasks than what we have already demonstrated.
Increasing the coherence time of the oscillator would enable us to perform many more measurements (the qubit's coherence time is, favorably, less important in our scheme because our protocol involves repeatedly projectively measuring the qubit). While we showed analytically in Appendix <ref> that our QRC can approximate any scalar function of the input signal when the signal is time-independent, provided the number of measurements M performed is large enough, the expressiveness of the QRC for time-dependent input signals remains an open theoretical question. Extending the qubit-oscillator system to have multiple qubits and/or multiple oscillators would provide a larger Hilbert space and the potential for more complex dynamics and entanglement, which should in turn support more sophisticated computations. It is an open question whether QRC—using the type of reservoir we considered in this paper, or any other—can, when implemented with NISQ hardware, achieve a quantum computational advantage over the best classical machine-learning approaches, just as it is unclear whether any quantum-machine-learning method can <cit.>. We did not investigate the potential for a purely computational quantum advantage: our quantum reservoir is small enough to be easily classically simulable, and we did not vary its size in experiment to systematically study scaling. In the setting of processing prerecorded signals (which can be copied and replayed with negligible added noise), our single-oscillator, single-qubit QRC would offer no computational advantage over the best classical algorithms running on classical digital computers. However, our work opens up the possibility of experimentally achieving a different type of quantum advantage than a purely computational one. If one performs quantum processing on data obtained by a quantum sensor, there is the potential for an advantage that is a hybrid of the advantages of quantum sensing and of quantum computing <cit.>. Our work suggests the feasibility of concretely realizing this kind of hybrid quantum sensing-computational advantage, where the quantum sensor is a superconducting circuit that can detect classical microwave radiation with high quantum efficiency and low noise <cit.>, and the processing of the received signal can happen within the same superconducting circuit where the detection occurred. Our experiments have shown that it is possible to accurately classify signals using a superconducting circuit even when there are only a few photons of signal in the superconducting circuit within any single run. Combining this with a sensitive quantum detector could lead to quantum smart sensors—quantum versions of classical in-sensor processors <cit.>—that can reliably extract information from weak microwave signals in a way that exceeds the accuracy of any equivalent classical system. Note added: During the final stages of our work, we became aware of a related effort, Ref. <cit.>, and we coordinated to release our papers simultaneously. Ref. <cit.> introduces a protocol for quantum reservoir computing with temporal data. Similar to theirs, our approach also uses mid-circuit measurements. We experimentally realized our reservoir with an analog quantum system, in contrast to their implementation, which was with a discrete-time, gate-based quantum system.

§ DATA AND CODE AVAILABILITY

All data generated and code used in this work are available at: <https://doi.org/10.5281/zenodo.10432778>

§ AUTHOR CONTRIBUTIONS

A.S.
designed and carried out the hardware experiments and performed the data analysis. S.P. performed the numerical simulations of the quantum system and helped to optimize the experimental protocol, with early contributions from J.K. V.K. performed the numerical simulations of the classicalized quantum system and performed the comparisons with classical machine-learning methods. V.F. oversaw the design and creation of the superconducting device by S.R. and others. A.S. and V.F. set up the cryogenic and microwave apparatus. S.R. calibrated the superconducting device with A.S. and V.F. Y.C. and X.W. performed the theoretical analysis of the expressivity in Appendix <ref>. T.O., L.G.W., and P.L.M. conceived the project, and T.O. and J.K. performed initial numerical simulations to validate the concept. A.S., S.P., and P.L.M. wrote the manuscript with input from all authors. P.L.M. supervised the project.

§ ACKNOWLEDGEMENTS

The authors would like to thank Hakan Türeci, Shyam Shankar, Saeed Khan, Haohai Shi, William Banner, Shiyuan Ma, and Maxwell Anderson for helpful discussions and comments. The authors would also like to thank Bradley Cole, Clayton Larson, Britton Plourde, Eric Yelton, and Luojia Zhang for the fabrication of the transmon and on-chip resonator, Chris Wang for the design of the transmon, the on-chip resonator, and the 3D superconducting cavity (using pyEPR <cit.>), and Nord Quantique for the fabrication of the 3D superconducting cavity. We gratefully acknowledge MIT Lincoln Laboratory for supplying the Josephson traveling-wave parametric amplifier (TWPA) used in our experiments. This paper is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-22-1-0203. We gratefully acknowledge a DURIP award with AFOSR award number FA9550-22-1-0080 for equipment used in this work. The authors wish to thank NTT Research for their financial and technical support. P.L.M. acknowledges membership in the CIFAR Quantum Information Science Program as an Azrieli Global Scholar. Y.C. and X.W. were supported by the Air Force Office of Scientific Research under Grant No. FA9550211005, NSF CCF-1942837 (CAREER), and a Sloan Research Fellowship.

§ SUMMARY OF METHODS

§.§ Reservoir unitary

To design a good reservoir computer capable of performing machine learning on a variety of tasks, one needs to implement control drives that can efficiently capture important information about the input and perform a non-trivial, non-linear map to output features. Here, our reservoir is composed of alternating unitaries and measurements. The design of the former is motivated by harnessing the quantum properties of the dynamical system to generate entanglement, and the design of the latter by generating non-linear operations on the state of our reservoir via measurement back-action. Here we summarize the control drives and measurements we use and their effect on the reservoir dynamics, in the context of both time-dependent and time-independent signals. For time-independent signals, the unitary implemented in our reservoir (see Fig.
<ref>b) can be approximated by the following set of unitaries (see Appendix <ref>):
U_1 = X_π/2
U_2 = D(α)|g⟩⟨e| + D(-α)|e⟩⟨g|   (CNOD)
U_3 = D(β)|g⟩⟨g| + |e⟩⟨e|   (Input)
U_4 = X_π
U_5 = U_3 = D(β)|g⟩⟨g| + |e⟩⟨e|   (Input)
U_6 = D(-α)|g⟩⟨e| + D(α)|e⟩⟨g|   (CNOD)
U_7 = Y_π/2.
This combination of unitaries encloses a loop in the oscillator's phase space. The area of this closed loop, which depends on the phase of the unknown displacement β, imparts a geometric phase onto the qubit. In this work, we perform this unitary directly after a qubit measurement without reset. The action of the combined unitary on the qubit prepared in the ground or excited state, and for an arbitrary cavity state, is
U|g⟩ = U_7 U_6 U_5 U_4 U_3 U_2 U_1 |g⟩ = 1/√(2) D(β) [i sin(A - π/4)|g⟩ + cos(A - π/4)|e⟩] ⊗ |cavity⟩
U|e⟩ = 1/√(2) D(β) [i cos(A - π/4)|g⟩ + sin(A - π/4)|e⟩] ⊗ |cavity⟩,
where A = 2|α||β| sin(δ) = i(αβ^* - α^*β) is the geometric phase enclosed by the oscillator trajectory, which depends on the phase difference δ between a known displacement α and the unknown displacement β. The probability P_e|g of measuring the qubit in the excited state given that it started in the ground state, and the probability P_e|e of measuring it excited given that it started in the excited state, are given by
P_e|g = cos^2(A - π/4)
P_e|e = sin^2(A - π/4).
These equations relate the qubit probabilities to the phase of the input displacement, which is otherwise challenging to extract in a setup with only qubit measurements. For general time-dependent signals, the closed loop formed by Eqs. <ref>-<ref> is broken, and the system is entangled before the measurement. While this can be hard to study analytically in the general case, we consider a special case of time-dependent signals, namely those of Fig. <ref>. Here, the signal changes only once, at half its duration, so that it is effectively two time-independent signals combined. As a result, Eqs. <ref> and <ref> are no longer equal, but each is still a time-independent displacement, and thus the effects of the cross-Kerr interaction, as discussed in Appendix <ref>, do not hinder the interpretation of the effective gate-based model. For such input signals, the state of the system just before measurement is
|ψ⟩ = 1/2 [e^iA_i D(β_i) + e^-iA_j D(β_j)] ⊗ |g, cavity⟩ + 1/2 [e^iA_i D(β_i) - e^-iA_j D(β_j)] ⊗ |e, cavity⟩,
where β_i is the displacement just before the qubit flip (corresponding to Eq. <ref> for this time-dependent set of tasks) and β_j is the displacement after (Eq. <ref>). A_i = αβ_i is the phase acquired after two non-orthogonal displacements. When β_i = β_j we recover the dynamics for time-independent signals.

§.§ Repeated measurements

The unitaries described above are followed by a qubit measurement and then a parity measurement. For time-independent signals, the qubit and cavity are disentangled at the end of the unitary, and the effect of the unitary on the cavity is just a displacement. Thus we can ignore any effects of the qubit measurement on the cavity. The state of the cavity after M repeated measurements and M time-independent displacements can be effectively described as
|cavity⟩ = … P_p_4 D(β) P_p_3 D(β) P_p_2 D(β) P_p_1 D(β) |0⟩,
where P_p_n is the projector of the nth parity measurement Π with measurement outcomes p_n ∈ {+,-}.
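The alternating displacements and parity projections in the expression above are straightforward to sample numerically. The following QuTiP sketch, with an illustrative displacement amplitude and a Fock cutoff of our choosing, draws one measurement trajectory by projecting with P_± according to the Born probabilities.

import numpy as np
import qutip as qt

N = 60                                # Fock cutoff (assumption; keep >> |k*beta|^2)
beta = 0.3 + 0.2j                     # per-round input displacement (illustrative)
k = 4                                 # number of displacement/parity rounds

parity = (1j * np.pi * qt.num(N)).expm()       # (-1)^{a†a}
P_plus = (qt.qeye(N) + parity) / 2             # even-parity projector
P_minus = (qt.qeye(N) - parity) / 2
D = qt.displace(N, beta)

rng = np.random.default_rng(0)
psi = qt.basis(N, 0)                  # cavity starts in vacuum
outcomes = []
for _ in range(k):
    psi = D * psi                     # displacement by the input
    p_plus = qt.expect(P_plus, psi)   # Born probability of the '+' outcome
    if rng.random() < p_plus:
        psi, out = (P_plus * psi).unit(), +1
    else:
        psi, out = (P_minus * psi).unit(), -1
    outcomes.append(out)
print(outcomes)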
In Appendix <ref>, we show that by sampling the parity measurements alone, combined with the linear layer, we can realize (though we are not limited to) the following vector space of functions:
ℋ_parity := { c_0 + c_1 e^-2|β|^2 + c_2 (e^-2|β|^2)^2 + ⋯ + c_k (e^-2|β|^2)^k : c_0, c_1, …, c_k ∈ ℝ }.

§.§ Output feature encoding & the linear layer

In reservoir computing, the outputs of a reservoir, called feature vectors, are sent to a trained linear layer. Here, we briefly outline the motivation behind and construction of the feature vectors, and the training algorithms used in this manuscript. In general, sampling over all possible measurement-trajectory outcomes and generating a probability distribution contains all the information one can extract from a quantum system. However, not all the information plays an equal role for finite samples. Thus, for our work here, we use a physically motivated output feature vector that efficiently captures the relevant information for a linear layer. The output feature vectors for our reservoir are generated from computed correlations of measurement outcomes. The p-th order correlations are characterized by the p-th central moment μ_p of the underlying distribution of measurement trajectories. The elements of μ_p are
(μ_p)_ijkl… = 1/N_shots ∑_n^N_shots (x_ni - ⟨x_i⟩)(x_nj - ⟨x_j⟩)(x_nk - ⟨x_k⟩)(x_nl - ⟨x_l⟩)…,
where x_ni is the nth repeated measurement outcome of observable x_i for a total of N_shots repetitions, and ⟨…⟩ is the expectation value taken over repetitions. For the results presented in the main text, we use only up to third-order correlations. Additionally, due to the finite memory present in our reservoir, we only keep correlations between nearest, next-nearest, and next-next-nearest measurements. See Appendix <ref> for details and the motivation behind this choice. For machine learning with reservoir computing, the only component of the reservoir that is trained is a linear layer applied to the above feature vectors. The linear layer is an R × C matrix W_train applied to the R-dimensional feature vector x, together with a C-dimensional bias vector v_train:
y = W_train x + v_train.
Here C is equal to the number of classes in the dataset. The largest element of y corresponds to the class that the reservoir predicts the given input data point x belongs to. To train the weight matrix W_train, we either use a pseudo-inverse method to minimize the mean squared error (MSE) between W_train x and y, or backpropagation to minimize the MSE after a softmax function. Both methods are described in more detail in Appendix <ref>. In the main manuscript, we present results for whichever performed best.

§ EXPERIMENTAL SETUP

The device used in this paper consists of an oscillator, a 3D stub-post cavity made from high-purity 4N aluminum treated with an acid etch, and a transmon qubit. The transmon, made of niobium, is fabricated on a high-resistivity silicon chip, along with an on-chip readout resonator also made of niobium. The single chip hosting the transmon and the readout resonator is mounted in the 3D cavity package using copper clamps. The cavity and the copper clamp contain copper films for thermalization directly to a gold-plated copper breadboard at the mixing-chamber plate of the dilution refrigerator (Fig. <ref>). The device is shielded with copper coated with Berkeley Black and with two types of magnetic shields: aluminum and Cryoperm (Fig. <ref>).
The control pulses for the qubit and the storage mode are synthesized using Zurich Instruments (ZI) HDAWG arbitrary-waveform generators, which have a baseband bandwidth of 1 GHz. These are upconverted using Rohde & Schwarz SGS100A signal generators with built-in IQ mixers. These built-in mixers are used for all frequency conversions with the exception of the readout. The readout pulses are synthesized and digitized using a ZI UHFQA, and are up-converted and down-converted using Marki mixers (MMIQ-0416LSM-2), with a split LO from a single SGS100A. Readout signals are first amplified with a Josephson traveling-wave parametric amplifier (TWPA), a quantum-limited amplifier. The TWPA typically requires large pump tones, so we gate it with a trigger line from the readout AWG, which combines with the CW pump tone in an IQ mixer (as a makeshift fast switch). The readout signals are then further amplified with a high-electron-mobility transistor (HEMT) amplifier at the 4 K stage, amplified again with a room-temperature amplifier (ZVA-1W-103+ from Mini-Circuits), and filtered. The digitizer on the ZI UHFQA converts the analog response to a digital signal and integrates it to produce a binary outcome depending on the qubit state. For the experiments that intentionally suppress the qubit T_2 via resonator-induced dephasing by pumping the readout resonator, we use an additional ZI HDAWG channel that combines with the AWG of the ZI UHFQA. This was mostly a choice of convenience, as the AWG of the ZI UHFQA has limitations that made characterizations tricky.

§ SYSTEM HAMILTONIAN & RESERVOIR DESCRIPTION

§.§ Hamiltonian description

We approximate our transmon as a qubit. Our qubit-oscillator system is well described by the Hamiltonian <cit.>:
H/ħ = ω_q q^†q + ω_a a^†a - χ q^†q a^†a - χ' q^†q a^†2 a^2 - K_q q^†2 q^2 - K a^†2 a^2 + Ω(t)(q + q^†) + ϵ(t)(a + a^†),
where a is the annihilation operator for the oscillator mode, q is the annihilation operator for the qubit mode, ω_a and ω_q are the frequencies of the oscillator and qubit modes respectively, χ and χ' are the dispersive shift and the cavity-state-dependent dispersive shift respectively, and K and K_q are the self-Kerr of the oscillator and the transmon anharmonicity respectively. The values of these parameters, as well as the decay rates, are listed in Table <ref>. For the construction of our drives, we ignore the self-Kerr of the oscillator as well as the higher-order cross-Kerr. We note that these are indeed present, but for the purposes of a quantum reservoir they only add to the complexity of the dynamics. Finally, moving to the rotating frame of the qubit and cavity mode, we arrive at the Hamiltonian in Eq. <ref>.

§.§ Reservoir description for time-independent signals

The advantage of the reservoir computing paradigm is the flexibility in the choice of dynamics. However, simple design principles, motivated by the physics of the system, can go a long way in engineering a reservoir with high expressive capacity on many tasks. In this section, we provide full details and motivations for the unitaries and measurements in this work, followed by sections outlining the characterizations of the device needed to realize the intended dynamics. The reservoir drives consist of two categories of dynamics: the unitaries and the measurements. In what follows, we first provide an analysis of the dynamics for time-independent input (e.g., the signals in Fig. <ref>).
As we will see, the unitary component of the dynamics implemented in this work strives to implement a cos^2 nonlinearity on the raw input, whereas the measurements generate non-classical features in the state and quantum correlations in the measurement trajectories via measurement back-action. Whereas measuring the quadratures of some unknown signal with a typical homodyne setup is easy, performing the same measurement of a displacement on an oscillator using only qubit measurements can be non-trivial. Of course, when designing a reservoir, one does not strive to implement the identity, but it is a good starting point; the unitary is thus implemented to approximate the identity. It consists of the input signal data, sandwiched on either side by fast conditional displacement gates implemented with CNOD <cit.> and qubit rotation gates. A broad overview of the decomposed unitary is given in terms of gates in Fig. <ref>, along with a schematic portrayal of the phase-space trajectory of the oscillator mode initialized in vacuum and subject to a time-independent drive. We begin with an idealized gate-based decomposition of our reservoir for time-independent input on resonance with the oscillator, conditioned on the qubit being in the ground state. The sequence of gates that the reservoir unitary approximates is:
U_1 = X_π/2
U_2 = D(α)|g⟩⟨e| + D(-α)|e⟩⟨g|   (CNOD)
U_3 = D(β)|g⟩⟨g| + |e⟩⟨e|   (Input)
U_4 = X_π
U_5 = U_3 = D(β)|g⟩⟨g| + |e⟩⟨e|   (Input)
U_6 = D(-α)|g⟩⟨e| + D(α)|e⟩⟨g|   (CNOD)
U_7 = Y_π/2.
Ignoring the very first unitary, after applying the sequence of unitaries U_2 through U_7 we arrive at the unitary
U_7 U_6 U_5 U_4 U_3 U_2 = i/√(2) e^(αβ^* - α^*β) D(β)(|g⟩⟨g| - |e⟩⟨g|) - i/√(2) e^(-αβ^* + α^*β) D(β)(|g⟩⟨e| + |e⟩⟨e|).
Let
|ψ⟩ = [e^-iϕ/2 cos(θ/2)|g⟩ + e^iϕ/2 sin(θ/2)|e⟩] ⊗ |cavity⟩
be some arbitrary initialized state. Then for θ = π/2, we have
U_7 U_6 U_5 U_4 U_3 U_2 |ψ⟩ = 1/√(2) D(β)[i sin(A - ϕ/2)|g⟩ + cos(A - ϕ/2)|e⟩] ⊗ |cavity⟩,
where A = 2|α||β| sin(δ) = i(αβ^* - α^*β) is the geometric phase enclosed by the oscillator trajectory, which depends on the phase difference δ between the known displacement D(α) and the unknown displacement D(β) (Fig. <ref>b). Thus, for the proper qubit state before the application of U_2…U_7, we are able to extract phase information of the displacement. We also note that the qubit and the oscillator are disentangled after the unitary, and that the effect of the unitary on the oscillator mode is a simple displacement. Finally, prepending U_1 (Eq. <ref>) to the string of unitaries guarantees that, following a qubit measurement, we initialize our qubit state with θ = π/2, independent of that measurement outcome. It also guarantees ϕ = π/2 or 3π/2 depending on the measurement outcome. The probability of measuring the qubit in the excited state after the entire sequence, conditioned on preparing it in e vs. g, is then:
P_e|g = cos^2(A - π/4)
P_e|e = sin^2(A - π/4).
Thus, with this sequence of unitaries, we are able to extract the phase of some unknown displacement (relative to some known displacement α) by simply measuring the qubit. While for the first run of the reservoir the qubit will start in the ground state (up to thermal noise), after performing a parity measurement the qubit state will depend on the previous measurement outcome. See Fig. <ref> for an experimental implementation of the above results. In principle, Eq. <ref> enables us to perform the identity operation on the input x, y points followed by a cos^2 kernel.
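The gate sequence U_1 through U_7 can also be checked numerically. The QuTiP sketch below builds the conditional displacements and qubit rotations and compares P_e against cos^2(A - π/4); the Fock cutoff and drive amplitudes are illustrative, and our rotation-sign conventions may differ from the experiment's, which could shift the phase offset.

import numpy as np
import qutip as qt

N = 40                                  # Fock cutoff (assumption)
g, e = qt.basis(2, 0), qt.basis(2, 1)
Ic = qt.qeye(N)

def rot(axis, theta):                   # qubit rotation, tensored with the cavity
    return qt.tensor(Ic, (-1j * theta / 2 * axis).expm())

def cnod(alpha):                        # D(a)|g><e| + D(-a)|e><g|
    return (qt.tensor(qt.displace(N, alpha), g * e.dag())
            + qt.tensor(qt.displace(N, -alpha), e * g.dag()))

def cond_disp(beta):                    # D(b)|g><g| + 1|e><e|  (input)
    return (qt.tensor(qt.displace(N, beta), g * g.dag())
            + qt.tensor(Ic, e * e.dag()))

alpha = 0.8
beta = 0.5 * np.exp(1j * np.pi / 3)
U = (rot(qt.sigmay(), np.pi / 2) * cnod(-alpha) * cond_disp(beta)      # U7 U6 U5
     * rot(qt.sigmax(), np.pi) * cond_disp(beta) * cnod(alpha)         # U4 U3 U2
     * rot(qt.sigmax(), np.pi / 2))                                    # U1

psi = U * qt.tensor(qt.basis(N, 0), g)
P_e = qt.expect(qt.tensor(Ic, e * e.dag()), psi)
A = np.real(1j * (alpha * np.conj(beta) - np.conj(alpha) * beta))
print(P_e, np.cos(A - np.pi / 4) ** 2)  # agreement expected up to sign conventions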
Without loss of generality, we take arg(α) = 0; then A = i(αβ^* - α^*β) ∝ Im(β), i.e., a single quadrature of β. Alternating between arg(α) = 0 and arg(α) = π/2 allows us to extract cos^2 of each quadrature of β with two runs of the reservoir. Whereas all gates besides the input (Eqs. <ref> and <ref>) are fast and therefore insensitive to the cross-Kerr interaction, the primary deviation from the gate description occurs for the input, which can be very long. This input displacement is conditioned on the qubit being in the ground state. Therefore, in the rotating frame of the qubit-oscillator system, the branch of the cavity state conditioned on the qubit being in the excited state will rotate at a frequency χ, which in general will break the geometric-phase construction for time-independent tasks. Therefore, we limit the exposure time of the reservoir to the input signal to be an integer multiple of 4π/χ, so that the cavity state conditioned on the qubit being in the excited state returns to the same point. The unitary described in Eqs. <ref>-<ref> is followed by a qubit measurement, then a parity measurement Π <cit.> with projectors P_±, where
Π = (-1)^a^†a
P_± = 1/2 (1 ± Π).
As mentioned above, the effect of the unitary on the oscillator state for time-independent signals is simply a displacement of the input data D(β), independent of the qubit measurement outcome. For the following discussion, we will ignore the qubit dynamics, since the qubit and the oscillator are disentangled at the end of the unitary. In effect, the state of the cavity can be described by a series of alternating displacements and parity measurements:
|cavity⟩ = … P_p_4 D(β) P_p_3 D(β) P_p_2 D(β) P_p_1 D(β) |0⟩,
where P_p_n is the projector of the nth parity measurement with outcomes p_n ∈ {+,-}. For k runs of the reservoir, we can reorder terms and add pairs of canceling displacements D(-β)D(β) to rewrite the above as
|cavity⟩ = (∏_n^k P_p_n^nβ) D(kβ) |0⟩.
Equation <ref> describes a series of projective measurements after preparing a displaced vacuum state. The projectors and their associated measurements are
P_±^α = D(α) P_± D(-α)
Π^α = D(α) Π D(-α).
The measurements Π^α describe parity measurements in a displaced frame at α. Incidentally, the expectation value of this operator is proportional to the Wigner function at α <cit.>. However, importantly, Eq. <ref> does not describe performing Wigner tomography of the state D(kβ)|0⟩ = |kβ⟩ at points given by β, 2β, 3β, …, as the effective measurements Π^α do not commute for different values of α. Instead, in general [Π^α, Π^γ] ≠ 0. In this light, our reservoir construction can be seen to leverage non-commuting measurements and quantum contextuality to generate conditional and correlated probabilities over measurement trajectories.
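The non-commutativity invoked here is easy to confirm numerically; a short sketch (with arbitrary displacement values of our choosing) is given below.

import numpy as np
import qutip as qt

N = 60
parity = (1j * np.pi * qt.num(N)).expm()       # (-1)^{a†a}

def displaced_parity(alpha):
    D = qt.displace(N, alpha)
    return D * parity * D.dag()                # Pi^alpha = D(alpha) Pi D(-alpha)

Pa = displaced_parity(0.5)
Pg = displaced_parity(0.5 + 0.5j)
comm = Pa * Pg - Pg * Pa
print(comm.norm())   # nonzero: displaced-parity measurements do not commute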
§.§ Reservoir description for slowly-varying time-dependent signals

For generic time-dependent signals, like those classified in Figs. <ref> and <ref> in the main text, the geometric unitary described by Eqs. <ref>-<ref> does not in general hold, as the symmetry between panels 3 and 4 in Fig. <ref> is broken. Additionally, the approximation that the input is a displacement conditioned on the qubit being in the ground state (Eqs. <ref> and <ref>) will not hold for high-bandwidth signals, like those in Fig. <ref> in the main text. For high-bandwidth signals, the input will also have some contribution displacing the cavity state conditioned on the qubit being in the excited state, which can lead to complex dynamics in the cavity state. While for generic signals this can be hard to describe, here we provide a treatment of our reservoir construction for slowly-varying, time-dependent signals, like those in Fig. <ref> of the main text. We can follow most of the derivation for time-independent signals in Appendix <ref> to describe the dynamics of the QRC for the task of classifying radio-frequency modulation schemes. Along with the assumptions in the previous section, we make the slowly-varying-input approximation, such that the displacement on the oscillator of the reservoir is still effectively conditioned on the ground state of the qubit. The displacement on the cavity depends on the value of the symbol encoded for the given modulation scheme. Since, in general, the symbol is different before and after the qubit π pulse, the direction of the displacement in the cavity will be different. Given the timescales of the input signal involved, this essentially corresponds to a displacement on the cavity state conditioned on the ground state of the qubit. When the two displacements are different in magnitude and direction, the qubit remains entangled with the cavity at the end of the reservoir unitary. The state of the system just before the measurements is (step (5) of Fig. <ref>):
|ψ⟩ = 1/√(2) (e^-iA_i D(β_i) |g, cavity⟩ + e^iA_j D(β_j) |e, cavity⟩),
where β_i is the displacement before the qubit flip and β_j is the displacement after. A_i = αβ_i is the phase acquired after two non-orthogonal displacements. When β_i = β_j, we recover the dynamics for time-independent signals. It is straightforward to show that in this case the qubit will be disentangled from the cavity and that the area A_i, corresponding to the geometric phase from the area enclosed in phase space, will be present as a relative phase difference between the ground and excited states. After a Y_π/2 gate, we have the following state in our system:
|ψ⟩ = 1/2 [e^iA_i D(β_i) + e^-iA_j D(β_j)] ⊗ |g, cavity⟩ + 1/2 [e^iA_i D(β_i) - e^-iA_j D(β_j)] ⊗ |e, cavity⟩.
One can think of this as a cat state in the cavity, with a parity determined by the qubit state. This is schematically shown in (6) and (7) of Fig. <ref>. In the limit of very different displacements, the probability of the qubit measurement is the same for both ground and excited states. The goal of this task can be thought of as discriminating probability distribution functions over the (I, Q) plane. Fig. <ref>a represents the so-called "constellation" diagram of the modulation schemes considered in this work. Each scheme can take discrete values in (I, Q) space with equal probability (we construct the dataset of radio signals by encoding random binary strings). Our lack of knowledge of the exact displacement on the cavity can be expressed mathematically as a density matrix. This is most apparent in the state of the cavity after the initial qubit measurement,
ρ_cavity' = ∑_β_i ∈ P p_i D^†(β_i) ρ_cavity D(β_i) + ∑_β_i, β_j ∈ P p_ij e^i(A_i - A_j) D^†(β_i) ρ_cavity D(β_j),
where ρ_cavity' is the density-matrix representation of the cavity right after the qubit measurement and ρ_cavity describes the initial density matrix before the application of the input. The set P describes the distribution of possible displacements which can be received from the input, p_i is the probability of receiving the symbol corresponding to a displacement β_i, and p_ij is the conditional probability of displacement β_j given β_i.
For the task considered in this work, these distributions are uniform, with no contributions from conditional probabilities. However, this description of the reservoir motivates the potential for the QRC to distinguish signals with complex correlations among the symbols of the encoded message.

§ QUANTUM RESERVOIR CHARACTERIZATION

§.§ CNOD

Here, we describe the calibration of the CNOD unitary <cit.>, one of the components of our reservoir unitary (Fig. <ref>). The CNOD protocol implements the following unitary:
CNOD(α) = D(α)|g⟩⟨e| + D(-α)|e⟩⟨g|.
The protocol is implemented with two `anti-symmetric pulses' sandwiching a qubit π-pulse. In the frequency domain, the pulse is composed of two Gaussian envelopes offset such that there is a zero-crossing at the qubit ground-state frequency and such that the spectrum is anti-symmetric around this point (see Ref. <cit.>). The anti-symmetric pulse is a conditional displacement, conditioned on the qubit being in the excited state. The motivation for using CNOD instead of a single-tone displacement on resonance with the Stark-shifted qubit frequency is that it enables conditional displacements on timescales much shorter than 2π/χ. Figure <ref>a displays the protocol for characterizing the anti-symmetric pulse. First, the qubit is unconditionally brought to the equator of the Bloch sphere with a wide-band X_π/2 pulse. After this, the anti-symmetric pulse acts on the cavity, followed by a qubit measurement, collapsing the cavity state to either D(α)|0⟩ or |0⟩. After collapsing the state, we perform number-splitting spectroscopy on the cavity. This is performed with a conditional Y_π, conditioned on the kth cavity Fock state <cit.>, followed by a second qubit measurement. By post-selecting on the first qubit measurement outcome, we can characterize the cavity state for each branch. Figures <ref>b and c show the number-splitting spectroscopy for the cavity state conditioned on the qubit being in the ground vs. excited state, as a function of pulse amplitude. These curves are fitted with a single scaling parameter that defines the relationship between pulse-amplitude voltage and the amount of displacement α.

§.§ Reservoir unitary characterization

With our rotation gates and CNODs calibrated, we describe in this section the calibration of the signal drives toward the implementation of Eqs. <ref>-<ref>. We begin by calibrating the duration of time our reservoir is exposed to the input signal. As discussed in Appendix <ref>, calibrating this delay is crucial for a faithful implementation of the geometric-phase detection unitary introduced in this work. While it may seem that this restriction on the signal duration is contrived and unrealistic in a real-world setting where the signal is unknown, we argue that we can get around this by adding a commercial fast switch. Figure <ref>a schematically describes the experimental protocol for calibrating the delay between the two CNOD pulses. Here, we effectively try to undo a double conditional displacement via a second double conditional displacement. Due to the dispersive shift, after the first conditional displacement the state of the cavity conditioned on the excited state of the qubit will start rotating with respect to the state of the cavity conditioned on the ground state. After a period of 2π/χ, it will return to the same position as at the start.
Undoing the displacement at this point in time will send the cavity state to vacuum. Figure <ref>b shows the Fock distribution of the cavity as a function of the waiting time, and Fig. <ref>c shows the cavity-state overlap with the vacuum state as a function of the waiting time. Next, we implement the full unitary given by Eqs. <ref>-<ref>, where the section corresponding to the input-data displacement (Eqs. <ref> and <ref>) is given the duration found in the results above. For this calibration, we implement the full unitary given by the diagram in Fig. <ref>a, varying the angle of the input displacement and examining the dependence. Figure <ref>a shows a schematic overview of the calibration procedure. The geometric-phase unitary is parameterized by a long displacement, whose angle we sweep. After the unitary, we perform a qubit measurement followed by a parity measurement. This calibration experiment is essentially identical to the time-independent reservoir computing experiments in terms of the control protocol. Here, instead of sending data from different distributions for the system to classify, we only vary the phase and amplitude of some input displacement to verify that we obtain the expected phase dependence. Figure <ref>b shows the distribution of measurement outcomes from measuring the qubit and the oscillator parity after the unitary is applied with α = 1 and β = 0.25. As the angle of the input is swept, the probability of finding the qubit in the ground state shifts toward finding it in the excited state. This is more evident in Fig. <ref>c, where we plot the probability P_e of measuring the qubit in the excited state as a function of the phase of β for different amplitudes of β. In comparison, we find good qualitative agreement with the expected result P_e = cos^2(2|α||β| cos(δ) + π/4), where δ = arg(α) - arg(β) (see Eq. <ref>), though we find an extra reduction in the dynamic range of P_e for increasing β due to qubit overheating. For our quantum reservoir tasks, we choose α to be quite small, near 0.2. The effect of this is a severe reduction in the dynamic range of P_e, but one that is easily distinguishable at 1000 shots. For all of our tasks, this was the minimum number of shots needed to get 100%. Keeping |α| small allows for greater sensitivity in |β| without worrying about qubit overheating.

§.§ Qubit & parity measurements

The qubit and parity measurements performed in this work use the standard pulse schemes of many previous works, with one change. The typical procedure for measuring the parity of a cavity state is similar to a Ramsey experiment (and perhaps closer still to a `qubit-revival' experiment <cit.>), and importantly requires knowledge of the state of the qubit before the measurement is performed. In a quantum reservoir setting where measurement trajectories can be unknown, measuring the parity of the cavity is not straightforward without post-selection or feedback. Here, since we perform a qubit measurement just before the parity measurement, we apply simple feedback that conditions the parity unitary on the measurement outcome of the preceding qubit measurement. The condition is such that the parity measurement outcome is now independent of the preceding measurement outcome.
This reduces the order of correlations required to gain the same information: obtaining the parity of the cavity only requires information about the parity measurement, whereas previously second-order correlations between the qubit and parity measurements were required. A further refinement to reduce trivial correlations in the measurement history would reset the qubit after the oscillator-parity measurement; however, due to limitations in the FPGA software, this was not implemented.

§.§ Tuning T_2 via resonator-induced dephasing

Here we describe the experiment to reduce the qubit coherence time by pumping the readout resonator with photons during our reservoir experiments (see Fig. <ref>d). The calibration of this experiment involves performing a standard Ramsey T_2 experiment, modified with a pump on the readout resonator (Fig. <ref>a). Once populated, the resonator photons induce a dispersive shift, which sends the qubit to the center of the Bloch sphere once the readout resonator is traced out. In principle, this interaction is coherent and the qubit should see a revival; however, due to the leaky nature of the readout resonator by design, a coherent revival is not observed. As remarked at the end of Appendix <ref>, this experiment required an auxiliary AWG line. Figure <ref> denotes this as the `Readout Auxiliary' line. Figure <ref>b shows the results of the Ramsey calibration with the readout pump on, for varying pump powers. We see a steady decrease in the qubit coherence time as the pump amplitude is increased, as expected. The curves are fit to the equation
P_e = cos(2πδ t) e^-t/T_2,
where δ is an intentional detuning. Here, a Gaussian pulse was used as the readout pump. We expect that, due to the construction of the reservoir, a flattop pulse may be more detrimental to the classification performance, since the Gaussian pulse has little amplitude during the CNOD unitaries shown in Fig. <ref>a. Finally, we note that the maximum T_2 shown in Fig. <ref> differs from the value quoted in Table <ref>. After taking preliminary calibration data corresponding to that in Fig. <ref>, the experiments in Fig. <ref>c were performed, after which the qubit T_2 was suddenly lowered. However, all experiments presented in this manuscript, with the exception of Fig. <ref>, were performed with the qubit T_2 matching that of Table <ref>. Given the conclusion that the qubit T_2 did not impact classification accuracies until it approached the time between measurements, we decided to include the higher-quality data presented in Fig. <ref>, rather than the preliminary data used to calibrate the results in Fig. <ref>c.
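A minimal sketch of the Ramsey fitting procedure above, applied to synthetic data, is given below; the amplitude and offset parameters are our additions for numerical robustness and are not part of the equation quoted above.

import numpy as np
from scipy.optimize import curve_fit

def ramsey(t, T2, delta, A, B):
    # Decaying oscillation of the equation above; amplitude A and offset B
    # are assumptions added for a realistic fit.
    return A * np.cos(2 * np.pi * delta * t) * np.exp(-t / T2) + B

t = np.linspace(0.0, 20.0, 101)               # us
rng = np.random.default_rng(3)
data = ramsey(t, 6.0, 0.25, 0.5, 0.5) + 0.02 * rng.normal(size=t.size)

popt, pcov = curve_fit(ramsey, t, data, p0=[5.0, 0.2, 0.5, 0.5])
print("fitted T2 = %.2f us" % popt[0])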
§ MACHINE LEARNING WITH THE QUANTUM RESERVOIR §.§ Output feature encoding In this work, we use measurement correlations as the output feature vectors from which the trained linear layer of our reservoir performs the classification. In this section, we provide details on how these were constructed from measurement results, as well as motivations and comparisons with other output encodings. As described in the main text, measurements of our reservoir involve two measurements following every data input: a qubit measurement and a parity measurement. The qubit measurement, which follows just after the input unitary, either extracts information about the input displacement (if the signal is time-independent), or performs some nontrivial back-action on the oscillator state (see Fig. <ref>). The parity measurement, which follows the qubit measurement, will simply measure the parity of the cavity state post-qubit measurement, and collapse the oscillator state to either even or odd Fock states. It is worth pointing out that measurements of the parity are done by applying an entangling unitary, starting from a known qubit state, and then performing a regular qubit measurement (see Appendix <ref> for details). In this manuscript, qubit measurements are performed using standard dispersive readout, which we review here, since the process involves a number of nonlinear steps (for a thorough review, see Ref. <cit.>). Each measurement outcome is the result of integrating a response signal from the readout resonator, and is defined by a single point on the I-Q plane. For sufficiently strong coupling between the readout resonator and the qubit compared with the resonator linewidth, the set of all possible integrated IQ points will form two (or more) localized and well-separated blobs, indicating projective measurement with single-shot fidelity. These two (or more) blobs correspond to different states of the transmon, and single-shot fidelity refers to the ability to discern the state of the qubit using only one readout pulse. With knowledge of the location of these blobs, and which state they correspond to, we threshold the measurement result to either `0' or `1', indicating the qubit ground state or excited state respectively. From a string of binary measurement outcomes, or bitstring, we form our feature vectors by first calculating the p-th central moment μ_p, defined as (μ_p)_ijkl… = 1/N_shots∑_n^N_shots (x_ni - ⟨ x_i⟩ ) (x_nj - ⟨ x_j⟩ ) (x_nk - ⟨ x_k⟩ )(x_nl - ⟨ x_l⟩ ) …, where the number of indices of μ_p is equal to p. Here x_ni is the nth repeated measurement result of observable x labeled by i. In our setting, i labels the i-th measurement in a sequence of correlated measurements before the system is reset. The expectation value ⟨ x_i ⟩ is taken over the N_shots shots, counting the number of system resets and repetitions. Faithful estimates of these moments require enough shots, typically on the order of 1000 for the results presented in this manuscript. The central moments of Eq. <ref> are used in the construction of the output feature vector for the linear layer to perform the classification task. Specifically, the feature vector is generated by appending successively more and more central moments. We denote these appended feature vectors as μ⃗_≤ p for feature vectors containing up to p central moments, e.g. μ⃗_≤ 2 = [μ⃗_1,μ_2] is a feature vector constructed by appending the covariance to the mean. The first-order moment here is denoted as a vector because the mean is taken over repetitions of different measurements, whereas the covariance is a matrix and thus is not denoted as a vector. Additionally, we take only the upper triangle of the covariance, since that contains all the independent degrees of freedom of the symmetric covariance matrix. Figure <ref> contains classification results on the spiral dataset (Fig. <ref>) as a function of the number of shots for the feature vectors μ⃗_1, μ⃗_≤ 2 and μ⃗_≤ 3. We see that our quantum reservoir has non-trivial third-order correlations and that the reservoir leverages these correlations to boost classification accuracy. The covariance matrix averaged over the entire spiral dataset is plotted in Fig. <ref>, and the third-order correlations are plotted in Fig. <ref> as a set of 2D matrices.
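As a concrete illustration of this construction, the sketch below (assuming NumPy; the function and variable names are ours) computes the appended central-moment feature vector from an array of thresholded bitstrings, keeping only the independent components with ordered indices; the optional locality cutoff anticipates the Hamming-distance truncation discussed next:

```python
import numpy as np
from itertools import combinations_with_replacement

def central_moment_features(x, max_order=3, d_h=None):
    """x: (n_shots, M) array of thresholded outcomes in {0, 1}."""
    n_shots, M = x.shape
    dx = x - x.mean(axis=0)              # centred outcomes
    feats = [x.mean(axis=0)]             # first moment: the mean vector
    for p in range(2, max_order + 1):
        vals = []
        # ordered index tuples i <= j <= ... give the independent components
        for idx in combinations_with_replacement(range(M), p):
            if d_h is not None and max(idx) - min(idx) > d_h:
                continue                 # drop non-local correlations
            vals.append(np.prod([dx[:, i] for i in idx], axis=0).mean())
        feats.append(np.asarray(vals))
    return np.concatenate(feats)
```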
In the third-order correlations in particular, we can begin to pick out by eye the differences between the two classes. In general, for arbitrary moments, the number of independent components is the binomial coefficient (M+p-1 choose p), where p is the order of the moment, and M is the number of measurements. This construction generally allows us to build feature vectors that are smaller than the sampled probability distribution over all possible measurement trajectories, which is 2^M-dimensional. However, as can be seen in Fig. <ref>, there is still redundant information even after taking only the symmetric part: specifically, the information tends to be very local, and measurements far apart tend not to be correlated. This has the physical interpretation that while measurements are indeed correlated, even possessing higher-order correlations, this correlation tends to be local due to the finite memory of the system. This motivates us to further restrict our feature vector to only capture the essential local correlations. Figure <ref> compares the classification performance of feature vectors generated with up to third-order moments, where we truncate the locality of the correlations. That is, the elements of the third-order central moment (μ_3)_ijk are set to zero if | i - j | > d_H or | i - k| > d_H, for some integer d_H we interpret as a Hamming distance. We note that including third-order correlations between measurements that are up to three `sites' away nearly reproduces the classification accuracy of including all third-order central moments. Additionally, we compare feature vectors constructed from truncated moments up to third order with using the full sampled distribution as the feature vector, finding similar performance. These last two statements were found to be true for all tasks presented in this paper. We therefore used the truncated third-order correlations as the universal feature vector for all tasks presented. §.§ Training the linear layer The only component of the reservoir that was trained to fit the dataset was the linear layer applied to the features that the physical reservoir produced. The linear layer was a C× R matrix W_train and a C-dimensional vector v_train applied to the R-dimensional reservoir feature x to get y=W_train x + v_train, where the largest of the C elements of y corresponded to the predicted class of the data point (C is the number of classes). To train the linear layer, we chose between two different approaches: the pseudo-inverse method and back-propagation through a softmax function on the output. First, we will describe the pseudo-inverse method. Let X be an N× (R+1) matrix consisting of R-dimensional reservoir features generated for N training points, with a column of 1's appended (this is to compute both W_train and v_train at once). Let Y be an N× C matrix whose rows are C-dimensional vectors that serve as labels for the training points, such that Y_i,j=1 if j corresponds to the class of the i^th training point and zero otherwise. For an ϵ > 0, we construct W^'_train (W_train appended with v_train) as: W^'_train = (X^TX+ϵ I )^-1X^T Y In our case, the value of ϵ was swept to maximize the accuracy of the classification. In the limit of ϵ→ 0, the pseudo-inverse solution of Eq. <ref> minimizes the mean squared error between XW^'_train and Y, and so has been a popular choice for training the linear layer at the output of reservoirs <cit.>.
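A minimal sketch of this training step, assuming NumPy (here eps plays the role of ϵ in Eq. <ref> and would be swept in practice):

```python
import numpy as np

def fit_linear_readout(features, labels, eps=1e-3):
    # features: (N, R) reservoir features; labels: (N,) integer class indices
    N, R = features.shape
    X = np.hstack([features, np.ones((N, 1))])   # append the column of 1's
    Y = np.eye(labels.max() + 1)[labels]         # one-hot (N, C) label matrix
    W = np.linalg.solve(X.T @ X + eps * np.eye(R + 1), X.T @ Y)
    return W                                     # (R+1, C); last row acts as v_train

def predict(W, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return (X @ W).argmax(axis=1)                # class of the largest output element
```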
However, our goal was to classify the input signals based on the largest element of the final output vector. Consequently, the linear layer that resulted in the lowest mean squared error with our labels was not always the one that gave us the best accuracy. For this reason, we also used a second training method for our linear layer. This approach used a softmax activation and back-propagation, implemented with the automatic differentiation package from PyTorch <cit.>. In this approach, the prediction vector y from Eq. <ref> is passed through the “Softmax" activation function: (y_prediction)_i = exp(y_i)/∑_j=1^C exp(y_j) We then computed the mean squared error between the resulting y_prediction and the label for the training point that produced the underlying reservoir feature, and used back-propagation to compute the gradient for our linear layer. The linear layer was then updated using the ADAM optimizer <cit.> with the default settings of β_1=0.9, β_2=0.999 and a learning rate of 0.01. For our reservoirs, we tried both methods of training the linear layer and used whichever yielded the best accuracies. Empirically, we found that while pseudo-inverse training was better in some cases, training the linear layer with back-propagation often yielded quite large accuracy advantages over the pseudo-inverse. § SUPPLEMENTARY INFORMATION MACHINE LEARNING TASKS §.§ Classification of Radio-Frequency signals In this section, we discuss the algorithm for generating the dataset for the classification of digital modulation schemes on radio signals. A digital modulation scheme encodes sequences of binary values into the amplitude and phase of a radio signal for a fixed duration. The number of binary values encoded depends on the modulation scheme. For example, for BPSK (binary phase-shift keying), each symbol (a change in a property of the signal) encodes one bit of information. For 32QAM (quadrature amplitude modulation), there are 32 possible values, which allows each symbol to contain 5 bits of information. For this task, we keep the symbol rate fixed across all the tasks. Moreover, the pulses generated by the arbitrary waveform generator (AWG) all occur at the baseband frequency. This signal is then upconverted to the frequency of the cavity before being sent to the device. To generate the set of possible sequences, we randomly select each symbol with equal probability. This corresponds to each possible encoded binary digit string being equally likely. Due to memory constraints on the AWG, we cannot output a continuous encoded signal for long durations, corresponding to the regime of large samples of the reservoir. We circumvent this constraint by realizing that, for this task, there are no correlations in the encoded binary digit sequence (since each symbol is equally likely). Therefore, for example, the probability of a long binary digit sequence can be correctly emulated by sampling multiple short binary digit sequences and concatenating them together. We can simply achieve this by generating a signal with eight symbols, which is the number of symbols that enter our QRC before its state is completely reset. §.§ Classification of noisy signals To generate the dataset describing the task of classifying noisy signals using the QRC (see Fig. <ref>), we start by emulating white noise.
At each time step of the sampling rate of the AWG, we choose a value for the in-phase and quadrature signals uniformly within the unit interval (up to an overall normalization). While this is limited by the sampling rate of the AWG (around 2× 10^9 samples per second), this rate is much faster than any relevant time scale of the experiment. Therefore the approximation of broadband white noise is appropriate to describe the effect of the signal on the system. We then apply “kernels" as a convolution in the time domain to each new seed of the generated white-noise signal. This can also be thought of as a bandpass filtering function in the frequency domain. The classification task is then to identify the kernel. Each kernel is defined by a time-domain function. The only hyper-parameter describing each kernel is the overall scaling value. In this work, we set the DC component of each kernel in the frequency domain to be the same for all classes (set to unit value without loss of generality). In the time domain, this corresponds to scaling the amplitude such that the area enclosed by the filter function in time is the same for all functions. We do this to make sure that a direct integration of the signal over a time window much longer than the correlation length introduced by the kernel cannot distinguish the signals from each other (see Fig. <ref>). The above normalization ensures that the random variable associated with this integrated value is the same for all distributions. Therefore, any ability of the reservoir to classify the signals comes intrinsically from its computational capacity to distinguish short-time correlations (in this work we choose correlation time scales of 50 ns and 600 ns, with Gaussian, Lorentzian, and inverse kernel functions, generating a total of six classes). § SIMULATION OF THE QUANTUM RESERVOIR §.§ Introduction Classical simulations of the QRC can provide insight into the expected computational capacity in experiment. For our work, classical simulations of the dynamics of the reservoir were primarily performed with the aid of QuTiP <cit.>. The algorithm to estimate the classification accuracy for a given task then follows the same technique used in experiment, with training and testing datasets built from the measurement outcomes of the simulation. We implement the Hamiltonian in Eq. <ref> by approximating the transmon as a qubit and introducing a finite-dimensional Fock truncation to the cavity subspace. It is important to ensure that the Fock truncation does not introduce any spurious effects, for it can be a source of non-physical non-linearities in the system. For example, a linear cavity, treated as a harmonic oscillator, only performs a linear transformation on an incoming analog radio-frequency signal. However, if in simulation the support of the state of the cavity exceeds the Fock truncation, numerical errors introduce non-Gaussian states in the cavity mode. Such effects will depend non-linearly on the input, and hence can effectively act as a “good" (but of course unphysical) reservoir! To ensure this does not happen in simulation, at every step of the unitary evolution we monitor the probability of the wavefunction on the largest Fock state in the simulation. If this value goes above 1% during the simulation, a warning is raised, and the results of the simulations are discarded.
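A minimal sketch of this check, assuming QuTiP with a qubit ⊗ cavity tensor ordering (the truncation size and function names here are ours):

```python
import warnings
from qutip import basis, expect, qeye, tensor

N_FOCK = 40                                   # cavity Fock-space truncation
top = basis(N_FOCK, N_FOCK - 1)
P_top = tensor(qeye(2), top * top.dag())      # projector on the highest Fock state

def check_truncation(states, tol=0.01):
    # states: kets returned by mesolve/sesolve over one evolution segment
    p_max = max(expect(P_top, psi) for psi in states)
    if p_max > tol:
        warnings.warn(f"Fock truncation exceeded (p_top = {p_max:.3g}); "
                      "discarding this run")
    return p_max
```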
To make the simulations efficient, we make certain assumptions about the quantum system. Firstly, we treat the reservoir controls of qubit rotations and conditional displacements with a “gate"-based unitary. However, to take into account the analog, continuous dynamical evolution implemented by the cross-Kerr interaction term in the Hamiltonian, the interval of the input into the system is implemented with the full time-dependent Hamiltonian evolution (using QuTiP's “mesolve" functions). Finally, another approximation we make (in favor of simulation speed) is ignoring decoherence effects. To ensure this approximation is valid, we performed simulations with the stochastic wavefunction approach with photon loss and qubit dephasing rates measured in experiments <cit.>. We obtain differences in expected classification accuracies within error bars (which are obtained from different datasets from repeated simulations). This gives us confidence that decoherence plays a minimal role in the computational capacity of the reservoir. The results of the simulations with the third central moment are plotted in Fig. <ref>. Interestingly, the performance as a function of the number of samples agrees with experiment to within the same order of magnitude. This gives a good estimate of the experimental time required to produce classification-accuracy-versus-shots curves in experiments. For all three tasks, the reservoir approaches 100% with sufficient samples or integration time of the input. §.§ The advantage of continuous-time continuous-variable QRCs over discrete-time qubit-based QRCs In this section, we benchmark the performance of our continuous-time, continuous-variable QRC in comparison with other hardware implementations of reservoirs. To highlight the benefit of our QRC in processing time-varying input signals, we compare the simulation of our reservoir with that of a recent QRC scheme involving repeated measurements on a multi-qubit superconducting circuit quantum system <cit.>. For this comparison, we simulate the expected performance of both systems on the task of classifying different noise signals with the classes described in Fig. <ref>. While our reservoir can naturally interface with analog signals, this is not the case with the protocol introduced in <cit.>. For this simulation, the signal is sampled at discrete times and input to the system as scalar parameters (one for the in-phase value and one for the quadrature value). To highlight the advantage of our QRC, we slightly modify the task introduced in Fig. <ref>. Here, we normalize the six filter functions such that the integral of the filter function in the frequency domain is kept constant. We do this such that the standard deviation associated with the distribution of the sampled signal is the same across all signals. The only information distinguishing the signals is in the correlations between samples close in time. To elucidate this reasoning, we simulate the performance of the two reservoirs as a function of the time duration between two samples of the signal (in the case of the discrete qubit-based reservoir) and of the integration window (for our analog reservoir). Such a finite duration can arise from the finite pulse durations of reservoir protocols, qubit-measurement times, and the finite latency introduced by the classical FPGA processor. For example, for our experiment, this time is around 4 μs, mostly arising from the measurement of the qubit and the parity of the cavity.
In experiment (Fig. <ref>), we had generated and timed the input waveforms such that the delay between inputs is essentially 0 μs. For a typical IBM quantum device with mid-circuit measurement, the protocol used in Ref. <cit.>, the finite latency can be estimated to be around 8 μs <cit.>. The protocol for the discrete-time quantum reservoir is designed to act only on real-valued input signals. However, for a continuous signal in the rotating frame, we have both the in-phase and quadrature values. In the experiment, these values correspond to displacements on the oscillator in orthogonal directions. To extend the scheme presented in <cit.>, we make the following minimal change: we interleave sample points of the in-phase and quadrature values. We could have chosen these points with a delay of τ between each. However, this might have the effect of introducing twice the delay compared to the continuous-time reservoir. Therefore, we choose the relaxed constraint on the input such that both the in-phase and quadrature values are taken at the same point, with a delay only between two different in-phase and quadrature sample points. §.§ Comparison to other reservoirs A cavity coupled to a qubit is a hardware-efficient quantum system to perform reservoir computing on analog signals. In this section, we motivate this by simulating the performance of other natural choices of quantum reservoirs: a single qubit and a single cavity. The protocols for these systems are inspired by what one can naturally perform in experiment. To make a reservoir with a cavity, we couple the input into the cavity (as is the case for the experimental design). To read out the cavity, we perform a transmission-style homodyne measurement, which infers the mean-field value of the cavity. This is a continuous form of measurement, where the output feature is a time-dependent radio-frequency signal at the frequency of the cavity mode. Since the cavity is always in a coherent state, the output time trace is linearly dependent on the incoming signal. For a fair comparison, we only use a handful of values from the time trace (as many as the number of measurements in the experiment). While this might seem restrictive, we process these via the same method as for the experimental reservoir, by computing the functional definition of the central moments. This does not necessarily make sense for this protocol, since the outputs do not correspond to samples from a discrete probability distribution, but it can nevertheless introduce non-linearities in the representation of the feature vector. These non-linearities can improve the performance of the reservoir beyond a linear layer. This is observed for the case of time-independent spiral classification, where the cavity reservoir performs better than random. This performance is solely due to the “post-processing" of the output of the reservoir we adopt for our experiment. However, for time-dependent tasks, the performance is hardly better than random. Another natural candidate is a single-qubit reservoir. For this case, we directly interface the signal with the qubit. A qubit is able to naturally represent non-linear functions of the input, which can be intuitively seen by visualizing the action of a qubit rotation on the Bloch sphere. For a fair comparison, we choose the same qubit reservoir controls as in experiment, which involve qubit pulses before, during and after the continuous input.
The output is a string of binary outcomes of qubit measurements, which can be done experimentally with the use of a readout resonator. Each run of the qubit reservoir lasts twice as long as the experimental QRC to obtain the same feature vector size. This is then processed in the same way as for the coupled cavity-qubit reservoir, before applying a trained linear layer. Interestingly, the qubit fails to perform better than random for the spiral task. On the other hand, it is able to reach near 100% accuracy for the time-dependent signal classification tasks. The ability of even a single qubit to successfully perform a many-class classification task illustrates the remarkable processing capabilities of reservoirs. However, the total input signal required to achieve the same accuracy can be more than an order of magnitude longer than for the experimental QRC. The ability of a cavity coupled to a qubit to perform significantly better than either of its components provides a clear picture of the important role entanglement can play. §.§ Multi-qubit reservoirs Quantum reservoir computing is a promising paradigm in the NISQ era. It is therefore interesting to consider the potential benefits in performance with larger devices, which are within reach of today's experimental capabilities. As a natural extension of our quantum reservoir, we consider a scenario of one continuous-variable cavity mode dispersively coupled to multiple qubits. For simplicity, we assume the dispersive coupling strength of each qubit to the cavity is the same. To motivate the capacity of such a reservoir, we simulate the system for up to four qubits to estimate the classification accuracy for the task of identifying correlated noise signals. The unitary protocol is illustrated in Fig. <ref>(a). The protocol begins with a π/2 pulse on each qubit, which brings each qubit onto the equator of the Bloch sphere. To entangle the qubits with the cavity, we simulate the action of a generalized qubit-conditioned cavity displacement. This involves a displacement on the cavity whose value is different for each of the 2^N N-qubit basis states. An example of the action of this operator is depicted in Fig. <ref>(b) for the case of two qubits. Here there are four possible qubit states, and each is associated with a displacement value at one of the four corners of a square. The displacement values were chosen somewhat arbitrarily, but serve to highlight a situation of efficient multi-component entanglement. For the case of two qubits, the real and imaginary components of the displacement were either ± 0.5. For the case of three qubits, there are eight possible states in total. The chosen displacements form a three-by-three grid, ranging between ± 1, excluding the center of the grid (which sits at the origin). For the case of four qubits, a four-by-four grid uniformly distributed between ± 1.5 covers all sixteen possibilities. The assignment of displacements to qubit states was also made somewhat arbitrarily (the motivation being that, even without careful design choices, a reservoir can successfully implement machine learning!). For these simulations, each position of the grid is associated with a decimal value, increasing sequentially from left to right, starting from the top left and progressing towards the bottom right (starting with zero). The associated qubit state that each displacement is conditioned on is the binary-string representation of this decimal value, with a sequence length equal to the number of qubits.
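One possible realization of these grids is sketched below (assuming NumPy; since the corner-to-bitstring assignment was arbitrary in our simulations, the convention fixed here is purely illustrative):

```python
import numpy as np

def grid_displacements(n_qubits, extent):
    # k x k grid covering [-extent, +extent] in both quadratures, enumerated
    # left to right from the top-left corner (decimal index 0, 1, 2, ...)
    k = int(np.ceil(np.sqrt(2 ** n_qubits)))
    re = np.linspace(-extent, extent, k)
    im = np.linspace(extent, -extent, k)
    grid = [complex(x, y) for y in im for x in re]
    if n_qubits == 3:
        grid.remove(0j)                  # 3x3 case: exclude the central point
    return grid[: 2 ** n_qubits]

def displacement_for(bitstring, table):
    # displacement conditioned on the qubit basis state |bitstring>
    return table[int(bitstring, 2)]

# e.g. three qubits: table = grid_displacements(3, 1.0); displacement_for('101', table)
```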
After the multi-qubit entangling conditional displacement gate, the cavity is subject to the input. The dynamics of the system are influenced by the dispersive cavity-qubit interactions, where the interaction strength is the same between the cavity and all qubits and set to that of the experiment. As in the experimental QRC, each qubit is flipped with a π pulse in the middle of the input. The protocol ends with the same conditional displacement, followed by a π/2 pulse. The output of the reservoir is the measurement of each qubit, along with the parity measurement of the cavity. This protocol is repeated four times, to match the experimental protocol as closely as possible. Figure <ref>(c) shows the classification accuracy as a function of the total duration of input signal received for reservoirs with different numbers of qubits, for the task of noise classification. While they all achieve essentially 100% accuracy, the total time required to achieve this accuracy drops significantly with an increasing number of qubits. Other than for the single-qubit reservoir (the experimental protocol), the performance of the reservoirs is similar in both the low and high signal-duration regimes, differing only in the intermediate regime. The reason for the different behavior in the case of the cavity coupled to a single qubit is a slight change in the reservoir protocol. To accurately account for the experimental protocol, the state of the qubit is determined by the outcome of the parity measurement of the cavity. This was not implemented in the simulations for multiple qubits. This ends up improving the performance of this reservoir for this task in the low signal-duration regime. However, in the higher signal-duration regime, increasing the number of qubits increases the accuracy. The classification error as a function of the number of qubits in the reservoir is plotted in Fig. <ref>(d), including the case of just a cavity (zero qubits in the reservoir), for a selected number of total shots of the entire reservoir. Very crudely, the classification error seems to decrease exponentially with every additional qubit in the reservoir. § THEORETICAL ANALYSIS OF THE EXPRESSIVITY OF OUR QRC FOR TIME-INDEPENDENT SIGNALS The ability of the QRC to perform better than an optimal linear layer on the input lies in the reservoir's ability to express many non-linear functions of the input, i.e., its expressivity. Here, we quantitatively characterize the class of functions which can be represented by the oscillator component of the QRC for a time-independent input. In this regime, the input can be represented by two variables: the values of the in-phase and quadrature components. The output feature vector from the QRC is then a function of these two variables. In particular, the qubit measurement extracts features of the form p_α(β) = cos(2 |α||β|cos(ϕ_α - ϕ_β) + π/4)^2 (see Eq. <ref>), where we have written α = |α| e^i ϕ_α, β = |β| e^i ϕ_β, and set |α| = 1/2. Choosing different values of ϕ_α gives rise to different output features of the QRC. In this experiment, we pick ϕ_α∈{0, π/2}, but in principle, one can extend the feature vector with more choices of ϕ_α. For example, one can choose ϕ_α∈{0, ω, 2 ω, …, (r-1) ω}, where ω = 2π/r. The final output after the linear layer is an arbitrary linear combination of all the p_α(β) functions. Intuitively, the larger r, the more expressive the function space spanned by these features.
Furthermore, the higher-order central moments allow the output feature vector to represent powers of this probability, p_α(β)^n, for moments up to the nth order. We have shown that the qubit measurements extract the phase information of the input complex number β. Below we will focus on the oscillator parity measurement, which is sensitive to the magnitude of β. Recall that the post-measurement (unnormalized) state of the cavity can be described by a sequence of alternating displacements and parity measurements (Eq. <ref>): |Ψ_x⃗(β) ⟩ = P_x_M D(β) ⋯ P_x_2 D(β) P_x_1 D(β) | 0 ⟩, where P_x_i is the projector of the i-th parity measurement with outcome x_i ∈{0,1}, with `0' standing for `even' and `1' for `odd'. That is, P_x_i = [I + (-1)^x_iΠ]/2, where Π = (-1)^a^† a. The corresponding probability of obtaining x⃗ = (x_1, x_2, …, x_M) as the sequence of measurement results given the input β is [ x⃗ | β] = ⟨Ψ_x⃗(β) | Ψ_x⃗(β) ⟩. To obtain a simplified expression for [ x⃗ | β], we will make use of the following formula: P_x D(β) P_y = D(β) + (-1)^x ⊕ y D(-β)/2 P_y, ∀ x ∈{0,1}, ∀ y ∈{0,1}, which is an easy application of the commutation relation Π D(β) = D(-β) Π, with the latter being derived from Π a = -a Π. Using Eq. <ref>, we can remove all the explicit parity projectors in Eq. <ref>: |Ψ_x⃗(β)⟩ = ( ∏_i=1^MD(β) + (-1)^x_i ⊕ x_i-1 D(-β)/2) |0⟩, where for notational simplicity we have prepended the bit-string x⃗ by x_0 ≡ 0. Note that the order of the product does not matter since the terms commute with each other. It follows that: [ x⃗|β] = ⟨ 0 |( ∏_i=1^MD(-β) + (-1)^x_i ⊕ x_i-1 D(β)/2) ( ∏_i=1^MD(β) + (-1)^x_i ⊕ x_i-1 D(-β)/2) | 0 ⟩ = ⟨ 0 |( ∏_i=1^M[ 1/2 + (-1)^x_i ⊕ x_i-1D(2β) + D(-2β)/4] ) | 0 ⟩. There are multiple methods to encode the measurements of the QRC. Representing every binary string of measurement outcomes as the feature, the outputs of the QRC are all the probabilities {[ x⃗ | β] }_x⃗∈{0,1}^M. From Eq. <ref>, it is not hard to see that, when regarded as functions of β, these 2^M features linearly span an (M+1)-dimensional function space that has the following basis functions: f_k(β) := ⟨0| D(2kβ) |0⟩ = e^-2k^2 |β|^2, k=0,1,2,…,M. Therefore, the set of all functions realizable by the QRC combined with the linear layer is { c_0 f_0(β) + c_1 f_1(β) + ⋯ + c_M f_M(β) : c_0, c_1, …, c_M ∈ℝ}. Given the large redundancy of the output feature encoding manifested above, a compact representation can be the central moments μ_i_1, i_2, …, i_k(β) := 𝔼[ ( x_i_1 - 𝔼[x_i_1] ) ( x_i_2 - 𝔼[x_i_2] ) ⋯( x_i_k - 𝔼[x_i_k] ) ]. These feature functions contain terms like 𝔼[x_1] 𝔼[x_2], 𝔼[x_1]^2, 𝔼[x_1] 𝔼[x_2] 𝔼[x_3], and so on. In particular, for any k, 𝔼[x_1]^k = ( 1/2 - e^-2|β|^2/2)^k can be written as a linear combination of central moments of order less than or equal to k. It follows that the QRC using at most k-th order central moments combined with the linear layer can realize (but is not limited to) the following vector space of functions: ℋ_parity := { c_0 + c_1 e^-2 |β|^2 + c_2 ( e^-2|β|^2)^2 + ⋯ + c_k ( e^-2 |β| ^2)^k: c_0, c_1, …, c_k ∈ℝ}. Note that ℋ_parity is exactly the set of all degree-k polynomials in the variable w ≡ e^-2 |β|^2. Suppose that in some classification task the magnitude of the input has an upper bound, say |β|≤ 1; then w takes values in the closed interval [e^-2, 1]. By the Stone–Weierstrass theorem, in the limit k →∞, ℋ_parity approximates all continuous functions of w on [e^-2, 1], and hence all continuous functions of |β| on [0,1].
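This convergence can be checked numerically; the sketch below (assuming NumPy, with an arbitrary continuous target function of our choosing) fits degree-k polynomials in w and shows the approximation error shrinking with k:

```python
import numpy as np

beta = np.linspace(0.0, 1.0, 200)
w = np.exp(-2 * beta ** 2)            # w ranges over [e^-2, 1]
target = np.sin(3 * beta)             # an arbitrary continuous test function

for k in (1, 3, 5, 9):
    coeffs = np.polyfit(w, target, deg=k)
    err = np.max(np.abs(np.polyval(coeffs, w) - target))
    print(f"k = {k}: max |error| = {err:.2e}")
```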
§ CLASSICALIZED RESERVOIR WITH THE MAXWELL–BLOCH APPROXIMATION To investigate the role that quantumness plays in our reservoir's performance, we performed a simulation of our quantum reservoir in a way that its behavior is classicalized, using the Maxwell–Bloch approximation <cit.>. This is a mean-field approximation that removes entanglement between qubit and cavity and keeps the cavity state coherent. The statistics extracted from the simulation are the same, namely the probability of qubit excitation and the cavity parity, but the underlying measurements are assumed to have no effect on the qubit or cavity. That is, the classicalized version of the reservoir removes the effects of measurement back-action as well. §.§.§ Setup: Hamiltonian and Lindbladian The Lindblad master equation <cit.> for the Hamiltonian <ref> can then be written as: ρ̇ =-2π i[H,ρ] + 1/T_1,storageD_ρ[a] + 1/T_ϕ,qubitD_ρ[σ_z] +1/T_1,qubit((1-n̅)D_ρ[σ]+n̅D_ρ[σ^†]) D_ρ[L] = Lρ L^† -1/2(L^† L ρ + ρ L^† L ) 1/T_ϕ,qubit = 1/T_2,qubit - 1/(2T_1,qubit) Here, n̅ is the expected thermal occupation of the qubit. This quantity, along with the T_1 and T_2 of the storage and qubit, is selected to match those of the experiment. The T_2 and thermal occupation of the cavity are negligible compared to all other terms on the time scale of the experiment and so are neglected. For an operator whose expected value we track, we can write an equation of motion for that expected value: d⟨O⟩/dt =-2π i⟨[O,H]⟩ + 1/T_1,storage⟨D_O[a]⟩ + 1/T_ϕ,qubit⟨D_O[σ_z]⟩ +1/T_1,qubit((1-n̅)⟨D_O[σ]⟩+n̅⟨D_O[σ^†]⟩) D_O[L] = L^† O L -1/2(L^† L O + OL^† L ) §.§.§ Equations of Motion Tracking only first-order central moments (the means), we only need to keep track of ⟨ a⟩, ⟨σ⟩, and ⟨σ_z ⟩. We assume that there is no entanglement between the qubit and cavity, meaning that at all times, for any operator Q on the qubit and C on the cavity, we have ⟨ QC⟩ = ⟨ Q⟩⟨ C⟩. We also assume that the state in the cavity remains coherent and therefore ⟨ a^† a⟩ = |⟨ a ⟩|^2. Using Eq. <ref> we can derive the following equations of motion: d⟨a⟩/dt = 2π×(-iχ/2⟨σ_z⟩⟨ a ⟩ -id_storage^*(t) ) - 1/(2T_1,storage)⟨ a ⟩ d⟨σ⟩/dt = 2π×( id_qubit^*(t)⟨σ_z ⟩ -iχ⟨σ⟩ |⟨ a⟩|^2) - 2/T_ϕ,qubit⟨σ⟩ - 1/(2T_1,qubit)⟨σ⟩ d⟨σ_z⟩/dt = 2π×(2i(d_qubit(t)⟨σ⟩ -d_qubit^*(t)⟨σ⟩^*)) - 1/T_1,qubit(⟨σ_z⟩ - (2n̅-1) ) §.§.§ Gate effects For operations in our reservoir which take place over very small time scales compared to the period of the cross-Kerr interaction term, such as qubit gates, we implement these as gates directly in our system. For a π/2-pulse along the j-axis, the resulting unitary is given by 1/√(2)(I-iσ_j). Within all expectations that include them, this results in the transformation: 1/2(σ +i[σ_j, σ] + σ_jσσ_j ) From this, we get, for j=x, σ→σ+σ^†/2 -iσ_z/2, σ_z→σ_y = iσ -iσ^† and for j=y, σ→ -σ_z/2 + σ-σ^†/2, σ_z→ -σ_x = -σ-σ^† Finally, for the simple X_π pulse, the transformation is: σ→σ^†, σ_z→ -σ_z
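Between such gate applications, the equations of motion above form a small system of ordinary differential equations; a minimal sketch of their integration, assuming SciPy, with the drive terms d_storage(t) and d_qubit(t) supplied by the caller:

```python
import numpy as np
from scipy.integrate import solve_ivp

def mb_rhs(t, y, chi, T1s, T1q, Tphi, nbar, d_stor, d_qub):
    # y packs the complex means as reals: [Re<a>, Im<a>, Re<s>, Im<s>, <sz>]
    a = y[0] + 1j * y[1]
    s = y[2] + 1j * y[3]
    sz = y[4]
    da = 2 * np.pi * (-0.5j * chi * sz * a - 1j * np.conj(d_stor(t))) - a / (2 * T1s)
    ds = (2 * np.pi * (1j * np.conj(d_qub(t)) * sz - 1j * chi * s * abs(a) ** 2)
          - 2 * s / Tphi - s / (2 * T1q))
    z = d_qub(t) * s
    dsz = 2 * np.pi * 2j * (z - np.conj(z)) - (sz - (2 * nbar - 1)) / T1q
    return [da.real, da.imag, ds.real, ds.imag, dsz.real]

# one continuous segment; qubit gates are applied between segments as the
# discrete maps listed above (e.g. X_pi: s -> conj(s), sz -> -sz)
# sol = solve_ivp(mb_rhs, (0.0, t_seg), y0,
#                 args=(chi, T1s, T1q, Tphi, nbar, d_stor, d_qub), rtol=1e-8)
```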
As for the CNOD operation, the way this was implemented in our experimental reservoir involved two opposite displacements at the same frequency, with a π-pulse on the qubit in between. Note, however, that in the mean-field approximation, ⟨σ_z a⟩ = ⟨σ_z⟩⟨ a ⟩, and since our qubit was on the Bloch-sphere equator during exposure to data and application of the CNOD, this term goes to zero (and any deviation from the Bloch equator is due to noise and contains no information). Therefore, ⟨ a ⟩ depends on the drive term only, and the CNOD has no effect on the cavity, as the cavity is displaced and then undisplaced by an equal amount. Since the only significant term affecting the qubit on the time scale of the CNOD is the |⟨ a ⟩|^2 term, which increases and decreases symmetrically about the π pulse applied to the qubit, the effect of the cavity is cancelled out and the entire procedure is equivalent to only a π-pulse on the qubit. This is, therefore, the only part of our CNOD that survives in the “classicalized" version of our reservoir control protocol. §.§.§ Transferring our reservoir controls into the Maxwell-Bloch setting: investigating our protocol's performance in a classical setting As seen in Fig. <ref>(a), simulations of our reservoir using Maxwell-Bloch for classifying the spiral yielded classification accuracies that were no better than random guessing. The reason for this is that every point in the spiral has a point in the opposite class with identical magnitude but opposite phase in the quadrature space. However, the Maxwell-Bloch version of our reservoir is unable to differentiate two time-independent signals with identical magnitudes. As discussed in the previous section, our version of the CNOD only affects the qubit with a π-pulse operation. Therefore, the only changes to the cavity are due to the displacement caused by the data signal as a driving term, as seen from Eq. <ref>: since ⟨σ_z⟩ =0, the qubit state does not influence our cavity during this stage. Consequently, when we measure the cavity parity, this contains information about the magnitude of the signal but not its quadrature phase. The qubit, with the drives turned off, only has a dependence on the cavity in terms of ⟨σ_z⟩ |⟨ a ⟩|^2, as seen in Eq. <ref>. The qubit excitation probability, which we will obtain later after a Y_π/2 pulse, is a nonlinear function of the integral of the cavity's expected photon number over time. Therefore, information about the signal quadrature phase is absent from qubit measurements as well. Importantly, this is a flaw that our particular reservoir protocol suffers from in the Maxwell-Bloch setting. For instance, having unconditional displacements that differ for each run of the reservoir could help turn quadrature phase information in the signal into information about the storage cavity's displacement. This limitation interestingly had a very minimal effect on the performance of the classicalized reservoir when it came to classifying radio-frequency communication modulations. This can be seen in Fig. <ref>(b), whose shot-limited accuracies were close to the quantum reservoir simulation accuracies of Fig. <ref>(b) with the same number of central moments. This is likely because differences between consecutive quadrature phases of incoming signals now introduce interactions that affect both qubit and cavity. Another point noticed in our investigation with Maxwell-Bloch was that, while the performance hierarchy of third-order-cumulant-based features outperforming the sampling feature was maintained (see Fig. <ref>), the advantage was dramatically lower.
The lack of variation is the reason why only third-order central moments are plotted in Fig. <ref>, as the performance curves would lie on top of each other. There are several reasons for this. Firstly, there are no longer higher-order correlations present due to the effects of entanglement between excitation measurements of the qubit and parity measurements of the cavity. Secondly, since this framework does not consider the effects of measurement back-action, it removes correlation structures across measurement times that the quantum version takes advantage of. Nevertheless, because the signal itself induces some degree of correlation across measurements, considering higher-order central moments did yield a weak advantage in classification accuracy, within the limitations of shot noise. This highlights that the computational benefits our system reaps from considering signal correlations and central moments emerge from the information contained there, rather than being a consequence of the nonlinearity required to compute these values (since introducing the same correlation features for Maxwell-Bloch did not substantially change accuracy). § LEAKY ECHO STATE NETWORKS (LESN) §.§ Background Leaky echo state networks <cit.> are a generalization of echo state networks (ESN) <cit.> that were found to outperform their parent design in the prediction and classification of slow dynamic systems, noisy time series and time-warped dynamic patterns <cit.>. Given a sequence of inputs {u_n}_n=1^N, u_n∈ℝ^D, the state of the LESN reservoir after the n^th input, x_n, is given by the following equation: x_n = (1-aγ)x_n-1 + γ f(W_in u_n + W_resx_n-1). Here, a,γ are fixed hyper-parameters in [0,1], and f is a nonlinear activation function. W_in is the R× D “encoding" matrix whose elements are selected uniformly at random from the interval [-w_in,w_in], where D is the dimension of the input, R is the dimension of the reservoir, and w_in is a fixed hyper-parameter. W_res is the R× R “reservoir" matrix. This matrix is constructed by first generating a matrix W_R, which is a random matrix whose elements are chosen to be zero with probability 1-p_s and a number sampled uniformly from the interval [-1,1] with probability p_s. The largest-magnitude singular value of this matrix, λ_max(W_R), is computed, and the reservoir matrix W_res is defined as: W_res = ρ/| λ_max(W_R) |W_R where ρ is a fixed scaling hyper-parameter. Finally, the n^th output of the reservoir, y_n, is given by y_n = W_train x_n, where W_train is a C× R trainable linear layer and C is the dimension of the desired output vector. §.§ Digital reservoir comparison As a way of benchmarking the computational capacity of our physical reservoir, we compared it to the performance of a digital reservoir (an LESN) at varying widths and depths. We focused on the accuracy of classifying the spiral, since this is the most direct point of comparison, as the goal was to classify individual points of a signal rather than multiple separate signals per shot, as in the time-dependent case. Here, for a depth of N, we sent in N identical two-dimensional data points (x,y) (so D=2) corresponding to the I and Q components of the signal that our experimental reservoir is meant to process, i.e. the spiral point coordinates. We used the rectified linear unit (ReLU) as our nonlinear activation function.
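A minimal NumPy sketch of this construction and update rule (parameter values are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lesn(D, R, w_in, rho, p_s):
    W_in = rng.uniform(-w_in, w_in, size=(R, D))
    W_R = rng.uniform(-1.0, 1.0, size=(R, R)) * (rng.random((R, R)) < p_s)
    s_max = np.max(np.linalg.svd(W_R, compute_uv=False))  # largest singular value
    return W_in, rho * W_R / s_max

def run_lesn(U, W_in, W_res, a=0.5, gamma=0.5):
    # U: (N, D) input sequence; returns the reservoir state after the last input
    x = np.zeros(W_res.shape[0])
    for u in U:
        x = (1 - a * gamma) * x + gamma * np.maximum(W_in @ u + W_res @ x, 0.0)
    return x
```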
Traditionally, sigmoid or tanh activation functions are used for LESNs <cit.>, but ReLU was found to work better for our application. To investigate how “trivial" it was to generate a classifier with the same capacity as our experiment, we generated 100 such LESNs at random and computed their average performance and standard deviation. The hyper-parameters a, γ, w_in, ρ, and the sparsity (p_s) were tuned in sweeps to improve performance as much as possible for each width and depth, in order to give the digital reservoir a competitive chance. The reservoir's computational capacity varies with the number of shots. Comparing Fig. <ref>(b) to Fig. <ref>, we found that, at around 10^3 shots, our physical reservoir achieved a performance comparable to that of a 32-dimensional LESN reservoir, as seen by the fact that both oscillate around 99% classification accuracy, within about one percent. In Fig. <ref>, a 64-dimensional reservoir was found to be enough to classify the spiral data points with perfect accuracy for a fairly wide choice of parameters. Our reservoir, then, achieved at least the capacity of a 64-dimensional LESN reservoir past around 5× 10^3 shots. Given the roughly 2× 16 dimensions of Hilbert space used by our reservoir, this computational capacity is on the order of what would be expected for a large shot number.
Is photo-evaporation an important destruction mechanism?

National Centre for Nuclear Research, ul. Pasteura 7, 02-093 Warsaw, Poland; [email protected]
INAF - Osservatorio astronomico d'Abruzzo, Via Maggini SNC, 64100, Teramo, Italy
INFN - Sezione di Perugia, Via A. Pascoli SNC, 06123, Perugia, Italy
SISSA, Via Bonomea 265, Trieste, Italy
IFPU – Institute for fundamental physics of the Universe, Via Beirut 2, 34014 Trieste, Italy
Astronomical Observatory Institute, Faculty of Physics, Adam Mickiewicz University, ul. Słoneczna 36, 60-286 Poznań, Poland
INAF – Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy

We investigate the role of photo-evaporation of dust exposed to the radiation field from hot young stars and planetary nebulae (PNe) as a possible destruction mechanism of dust grains in the interstellar medium (ISM). We estimate the photo-evaporation induced by the feedback of individual or clustered young stars and of PNe, as well as in the presence of a variable radiation field scaled with the interstellar radiation field. For PNe, we investigate the photo-evaporation of both dust grains already present in the ISM and those formed in the last phases of the evolution of thermally pulsing asymptotic giant branch (TP-AGB) stars. We include the dust photo-evaporation rate in models of dust evolution in galaxies for different assumptions about the dust growth scenario, dust-to-gas ratios, star formation histories, and initial mass functions of the stars. For all the cases considered, we find that both photo-evaporation from young stars and from PNe are negligible with respect to other dust removal processes such as destruction by supernova shocks, astration, and possibly outflows. Grains are stable against photo-evaporation if they are exposed to a radiation field that is up to 10^7 times the interstellar radiation field. Dust grains of size ≥ 0.01 μm are not efficiently destroyed by photo-evaporation even in the presence of a strong radiation field.

Dust survival in harsh environments
A. Nanni 1,2, S. Cristallo 2,3, D. Donevski 1,4,5, M. J. Michałowski 6, M. Romano 1,7, P. Sawant 1
Received ...; accepted ...

§ INTRODUCTION Formation and survival of dust grains in galaxies have paramount implications for many astrophysical processes. Dust grains are responsible for the build-up of different molecules in space, and for gas cooling, promoting the formation of stars (e.g., <cit.>). Dust absorbs light mostly at ultraviolet (UV) and visible wavelengths and re-emits this energy in the infrared. Therefore, the physical quantities derived from spectral energy distribution (SED) fitting, e.g. the star formation rate (SFR), must take the effect of dust into account. Despite the significance of dust formation, destruction, and survival in the interstellar medium (ISM), such mechanisms remain controversial. UV photons impinging on dust grains remove electrons from their surface and heat the gas. Through such a process grains become positively charged <cit.>. Photo-evaporation of dust grains is suggested to be relevant for the life-cycle of polycyclic aromatic hydrocarbons (PAHs) <cit.>. Indeed, the ISM may be enriched with small molecules by the photo-evaporation of PAHs, while in photo-dissociation regions PAHs may be created when UV photons and/or shock waves break down carbon grains <cit.>.
Nevertheless, the role of dust photo-evaporation as a possibly efficient destruction mechanism of dust grains in the ISM has not been equally well explored so far. Dust grains undergo photo-evaporation due to the intense radiation field in galaxies with high SFRs. Despite this, in simulations of dust evolution in galaxies, the role of such a mechanism in relation to other processes, such as astration, supernova (SN) shocks, and large-scale galactic outflows, has not been thoroughly assessed yet. In low-metallicity galaxies and the early Universe, where stars are typically hotter than at solar-like metallicity, and where the formation of massive stars with intense radiation fields may be favoured <cit.>, photo-evaporation may be potentially relevant. Recent observational studies show that the specific mass of dust (sM_dust=M_dust/M_⋆, where M_dust and M_⋆ are the masses of dust and of stars, respectively) rises quickly at young ages and then decreases with age. Different classes of galaxies show evidence of this trend: massive and dwarf star-forming galaxies in the local Universe <cit.>, dusty galaxies and Lyman Break Galaxies identified at very high redshifts (2<z<6, <cit.>), as well as quiescent but dusty galaxies at low and intermediate redshifts (<cit.>). The core of the correlation between the sM_dust and the specific star formation rate (sSFR=SFR/M_⋆) is believed to be an age-evolutionary sequence. From a theoretical viewpoint, various models tracking the dust, gas and metal content of galaxies that include recipes for dust formation, grain growth and destruction, as well as inflows and outflows, are able to reproduce the observed decline of sM_dust with sSFR <cit.>. The majority of models require strong outflows driven by stellar feedback and SN explosions to explain the decline of sM_dust with age. Predicted outflow efficiencies (ML=Ṁ/SFR, where Ṁ represents the mass outflow rate) are up to 80. In contrast to what simulations predict, recent observational works on both local and high-z sources (e.g. <cit.>) found a typical value of ML≈1. Using a novel spectral selection, <cit.> examined quiescent galaxies that showed no signs of energetic feedback from embedded active galactic nuclei. A significant scatter accompanies the anti-correlation between sM_dust and age, suggesting distinct dust removal pathways over a range of timescales. Dust destruction from the feedback of Type Ia SNe <cit.> or planetary nebulae (PNe) <cit.>, or heating from the winds of thermally pulsing asymptotic giant branch (TP-AGB) stars, has also been proposed to explain the observed decline of sM_dust with age in quiescent galaxies <cit.>. This all motivates us to re-examine under what conditions dust can survive in galaxies and what processes are dominant in producing the observed anti-correlation of sM_dust and stellar age. Along with the usually considered dust removal processes in the ISM, such as SN shocks, astration, and outflows, we additionally probe whether photo-evaporation due to the intense radiation input from massive stars and PNe can be an efficient destruction process for dust grains. § METHOD: DUST EVOLUTION MODEL In this work, we investigate the efficiency of dust photo-evaporation by including this process in calculations that follow dust evolution in the ISM of galaxies. Here we do not consider all the details of dust condensation and destruction during the process of star formation or in PNe, but only the effect of radiation feedback from stars already in their main sequence, i.e.
in HII regions, or in the PN phase. §.§ General framework In order to model gas and dust evolution in the ISM of galaxies, we first compute the chemical gas enrichment from stars by adopting the one-zone chemical evolution code omega <cit.>, which includes population III stars <cit.>, Type II <cit.> and Type Ia supernovae <cit.>, and TP-AGB stars <cit.>. We therefore compute the evolution of the gas mass M_gas, as well as the mass of each metal species, M_gas,i, and the mass of each dust species j, either silicates (olivine and pyroxene) or carbon, M_dust,j, similarly to other works in the literature <cit.>. If only inflow is neglected, we obtain: dM_gas/dt = dM^SP_gas,ej/dt - SFR - ML × SFR - ∑_j dM_dust,j/dt, where the last term takes into account the amount of metals locked into dust grains. dM_gas,i/dt = dM^SP_gas,ej,i/dt - SFR M_gas,i/M_gas - ML × SFR M_gas,i/M_gas - ∑_j (n_i m_i/m_dust,j) dM_dust,j/dt, where the first two terms of each equation are computed by omega, to which we refer for all the details. The first term represents the gas return from the stellar population (SP). The initial mass function (IMF) of stars is assumed to be constant with time. The integral is performed between the minimum (M_L=0.8 M_⊙) and the maximum (M_U=120 M_⊙) mass of stars. The IMF is normalised in such a way that: ∫^M_U_M_L m IMF(m) dm = 1 M_⊙ We here test two different IMFs: the Chabrier IMF <cit.> and a top-heavy IMF of the form: IMF(m) ∝ m^-α, with α=1.35. Such an IMF favours the formation of massive stars and can impact the efficiency of the destruction processes involved. The second term of Eqs. <ref> and <ref> is the astration of gas and metals due to the formation of stars, while the third term is gas removal from the ISM operated by outflows. The last term of Eq. <ref> represents metal depletion from the gas phase onto dust grains, where n_i is the number of atoms of the element i in the monomer of the dust species j, and m_i and m_dust,j are the atomic masses of the element i and of the dust monomer j, respectively. The evolution of dust grains of species j is computed as: dM_dust,j/dt = dM^SP_dust,ej,j/dt - SFR M_dust,j/M_gas - dM^SN_destr,j/dt - dM^YSs_destr,j/dt - dM^PN_destr,j/dt - ML × SFR M_dust,j/M_gas + dM_growth,j/dt. The first term of the equation represents the dust enrichment, where the dust yields are approximated by making use of the metal yields: dM^SP_dust,ej,j/dt = (f_key,j/n_key,j) dM^SP_gas,ej,key,j/dt (m_dust,j/m_key,j), where f_key,j is the fraction of the key element[The key element is defined as the least abundant among the elements that form a certain dust species divided by its number of atoms in the compound.] locked into dust grains, n_key,j is the number of atoms of the key element in one monomer of dust, and dM^SP_gas,ej,key,j/dt is the gas mass injection rate of the key element from the SP. We assume f_key,olivine=0.3, f_key,pyroxene=0.3, f_key,carbon=0.5, f_key,iron=0.01 for TP-AGB stars, and f_key,olivine=0, f_key,pyroxene=0.5, f_key,carbon=0.5, f_key,iron=0.5 for SNe <cit.>. The quantity m_key,j is the atomic mass of the key element. The second term in Eq. <ref> represents dust astration due to star formation. The term dM^SN_destr,j/dt is the rate of dust destruction by SN shocks.
We add the terms dM^YSs_destr,j/dt and dM^PN_destr,j/dt, corresponding to photo-evaporation due to young stars and PNe, respectively. The destruction term for PNe includes both the dust destroyed in the ISM (dM^PN,ISM_destr,j/dt) and in-situ (dM^PN,in-situ_destr,j/dt): dM^PN_destr,j/dt = dM^PN,ISM_destr,j/dt + dM^PN,in-situ_destr,j/dt Similarly to Eqs. <ref> and <ref>, dust removal by the outflow is parameterised through the mass-loading factor. The last term is dust growth occurring in the ISM. We adopt the commonly used delayed star-formation history (SFH) for the galaxy: SFR ∝ (t/τ^2) e^-t/τ, where τ=10, 1000 Myr are representative of a rapid burst of star formation and of a more continuous star formation, respectively. The SFH is normalised in such a way that M_⋆=1 M_⊙ after 13 Gyr. We checked, however, that the same kind of normalisation at different final ages of the galaxy does not change the results. §.§ Dust destruction from SN shocks The dust destruction operated by SN shocks is modelled as in many works in the literature, e.g. <cit.>. The destruction time-scale is given by: τ_destr,SN = M_gas/(R_SN(t) M_swept), where M_gas evolves according to Eq. <ref>, R_SN is the SN rate and M_swept is the mass of gas swept up in each SN event. We here assume M_swept=1200 M_⊙ <cit.>. The destruction rate of dust is therefore: dM^SN_destr,j/dt = M_dust,j/τ_destr,SN. §.§ Dust photo-evaporation The stellar parameters of individual stars have been calculated with the FUNS code <cit.>. Surface luminosities and temperatures have been extracted when 10% of the central hydrogen has been burnt. We consider a metallicity of Z=0.0001 plus an enrichment of α-elements (0.7 dex for oxygen, and 0.4 dex for the other α-elements) and stellar masses between 0.8 and 120 M_⊙. However, we verified that increasing the upper limit of the stellar mass up to 300 M_⊙ does not significantly change the results. The low metallicity and stellar evolutionary phase allow for the maximum possible effective temperatures, and therefore the largest possible dust photo-evaporation. At each distance from the star we compute the dust equilibrium temperature T_j for each dust species j, either carbon or silicate: ∫_ν κ_abs,j(ν) B(ν)(T_eff) W(r) dν = ∫_ν κ_abs,j(ν) B(ν)(T_j) dν, where κ_abs,j(ν) is the mass absorption coefficient of the dust species j, in cm^2 g^-1, as a function of the frequency, B(ν)(T_eff) and B(ν)(T_j) are the black-body emission at the effective temperature of the star and at the dust temperature, respectively, and W(r) is the dilution factor of the radiation with the distance from the star, r, which for stars of radius R_* reads: W(r) = 1/2[1 - √(1-(R_*/r)^2)]. Stars are seldom born in isolation. Therefore, we consider the more realistic case of stars situated in stellar clusters. We assume as the radiation field the one obtained from a simple stellar population (SSP) computed for a single burst of star formation with Z=0.0001 and both the Chabrier and top-heavy IMFs by means of the code PÉGASE.3 <cit.>, with a lower and upper mass limit of 0.8 and 120 M_⊙, respectively. We adopt the stellar population based on “Padova" tracks <cit.>. We consider a zero-age SSP, which provides the maximum photo-evaporation efficiency, and an upper limit for the star cluster mass of 10^5 M_⊙ <cit.>.
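A minimal sketch of how the equilibrium condition above can be solved for T_j, assuming SciPy, cgs units, and the mass absorption coefficient tabulated on a frequency grid (the sublimation radius R_subl,j then follows by varying r until T_j reaches the sublimation temperature):

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # Planck, light speed, Boltzmann (cgs)

def planck(nu, T):
    return 2 * H * nu ** 3 / C ** 2 / np.expm1(H * nu / (KB * T))

def dilution(r, R_star):
    # geometric dilution of the stellar radiation field, W(r)
    return 0.5 * (1.0 - np.sqrt(1.0 - (R_star / r) ** 2))

def t_dust(nu, kappa, T_eff, W):
    # balance absorbed and emitted power for one dust species
    absorbed = np.trapz(kappa * planck(nu, T_eff) * W, nu)
    balance = lambda T: np.trapz(kappa * planck(nu, T), nu) - absorbed
    return brentq(balance, 1.0, 5000.0)
```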
In this case, Eq. <ref> becomes:

∫_ν κ_abs,j(ν) L(ν)_SSP/4π r^2 dν = ∫_ν κ_abs,j(ν) B_ν(T_j) dν,

where L(ν)_SSP is the spectrum of the SSP. The mass absorption coefficients in Eqs. <ref> and <ref> for the main populations of silicate and carbon are computed starting from the optical properties of <cit.> for astronomical silicate and graphite in the wavelength range 0.001–1000 μm, which covers the entire emission spectrum of young stars and PNe[Absorption and scattering coefficients are available at <https://www.astro.princeton.edu/~draine/dust/dust.diel.html>]. We adopt the same size distribution as <cit.>. We do not consider PAH evolution in our treatment, since PAHs usually represent a minor component in terms of the total dust mass in the ISM. For this reason, we exclude from the grain size distribution of carbon the contribution of small grains of the size given by Eq. 2 of <cit.>. We compute the variation of the dust size with time due to the evaporation of dust heated by starlight, (da/dt)^sub_j, in analogy to Eq. 3 of <cit.> and adopting the data in their Table 1:

(da/dt)^sub_j = -1/ρ_j √(A_j m_H/2π k T_j) P_0 exp(-A_j m_H L_j/k T_j),

where ρ_j and L_j are the mass density and the latent heat of sublimation of the j-th dust species, A_j is its mean molecular weight, P_0 is the saturated vapour pressure, k is the Boltzmann constant, and m_H is the hydrogen mass. We conservatively adopt the same data as for olivine also to compute the evaporation of pyroxene. Indeed, olivine is predicted to evaporate at a lower temperature than pyroxene; as a consequence, our choice maximises the evaporation efficiency of silicates. Small grains evaporate more quickly than large ones; therefore, in order to find an upper limit to dust photo-evaporation, we assume that the carbon and silicate grains formed have a typical size of a_j = 0.01 μm, which is about one order of magnitude smaller than the peak of the distribution found by <cit.> for the Milky Way and adopted in <cit.>. We define the dust temperature at which the grains are stable against photo-evaporation as the temperature at which the time needed for the evaporation of a grain of size a_j = 0.01 μm is greater than or equal to a critical age, age_crit, which we set equal to the age at which the galaxy has built its entire stellar mass, age_crit = 13 Gyrs:

a_j/(da/dt)^sub_j = age_crit.

We obtain T_carbon ≈ 1330 K for carbon dust and T_silicate ≈ 877 K for silicates. The distance from the source at which the equilibrium temperature equals this sublimation temperature (Eqs. <ref> and <ref>) is the dust sublimation radius, R_subl,j. In the case of individual stars (ISs) or PNe, at each time-step the mass of dust destroyed in the volume of a shell of thickness R^IS/PN_subl,j - R_* is:

M^IS/PN_destr,j = M_dust,j/M_gas 4/3 π (R^3_subl,j - R_*^3) ρ_gas.

For stars in the ISM, ρ_gas is either the gas density in the region of star formation or that of the medium where the PN resides. For young stars, we set as density that of a molecular cloud, ρ_gas = 10^5 m_H μ, where μ=1.37 is the mean molecular weight. Such a high value of the density is typical of star-forming clumps and cores <cit.>. The contribution of the destruction by each isolated star is weighted over the IMF (which is normalised as in Eq. <ref>):

M^IS,W_destr,j = ∫^M_U_M_L IMF(m) M_dust,j/M_gas 4/3 π (R^3_subl,j - R_*^3) ρ_gas dm.
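The stability criterion a_j/(da/dt)^sub_j = age_crit above is a one-dimensional root-finding problem, sketched below. The material constants (ρ_j, A_j, L_j, P_0) are rough, carbon-like placeholders rather than the Table 1 data of the cited work, so the output only illustrates the procedure and will not reproduce the quoted T_carbon or T_silicate exactly.

```python
import numpy as np
from scipy.optimize import brentq

kB, mH = 1.381e-16, 1.673e-24     # erg/K, g (cgs)
rho = 2.2                         # g cm^-3, grain material density (placeholder)
A = 12.0                          # mean molecular weight (placeholder)
L = 7.3e11                        # erg g^-1, latent heat of sublimation (placeholder)
P0 = 1.0e14                       # dyn cm^-2, saturated vapour pressure (placeholder)

a = 0.01e-4                       # grain size: 0.01 micron in cm
age_crit = 13.0e9 * 3.156e7       # 13 Gyr in s

def dadt(T):                      # |da/dt| from the sublimation formula above
    return (1.0 / rho) * np.sqrt(A * mH / (2.0 * np.pi * kB * T)) \
           * P0 * np.exp(-A * mH * L / (kB * T))

T_stab = brentq(lambda T: a / dadt(T) - age_crit, 300.0, 3000.0)
print(f"grains stable below ~ {T_stab:.0f} K for these placeholder constants")
```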
For the calculation of the dust destroyed by a single PN in the ISM we use Eq. <ref> and assume ρ_gas = 50 m_H μ, a value typical of the cold atomic medium <cit.>, which is an upper limit for the density of the medium in which PNe are expected to reside. The quantity M_dust,j/M_gas is the dust-to-gas ratio in the ISM calculated from Eqs. <ref> and <ref>. Planetary Nebulae, however, are complex objects in which the central hot white dwarf is surrounded by dust and gas at a higher density than that of the cold atomic medium. Dust is produced during the TP-AGB phase, when the star loses mass at high rates (up to a few 10^-5 M_⊙ yr^-1). For computing the in-situ dust destruction in PNe we assume ρ_gas = 10^4 m_H μ <cit.> and an upper limit for the total dust-to-gas ratio of 0.01 <cit.>. We assume that each of the dust species considered (olivine, pyroxene and carbon) has M_dust,j/M_gas equal to one third of the total value. In case the stars are born in clusters, we assume that the radiation comes from a point source. In this case, the mass of dust destroyed is the one contained in a sphere of radius R^SP_subl,j:

M^SP_destr,j = M_dust,j/M_gas 4/3 π R^3_subl,j ρ_gas,

where ρ_gas is assumed to be the same as in the case of individual stars (ρ_gas = 10^5 m_H μ). The final dust destruction operated by star formation is therefore computed given the SFR as a function of time:

dM^YSs_destr,j/dt = M^IS,W/SP_destr,j/M_⊙ SFR.

The photo-evaporation rate induced by PNe is instead calculated as:

dM^PN,ISM/in-situ_destr,j/dt = M^PN,ISM/in-situ_destr,j dn_AGB/dt,

where dn_AGB/dt is the birth rate of the AGB stars that end their evolution as PNe, which is extracted from the code omega. We assume for all the PNe an effective temperature T_eff = 2×10^5 K and a luminosity L = 4×10^4 L_⊙, which are upper-limit values for PNe with an initial mass of 6 M_⊙ <cit.>.

§.§ Dust growth in the ISM

The dust destruction processes included in our calculations depend on the dust-to-gas ratio as a function of time. Since such a quantity varies if dust growth in the ISM is included, we analyse two extreme cases: 1) no dust growth in the ISM; 2) fully efficient dust growth in the ISM. The variation of the mass of dust due to growth is expressed as:

dM_growth,j/dt = 4π da_j/dt a_j^2 ρ_j n_s,j,

where da_j/dt is the variation of the dust size due to the addition of atoms and molecules on the grain surface and is computed following <cit.>, and n_s,j is the number of seed nuclei, computed by dividing the mass of each dust species by the mass of one dust grain <cit.>. For this calculation we implicitly assume that all the grains in the ISM can potentially act as seed nuclei.

§ RESULTS

§.§ A simple case

In order to give a sense of how much dust is destroyed for each solar mass of stars formed and for each PN, we here perform some simple calculations by considering a single value of 0.01 for the dust-to-gas ratio in the ISM <cit.>. As the sublimation radius of dust in Eqs. <ref>, <ref> and <ref> we adopt that of silicates, R_subl,silicate, since it is larger than that of carbonaceous grains.

§.§.§ Dust photo-evaporation by isolated young stars

Based on Eq. <ref> we first estimate how much dust is destroyed per M_⊙ of stars formed, with the stars distributed according to the Chabrier or top-heavy IMF, normalised as in Eq. <ref>. For this test, we assumed stars to be formed in isolation. We obtain destroyed dust fractions of ≈ 1.2×10^-8 and ≈ 6.7×10^-8 for the Chabrier and top-heavy IMF, respectively.
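The IMF weighting entering the isolated-star estimate can be sketched as follows, comparing a Chabrier-like and a top-heavy IMF, both normalised so that ∫ m IMF(m) dm = 1 M_⊙. The sublimation radius R_subl(m) is a made-up power law standing in for the FUNS-based radii, R_*^3 is neglected, and the dust-to-gas and density prefactors are dropped, so only the relative weighting of the two IMFs is meaningful.

```python
import math
from scipy.integrate import quad

M_L, M_U = 0.8, 120.0

def chabrier(m):                  # Chabrier-like shape only; matched at 1 M_sun
    if m < 1.0:
        return (0.158 / m) * math.exp(-math.log10(m / 0.079)**2
                                      / (2.0 * 0.69**2))
    return 0.0443 * m**-2.3

def topheavy(m):                  # IMF(m) ∝ m^-1.35
    return m**-1.35

def normalise(imf):               # enforce  int m IMF(m) dm = 1 M_sun
    norm, _ = quad(lambda m: m * imf(m), M_L, M_U)
    return lambda m: imf(m) / norm

def R_subl(m):                    # hypothetical sublimation radius in pc
    return 5.0e-3 * (m / 10.0)**0.5

for name, imf in [("Chabrier", normalise(chabrier)),
                  ("top-heavy", normalise(topheavy))]:
    w, _ = quad(lambda m: imf(m) * R_subl(m)**3, M_L, M_U)
    print(f"{name:9s}: IMF-weighted volume factor = {w:.3e} pc^3")
```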
§.§.§ Dust photo-evaporation by stars in clusters

From Eq. <ref> we find that the dust mass fraction destroyed per solar mass of stars formed in a cluster is ≈ 1.0×10^-8 and ≈ 1.3×10^-7 for stars distributed according to the Chabrier and top-heavy IMF, respectively. In the case of the top-heavy IMF, such an estimate is larger than the one found for isolated stars, while the two values are comparable for the Chabrier IMF. The difference with respect to the isolated-stars case may be due to the different estimates of the flux impinging on the grain surface in Eqs. <ref> and <ref>, and of the total volume cleared of dust in Eqs. <ref> and <ref>. The sublimation radii for the different dust species are R_subl,silicate ≈ 9×10^-3 pc and R_subl,carbon ≈ 5×10^-3 pc. These values suggest that dust is destroyed by photo-evaporation only in the immediate vicinity of the cluster, given that the typical radius of an HII region is a few pc <cit.>.

§.§.§ Dust photo-evaporation by PNe

We estimate from Eq. <ref> that the amount of dust destroyed in the ISM by each PN is ≈ 2.4×10^-11 M_⊙. We obtain that the mass of dust destroyed in-situ by photo-evaporation is ≈ 4.9×10^-9 M_⊙. Such a value is negligible compared to the dust yields produced along the entire duration of the TP-AGB phase <cit.>, and to the dust mass inferred from observations, which is of the order of ≈ 10^-4 M_⊙ <cit.>. Recently, <cit.> have found that the amount of dust possibly destroyed by PN feedback is up to ≈ 60%. Therefore, an alternative mechanism, different from photo-evaporation, may be at work. As in the case of clusters, photo-evaporation is efficient only in the vicinity of the source. We find R_subl,silicate ≈ 7×10^-4 pc and R_subl,carbon ≈ 9×10^-5 pc.

§.§ Dust exposed to an interstellar radiation field (ISRF)-like radiation

Dust in the ISM, as well as in the outer parts of the circumstellar envelopes of TP-AGB stars, is exposed to a background radiation field. We here assess whether photo-evaporation may be relevant when dust is exposed to an external radiation field, as often assumed in the literature <cit.>. In such an approach, the radiation field is parameterised as U × ISRF, where U is a scaling factor which can be up to U_max = 10^7 and the ISRF is taken from <cit.>. For such a maximum value of the scaling factor we obtain from Eq. <ref> <cit.> an equilibrium temperature of T_silicate ≈ 154 K for silicate dust and of T_carbon ≈ 245 K for carbon. Such values are well below the sublimation temperatures derived in Section <ref>, and therefore both silicate and carbon dust grains are stable against evaporation.

§.§ Including dust destruction in dust evolutionary models

We include dust photo-evaporation by considering the case of young stars born in clusters and of PNe, following the description provided in Section <ref>. In Fig. <ref> we show the overall evolution of sM_dust in the ISM (upper panel) and we compare the efficiency of the different dust-depleting mechanisms by plotting the integrated amount of the specific mass of dust destroyed or removed as a function of time, sM_destr = M_destr/M_* (middle and lower panels). In other words, we integrate in time each of the destruction mechanisms corresponding to the terms from the 2nd to the 6th in Eq. <ref> and we divide by the stellar mass[The contribution of photo-evaporation from PNe has been split into the two terms of Eq. <ref>.]. We run a reference model by assuming M_gas,ini = 4 M_*,fin (where M_gas,ini and M_*,fin are the initial mass of gas and the final stellar mass, respectively), τ=1000 Myrs in Eq.
<ref>, M_swept=1200 M_⊙, a Chabrier IMF, and a typical value of the mass-loading factor, ML=1. Note that with such an assumption for ML, astration and outflow provide the same contribution to dust removal. We consider the cases with and without dust growth in the ISM, and we vary M_gas,ini, τ and the IMF. Fig. <ref> demonstrates that destruction from photo-evaporation is orders of magnitude lower than SN-shock destruction, dust astration, and outflow removal. This is true both with and without dust growth in the ISM, and the difference in the destruction efficiency between the two cases is minor. Only the model in which dust growth is not included shows a decline of sM_dust as a function of time, mainly due to dust destruction by Type II SNe. Such a trend is in good qualitative agreement with the one observed in the literature; however, a number of observable parameters, such as the metallicity and the gas fraction, besides the specific dust mass, should be reproduced consistently by models before drawing firm conclusions.

In Fig. <ref> we show the same model as in Fig. <ref> (magenta lines) together with a model representative of gas-rich galaxies, with M_gas,ini = 20× M_*,fin. In the right panel we can appreciate that in this latter case the efficiency of the various removal processes is reduced, due to the lower values of M_dust,j/M_gas (see Eqs. <ref>, <ref> and <ref>). Photo-evaporation remains negligible with respect to the other processes. The plot shows that for gas-rich systems with large M_gas/M_⋆ the predicted sM_dust does not show a decline, in contrast to the case with a smaller M_gas/M_⋆.

In Fig. <ref> we show the same model as in the previous plots (magenta lines) together with a model computed by assuming τ=10 Myrs in Eq. <ref>, representing the case of a very short burst of star formation. In this latter case, the decline of sM_dust with time occurs at earlier times with respect to the case with τ=1000 Myrs, due to the fast production of stars which explode as Type II SNe, and it is followed by an increase of the dust content due to the remaining contribution of Type II SNe, AGB stars and Type Ia SNe. Such a trend, showing a decline followed by an increase of the dust mass, seems to be at odds with the observed one, but it is obtained for the specific choice of parameters (e.g. M_gas,ini/M_⋆,fin) besides the SFH.

In Fig. <ref> we show the effect of assuming different IMFs (either <cit.> or top-heavy). A top-heavy IMF may be typical of low-metallicity environments <cit.>. In the upper panel, the model with the top-heavy IMF shows a more rapid decline with time with respect to the Chabrier case, due to the efficient destruction by Type II SNe, which are more numerous than for the Chabrier IMF. The dust mass produced by the top-heavy model exceeds that of the model in which the Chabrier IMF is employed. Because of the larger sM_dust, the destruction mechanisms are more efficient than in the Chabrier case. Photo-evaporation remains negligible in both cases. For both models, the trend found is in good qualitative agreement with the observations.

§ CONCLUSIONS

We find that photo-evaporation due to young stars and PNe has only a minor role in dust destruction, independently of the assumed efficiency of dust growth in the ISM, the initial mass of gas, the SFH, and the IMF. We do not exclude, however, the possibility of dust being destroyed by shocks generated by fast winds in HII regions and PNe. The investigation of such processes is beyond the scope of this work.
The dust exposed to an interstellar radiation field U× ISRF is stable against sublimation up to the largest value assumed in the literature, U_max=10^7 <cit.>. These findings also imply that the total yields from TP-AGB stars are not normally destroyed by photo-evaporation induced by the ambient radiation field or when the PN phase is reached. The observed trend of increasing sM_dust at early times followed by a decrease is qualitatively well reproduced for the low values of M_gas/M_⋆ found in passive galaxies, while for the larger values of sM_dust representative of systems with a large gas fraction (e.g. DGS) the observed trend is difficult to reproduce unless the outflow is extremely efficient (see also <cit.>). It is worth noticing that observationally we are able to probe specific dust masses above 10^-4 at each redshift. Therefore, at such values we do not expect photo-evaporation to be a relevant dust destruction process, even at low metallicity and with an intense radiation field.

A.N., M.R., P.S. acknowledge support from the Narodowe Centrum Nauki (NCN), Poland, through the SONATA BIS grant UMO-2020/38/E/ST9/00077. D.D. acknowledges support from the NCN through the SONATA grant UMO-2020/39/D/ST9/00720. M.J.M. acknowledges the support of the NCN through the SONATA BIS grant UMO-2018/30/E/ST9/00208 and the Polish National Agency for Academic Exchange Bekker program grant BPN/BEK/2022/1/00110. M.R. acknowledges support from the Foundation for Polish Science (FNP) under the program START 063.2023. We thank the anonymous referee for the careful reading of the manuscript and for her/his thoughtful comments.
SNUTP23-002, KIAS-P23070, LCTP-23-20

Finite N black hole cohomologies

Jaehyeok Choi^1, Sunjin Choi^2, Seok Kim^1, Jehyun Lee^1 and Siyul Lee^3

^1Department of Physics and Astronomy & Center for Theoretical Physics, Seoul National University, Seoul 08826, Korea.
^2School of Physics, Korea Institute for Advanced Study, Seoul 02455, Korea.
^3Leinweber Center for Theoretical Physics, University of Michigan, Ann Arbor, MI 48109, USA.

E-mails: [email protected], [email protected], [email protected], [email protected], [email protected]

We study new cohomologies for the BPS operators of the 𝒩=4 Yang-Mills theory with SU(3) and SU(4) gauge groups, to better understand the black hole microstates. We first study the index of these black hole operators and identify their apparent threshold levels. For SU(3), we find many towers of states and partial no-hair behaviors. We explicitly construct the threshold cohomology in the SU(3) theory. We study throughout this paper a subsector of the field theory corresponding to the BMN matrix theory. We also argue that the BMN sector exhibits a black hole like entropy growth at large N.

§ INTRODUCTION AND SUMMARY

Microscopically counting the black hole entropy <cit.> and better characterizing its microstates are among the central problems in quantum gravity. These questions were also naturally asked in AdS/CFT from its early days <cit.>. The entropy of BPS black holes in AdS_5 has been counted from the CFT dual rather recently <cit.> by studying the index <cit.>. Motivated by this success, we wish to better characterize the BPS microstates which contribute to this entropy. Explicitly constructing these quantum states at strong coupling is difficult, even for BPS states. However, a modest version of this program was posed in terms of the 1-loop BPS states and their cohomologies <cit.>. In this program, the 1-loop BPS states annihilated by a pair of classical supercharges Q, Q^† are studied from Q-cohomologies, assuming that they remain BPS at strong coupling. See <cit.> for a study of perturbative non-renormalization. There has been progress in this program recently <cit.>. In particular, new cohomologies <cit.> and 1-loop BPS states <cit.> beyond the familiar `multi-graviton type' operators were constructed in the SU(2) maximal super-Yang-Mills theory.

Although the SU(2) results already shed interesting light on the black hole microstates, this theory is at best a highly quantum toy model of AdS/CFT. The ultimate goal is to study the SU(N) theory at parametrically large N. With this in mind, in this paper, we find and study the new cohomologies for N>2. We shall detect and construct new cohomologies for the SU(3) and SU(4) theories, and discuss their physics. Identifying and constructing these operators is computationally very demanding. To partly overcome this difficulty, we focus on a quantum mechanical subsector corresponding to the BMN matrix model <cit.>. Among the full set of the BPS letters (elementary fields dressed by derivatives), this subsector keeps only three scalars ϕ̅^m (m=1,2,3), three fermions ψ_m+ and a component f_++ of the field strength, without any derivatives. The computations are easier in this sector, though we should further pursue efficient methods. As we shall explain in section 2.1, this sector exhibits a black hole like entropy growth at large N. Throughout this paper, we shall only study cohomologies within this sector. In the rest of this section, we sketch the main technical/conceptual advances in this paper.
Completely finding all cohomologies is a very difficult calculation when the charges or N are large. So far the comprehensive studies have been made only in <cit.> for N=2,3,4, up to not-too-large charges. At fixed charges, the procedure consists of the following steps:

* Find all Q-closed operators.

* Find a subset of operators found in step 1 which are not Q-exact.

* Find a subset of cohomologies in step 2 which are not of multi-graviton type.

Each step demands substantial computational resources. One should also scan through all the charges by repeating these procedures. In this paper, we establish streamlined strategies, partly based on <cit.>, which allow faster detection and construction of new cohomologies. Before going through the 3 steps above, we compute the index over the non-graviton cohomologies at finite N. With this index computed, one can easily identify the charge sectors that contain non-graviton states and construct their cohomologies. Of course one could miss pairs of non-graviton states which cancel in the index: we give up finding all cohomologies and search for those captured by the index. Exactly computing the full index in the BMN sector is easy at not too large N. More difficult is to count the finite N gravitons to be subtracted, taking into account the trace relations of finite N matrices. As we review in section 3, counting finite N gravitons reduces to counting a certain class of polynomials, whose generators are subject to certain relations. In principle, these relations can be systematically studied using the Gröbner basis. In practice, finding the Gröbner basis can be computationally very difficult. We used a hybrid method of the Gröbner basis (in a subsector in which this basis can be found easily) and a more brute-force counting of independent polynomials by computer, order by order in the charges.

Even after identifying the charge sectors to study from the index, steps 1, 2, and 3 are still quite hard. We have established the following procedures which make steps 1 and 2 somewhat easier, and trivialize step 3 within our setup. See p.23 for how our implementation trivializes step 3. As for step 1, we present a class of ansätze for the Q-closed operators. In order for the final cohomology not to be of graviton type, Q acting on the operator should vanish by trace relations. Systematically finding useful trace relations is difficult. We find a method of constructing a class of operators which become Q-closed only after imposing trace relations. Our ansatz uses the trace relations of the graviton cohomologies that we detected while computing the index. Trace relations of gravitons mean that certain polynomials of single-graviton cohomologies are Q-exact. These relations satisfy `relations of relations', i.e. certain linear combinations of trace relations (with the coefficients being graviton cohomologies) are identically zero. In other words, relations of relations are linear combinations of Q-exact terms which vanish. So they provide operators which become Q-closed thanks to the trace relations, which are our ansätze for the non-graviton cohomologies. See section 4.1 for more details.
Some Q-closed operators mentioned in the previous paragraph are not Q-exact, providing new cohomologies, while others are Q-exact. Determining whether our ansätze are Q-exact or not, step 2, is very hard. We developed a numerics-assisted approach to make this step affordable on the computer, by ordering the Grassmann variables and then inserting many random integers into the matrix elements. See section 4.2 for the details.

The new cohomology thus constructed, and often just the index for the non-graviton cohomologies, provides insights into the structures of the BPS spectrum. First, for SU(3) and SU(4), we address the threshold (lightest) non-graviton cohomologies as seen by the index. Let us denote the SO(6) R-charges and spins by R_1,R_2,R_3 and J_1,J_2, respectively. For SU(3), the index detects one fermionic threshold cohomology at R_1=R_2=R_3, J_1=J_2 (≡ J) and j≡ 2(R_1+R_2+R_3)+6J=24. (j is a combination which can grade the index.) For SU(4), the index detects six fermionic threshold cohomologies at J_1=J_2≡ J and j=28, in the [2,0] representation of SU(3)⊂ SO(6). Combining with the results known in the SU(2) theory <cit.>, the threshold levels for j at N=2,3,4 form a non-decreasing sequence 24,24,28. These are apparent thresholds for N=3,4. They would be true thresholds if there are no lower cohomologies canceling in the index or lying outside the BMN sector. For SU(3), we construct the threshold cohomology. The threshold cohomology is fermionic and has charges R_1=R_2=R_3=5/2 (≡ R) and J=3/2, thus the scaling dimension E=3R+2J=21/2. This scaling dimension is larger than E=19/2 of the SU(2) threshold operator at j=24. So the energy threshold for non-graviton operators increases with N. Such an increase is also expected at large N, because the graviton description should be reliable up to E=N.

The non-graviton index itself reveals various organized patterns which allow us to guess some underlying structures of the spectrum. For SU(4), we barely managed to detect the threshold level and did not go to higher orders to find richer structures. However, we computed the SU(3) non-graviton index up to j≤ 54, much higher than the threshold j=24, and observed various intriguing structures. First of all, like the SU(2) non-graviton index, we find organized towers of states. In the SU(2) BMN case, we found just one tower of states starting from the threshold operator. However, for SU(3), there is more than one tower, starting at different levels. We also find that some towers are related to others by developing hierarchies. We constrain their charge structures and compare them with the expected asymptotic entropy of the BMN sector. Possibly, these towers will continue to exist at larger N's and may be related to a particular subset of the giant graviton towers <cit.>.
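While the actual test with Grassmann variables is described in section 4.2, the flavor of such a numerics-assisted test can be sketched with ordinary commuting polynomials: whether a target operator lies in the linear span of a set of (e.g. Q-exact) operators is a linear-algebra question, decidable with high probability by evaluating all polynomials at random integer points and comparing matrix ranks. The function name and the toy setup below are ours, for illustration only.

```python
import random
import sympy as sp

def in_span_probably(target, basis, variables, seed=0):
    """Randomized membership test: target is in the linear span of basis
    iff appending its row of values at random integer points does not
    increase the rank (up to unlucky evaluation points)."""
    random.seed(seed)
    pts = [[random.randint(-10**6, 10**6) for _ in variables]
           for _ in range(len(basis) + 8)]
    row = lambda f: [f.subs(dict(zip(variables, p))) for p in pts]
    A = sp.Matrix([row(f) for f in basis])
    B = A.col_join(sp.Matrix([row(target)]))
    return B.rank() == A.rank()

x, y = sp.symbols('x y')
print(in_span_probably((x + y)**2, [x**2, y**2, x*y], (x, y)))   # True
print(in_span_probably(x**3, [x**2, y**2, x*y], (x, y)))         # False
```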
Another important aspect of the SU(3) non-graviton index is the partial no-hair behavior. Namely, the index does not see many product cohomologies made of core black hole cohomologies multiplied by graviton cohomologies. In SU(2), many of these product states not seen by the index were shown to be Q-exact, thus absent in the BPS spectrum <cit.>. So the black hole cohomologies do not want to be dressed by certain gravitons, reminiscent of the black hole no-hair theorem. It was also pointed out, both from the SU(2) QFT and the large N gravity dual, that the BPS no-hair theorem holds only partially. We find an empirical signal that the SU(3) no-hair theorem may also be partial, in that assuming particular graviton hairs makes the remaining core spectrum better organized. See section 3.1 for the details.

The remaining part of this paper is organized as follows. In section 2, we explain the cohomology problem, the BMN sector and its entropy, and the graviton cohomologies. In section 3, we present our strategies for computing the non-graviton index and present the results for N=3,4. Qualitative discussions are also made for the SU(3) index. In section 4, we present the new SU(3) cohomology at the threshold level. In section 5, we discuss future directions. In the appendix, we list the graviton trace relations for the SU(3) theory.

§ THE COHOMOLOGY PROBLEM

The 𝒩=4 Yang-Mills theory with SU(N) gauge group has six real scalars Φ_ij=-Φ_ji (subject to Φ^ij∼1/2 ϵ^ijkl Φ_kl), fermions Ψ_iα, Ψ^i_α̇ and the gauge field A_μ∼ A_αβ̇, all in the SU(N) adjoint representation. (Here, i,j,⋯=1,⋯,4; α=±; β̇=±̇; μ=1,⋯,4.) For later convenience, we arrange these fields into 𝒩=1 supermultiplets as follows, with manifest covariance only for the SU(3)⊂ SU(4) part of the R-symmetry,

vector multiplet : A_αβ̇ , λ_α=Ψ_4α , λ̅_α̇=Ψ^4_α̇ ,
3 chiral multiplets : ϕ_m=Φ_4m , ϕ̅^m=Φ^4m , ψ_mα=-iΨ_mα , ψ̅^m_α̇=iΨ^m_α̇ ,

where m=1,2,3. In this paper, we consider the Euclidean CFT on ℝ^4. This is related to the Lorentzian CFT on S^3×ℝ by radial quantization, which regards the radius of ℝ^4 as the exponential of the Euclidean time τ and makes a Wick rotation τ=it. Here we note the operator-state map, in which the local operators at the origin of ℝ^4 map to the states propagating in S^3×ℝ. The CFT carries a marginal coupling constant g_YM. The CFT is invariant under 32 supersymmetries, represented by the 16 Poincaré supercharges Q^i_α, Q_iα̇ and the 16 conformal supercharges S_iα, S^i_α̇. In the radially quantized theory, S's are Hermitian conjugates of Q's: S_i^α=(Q^i_α)^†, S^iα̇=(Q_iα̇)^†. Together with other symmetry generators, these supercharges form the PSU(2,2|4) superconformal algebra. The most important part of the algebra for this paper is <cit.>

{Q^i_α,S_j^β}=1/2 H δ^i_j δ_α^β+ R^i_j δ_α^β+J_α^β δ^i_j ,

where H is the dilatation operator (or the Hamiltonian on S^3×ℝ multiplied by the radius of S^3), R^i_j are the SU(4) R-charges, and J_α^β are the left SU(2)⊂ SO(4) angular momenta. The BPS states/operators of our interest preserve two Hermitian supercharges, Q≡ Q^4_- and Q^†≡ S_4^-, and are thus called 1/16-BPS states/operators. These two supercharges satisfy Q^2=0, (Q^†)^2=0, and from (<ref>) one obtains

2{Q,Q^†}=H-(R_1+R_2+R_3+J_1+J_2) .

On the right hand side, we expressed 2R^4_4=-R_1-R_2-R_3 and 2J_-^-=-J_1-J_2 in terms of the five charges which rotate the mutually orthogonal 2-planes of ℝ^6⊃ S^5 and ℝ^4⊃ S^3, respectively, all normalized to have ±1/2 values for spinors. The BPS operators of our interest saturate the bound E≥ R_1+R_2+R_3+J_1+J_2.
The charges R_I, J_i on the right hand side are part of the non-Abelian charges and cannot depend on the coupling g_YM. However, E is in general a function of g_YM, so that a BPS state may become anomalous as g_YM changes. The gauge-invariant BPS operators are easily identified (and counted) in the free theory limit, g_YM→ 0. They are given by any gauge-invariant operators made of the following fields

ϕ̅^m , ψ_m+ , f_++ , λ̅_α̇ ,

as well as the derivatives ∂_+α̇ acting on these fields, subject to the free equation of motion constraint ∂_+α̇ λ̅^α̇=0. See, for instance, <cit.> for a more detailed explanation. We want to study how many of these operators remain BPS at the 1-loop level. The dilatation operator H(g_YM) can be expanded in g_YM^2, H(g_YM)=∑_L=0^∞ g_YM^2L H_(L). At least in perturbation theory, this operator can be diagonalized within the subspace of free BPS operators.[More precisely, for the gauge invariance in the interacting theory, the subsector is defined at g_YM≠ 0 by promoting the derivatives ∂_+α̇ appearing in the operators to the covariant derivatives D_+α̇≡∂_+α̇-i[A_+α̇, ].] Within this subspace, H_(0) is zero. We want to find the subset of free BPS operators which are annihilated by H_(1). Within the free BPS sector, one finds that

{Q(g_YM),Q^†(g_YM)}=H(g_YM)-∑_I R_I-∑_i J_i= ∑_L=1^∞ g_YM^2L H_(L) .

Q and Q^† also depend on g_YM. Since the free BPS fields are annihilated by them at the leading 𝒪(g_YM^0) order, their coupling expansions start from the 𝒪(g_YM^1) `half-loop' order. Therefore, the leading 1-loop Hamiltonian H_(1) in (<ref>) is given by the anticommutator of Q and Q^† at the half-loop order. In particular, Q_(1/2) at 𝒪(g_YM^1) is precisely the supercharge of the classical interacting field theory. So the 1-loop BPS operators should be annihilated by both Q and Q^† at the classical half-loop order. Due to the nilpotency of Q and Q^†, the operators annihilated by Q and Q^† are in 1-1 map with the cohomologies of Q, which are Q-closed operators with identifications of operators which differ by Q-exact operators. We shall construct and study the representatives of the cohomologies of the classical half-loop supercharge Q, which map to the 1-loop BPS operators. The actions of the classical (half-loop) Q on the free BPS fields are given by

Qϕ̅^m=0 , Qλ̅_α̇=0 , Qψ_m+=-i/2 ϵ_mnp[ϕ̅^n,ϕ̅^p] , Qf=-i[ϕ̅^m,ψ_m+] , [Q,D_+α̇]=-i[λ̅_α̇, } ,

where we absorbed the g_YM factors on the right hand sides into the normalization of the fields. It is well known that there are fewer BPS states at the 1-loop level than in the free theory. It has been conjectured (for instance, explicitly in <cit.>) that the 1-loop BPS states remain BPS at general non-zero coupling. Some perturbative evidence for this conjecture was discussed in <cit.>. Various discussions in this paper will assume this conjecture.

In our studies, the index over these cohomologies will be important. It is defined by

Z(Δ_I,ω_i)= Tr[(-1)^F e^-Δ_1 R_1-Δ_2 R_2-Δ_3 R_3 -ω_1 J_1-ω_2 J_2]

with the constraint Δ_1+Δ_2+Δ_3=ω_1+ω_2 (mod 4π i). The trace is taken over the BPS states, or equivalently over the cohomologies. We may regard it as the index over the cohomologies of Q given by (<ref>). A matrix integral formula for this index is given in <cit.>.

§.§ The BMN sector

The cohomology problem has a consistent truncation to the so-called BMN matrix model. This is the sector in which only the following three letters are used to construct the operators,

ϕ̅^m , ψ_m+ , f_++ ,

without any derivatives acting on them.
In this paper, to simplify the calculations, we shall study cohomologies only in this sector. To simplify notations, from now on we denote them by

(ϕ̅^m,ψ_m+,f_++) → (ϕ^m,ψ_m,f) .

This truncation is consistent only in the classical interacting field theory, i.e. at the level of the 1-loop BPS states. The truncation forbids all the modes in the classical field theory with J_1≠ J_2. In general quantum theories, the letters carrying nonzero J_1-J_2 charges may mix with the fields in the BMN sector. So the 1-loop BPS operators in the BMN sector may mix with non-BMN operators at higher orders in g_YM^2. But if the conjecture that the spectrum of 1-loop BPS states is isomorphic to the BPS states at general coupling is true, the spectrum computed in the BMN sector will remain unchanged. Since the truncation is made for the modes of elementary fields, the same truncation can be implemented to compute the index over the cohomologies in the BMN sector,

Z(Δ_I)= Tr_BMN[(-1)^F e^-∑_I=1^3 Δ_I(R_I+J)] ,

where J=(J_1+J_2)/2. Its matrix integral expression is given by <cit.>

Z(Δ_I) = 1/N! ∫_0^2π d^Nα/(2π)^N [∏_a≠ b(1-e^iα_ab) ∏_a,b=1^N ∏_I<J (1-e^-Δ_I-Δ_J e^iα_ab)] / [∏_a,b=1^N (1-e^-(Δ_1+Δ_2+Δ_3)e^iα_ab) ∏_I=1^3(1-e^-Δ_I e^iα_ab)] × [(1-e^-(Δ_1+Δ_2+Δ_3)) ∏_I=1^3(1-e^-Δ_I)] / [∏_I<J(1-e^-Δ_I-Δ_J)] ,

where the second factor (the inverse of the U(1) index) is multiplied to make it an SU(N) index rather than U(N). This integral can be computed either exactly using the residue sum or in a series expansion in t, defined by (e^-Δ_1,e^-Δ_2,e^-Δ_3)=t^2(x,y^-1,x^-1y).
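As an illustration, the following sympy sketch evaluates this matrix integral for N=2 at the unrefined point x=y=1, where the letter weights become t^2 (ϕ^I), t^4 (ψ_I) and t^6 (f): the angle integrals reduce to extracting the z^0 Laurent coefficient, z=e^{iα_12}. This is a slow pedagogical implementation, not the residue-sum method used for our actual computations; at low orders (below the threshold j=24) the output should simply reproduce the graviton index.

```python
import sympy as sp

t, z = sp.symbols('t z')
K = 12                                  # expansion order in t

def block(w):                           # one adjoint letter of weight w,
    return (1 - w)**2 * (1 - w*z) * (1 - w/z)   # over z_ab in {1, 1, z, 1/z}

haar = (1 - z) * (1 - 1/z)
su = (1 - t**6) * (1 - t**2)**3 / (1 - t**4)**3   # removes the U(1) part

integrand = haar * block(t**4)**3 / (block(t**2)**3 * block(t**6)) * su
ser = sp.expand(sp.series(integrand, t, 0, K + 1).removeO())

index = sum(sp.Rational(1, 2)                     # 1/N! for N=2
            * sp.expand(ser.coeff(t, k)).coeff(z, 0) * t**k
            for k in range(K + 1))
print(sp.expand(index))                 # unrefined SU(2) BMN index to O(t^K)
```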
The entropy of BMN cohomologies will be smaller than the entropy of all cohomologies. Despite this, the large N BMN entropy will still exhibit a black hole like growth. Taking j (schematically) to be the charges, the black hole like entropy growth is

S(j,N)=N^2 f(j/N^2) ,

for a function f(x) which does not explicitly depend on N, where N≫ 1, j≫ 1 and the ratio ϵ≡ j/N^2 does not scale in N. Roughly, the scaled charge parameter ϵ measures the size of the black hole in the AdS unit. In this subsection, we show that the BMN entropy scales like (<ref>) when ϵ is parametrically small (but not scaling in N). We expect the same to be true at general ϵ, although we do not prove it.

We first compute the large N entropy in the `small black hole' regime: j≫ 1, N≫ 1 and ϵ≡ j/N^2 fixed and much smaller than 1 (but not scaling in N).[The term `small black hole' has at least three different meanings in the literature. It sometimes denotes string scale black holes, for which the 2-derivative gravity description breaks down near the horizon. In our example, since ϵ does not scale in N, the 2-derivative gravity is reliable everywhere. Also, small black holes sometimes mean AdS black holes with negative specific heat or susceptibility. What we call `small black holes' belong to this class, but are more specific. Our notion is precisely the same as <cit.>.] This regime is reached by taking all Δ_I's to be small. The approximate large N calculation of the entropy can be done by following all the calculations in section 5.3 of <cit.> with minor changes in the setup. In particular, the calculations from (5.88) to (5.91) there can be repeated by simply replacing all 2-(-e^γ)^n-(-e^γ)^-n by 1 (which are the denominators of the letter indices in the two setups) and remembering that β_I there are Δ_I/2 here. The resulting eigenvalue distribution is along the interval α∈ (-π,π) on the real axis (the gap closes in the small black hole limit), with the distribution function

ρ(α)=3/(4π^3) (π^2-α^2) .

The free energy log Z of this saddle point is given by

log Z=-(3N^2/π^2) Δ_1Δ_2Δ_3 .

(For small black holes with negative susceptibility, the grand canonical index is not well defined. Whenever we address log Z, a Laplace transformation to the micro-canonical ensemble is assumed.) The entropy at given charges q_I≡ R_I+J is given by extremizing

S_BMN(q_I;Δ_I)=log Z+∑_I q_IΔ_I

in the Δ_I's, which yields

S_BMN(q_I)=2π√(q_1q_2q_3/3N^2) .

This expression is valid when q_I=N^2ϵ_I with ϵ_I≪ 1. The entropy ∼ N^2√(ϵ_1ϵ_2ϵ_3)∝ N^2 exhibits a black hole like scaling (<ref>). S_BMN is smaller than the full entropy S(q_I)=2π√(2q_1q_2q_3/N^2) in the small black hole regime <cit.> by a factor of 1/√(6). This is natural since the BMN truncation loses cohomologies. However, the fact that S_BMN(q_I) scales like S(q_I) implies that the truncation provides a good simplified model for black holes, at least for ϵ_I≪ 1.

At large charges, we have not computed the asymptotic entropies at large N and q_I. Instead, we have computed the large charge entropies at N=2,3,4. This was done by first computing the exact index Z(t) by a residue sum, and then extracting the coefficient Ω_j of the expansion Z(t)=∑_j Ω_j t^j for large j=2(q_1+q_2+q_3). This calculation was done both by a saddle point approximation at large j, and also by expanding Z(t) in t by computer until very high order. The asymptotic BMN entropies S_BMN(j,N) at very large j≫ 1 are given by

S_BMN(j,2)∼ 3log j , S_BMN(j,3)∼ 2log j , S_BMN(j,4)∼ 8 log j .

Even at small N's, these are much slower growths than the full entropy, which is <cit.>

S∝ (N^2-1)^1/3 j^2/3 .

The large discrepancy S_BMN/S≪ 1 is natural since truncating a 4d QFT to quantum mechanics loses almost all cohomologies at higher energies. This is because the dominant part of the entropy (<ref>) is supposed to come from the infinitely many letters dressed by derivatives. If n unconstrained derivatives can appear in the operators, one expects an entropy S∝ j^n/(n+1). The case with n=2 BPS derivatives in 4d indices yields (<ref>), while the quantum mechanical case with n=0 replaces j^0/(0+1) by log j. So we expect the BMN subsector to give us good lessons on the small black holes for sufficiently large N. We hope that the studies at finite N=3,4 in this paper will start to unveil important structures which will continue at larger N's.
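The Legendre transform leading to S_BMN(q_I) is mechanical and can be checked symbolically; the short sympy script below extremizes log Z+∑_I q_IΔ_I with log Z=-(3N^2/π^2)Δ_1Δ_2Δ_3 and verifies that the extremal value agrees with 2π√(q_1q_2q_3/3N^2).

```python
import sympy as sp

# Symbolic Legendre transform of the small-black-hole free energy above.
D1, D2, D3, q1, q2, q3, N = sp.symbols('D1 D2 D3 q1 q2 q3 N', positive=True)
S = -3*N**2/sp.pi**2 * D1*D2*D3 + q1*D1 + q2*D2 + q3*D3

sols = sp.solve([sp.diff(S, D) for D in (D1, D2, D3)], [D1, D2, D3], dict=True)
target = 2*sp.pi*sp.sqrt(q1*q2*q3/(3*N**2))
assert any(sp.simplify(S.subs(s) - target) == 0 for s in sols)
print("extremal entropy matches 2*pi*sqrt(q1*q2*q3/(3*N^2))")
```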
§.§ The graviton cohomologies

Some cohomologies are very well known: the multi-graviton cohomologies. We want to exclude them from our discussions. In this subsection, we review the notion of graviton cohomologies, especially at finite N, and also explain how to list and count them. The multi-graviton cohomologies are `defined' to be the polynomials of single-trace cohomologies. (This definition naturally yields the familiar large N cohomologies for the supergravitons.) Single-trace cohomologies are completely understood <cit.>, as we shall review in a moment. Single-trace cohomologies are nontrivial cohomologies at arbitrary N by definition, since no trace relations apply within the single-trace sector. Polynomials of these single-trace cohomologies define the multi-trace cohomologies. Some polynomials may be trivial, i.e. Q-exact at finite N. However, they are Q-closed at arbitrary N without using any trace relations. This will be in contrast to the black hole cohomologies, which should become Q-closed only after applying trace relations at particular N. When N is larger than the energy, the multi-graviton operators defined above are all nontrivial cohomologies, since no trace relations can be applied to make them Q-exact. So in this setup, the `graviton cohomologies' defined abstractly in the previous paragraph actually map to the familiar 1/16-BPS graviton states in AdS_5× S^5. The trace number of the operator is regarded as the particle number. At finite N, all the multi-trace operators mentioned in the previous paragraphs are still Q-closed. However, some of their linear combinations may be zero or Q-exact when their energies are larger than N, due to the trace relations. So the independent graviton cohomologies reduce at finite N. Such reductions of states are a well known finite N effect in the gravity dual. It is called the stringy exclusion principle <cit.>, which happens because gravitons polarize into D-brane giant gravitons <cit.>. The reduction/exclusion mechanism is the same for any N in QFT, making it natural to call them `finite N gravitons' at general finite N.

Now we concretely explain the list of the graviton cohomologies. One starts by listing the single-trace graviton cohomologies. These are completely found and collected into supermultiplets. The relevant algebra for these multiplets is the PSU(1,2|3) subset of the superconformal symmetry PSU(2,2|4) that commutes with Q, Q^†. The multiplets for single-trace graviton cohomologies are called S_n, with n=2,3,⋯ <cit.>. S_n is obtained by acting the Poincaré supercharges Q^m_+, Q_mα̇ and the translations P_+α̇ in PSU(1,2|3) on the following primary operators

u^i_1i_2⋯ i_n= tr(ϕ̅^(i_1 ϕ̅^i_2 ⋯ ϕ̅^i_n)) .

See <cit.> for more details. At large N, multiplying the operators in the S_n's yields independent multi-trace cohomologies. At finite N, trace relations reduce the independent single-trace and multi-trace operators. Following <cit.>, we first identify the dependent single-trace operators as follows. Using the Cayley-Hamilton identity, one can show that all single-trace operators in S_n≥ N+1 can be expressed as polynomials of operators in S_n≤ N <cit.>. So it suffices to use only the operators in S_n≤ N to generate graviton cohomologies. The remaining single-trace generators in S_n≤ N are not independent when we multiply them. In other words, there are further trace relations for gravitons within S_n≤ N. These last trace relations are not systematically understood, to the best of our knowledge.

To simplify the discussions, let us consider the BMN sector only from now on. The subset of PSU(2,2|4) that acts within the BMN sector is SU(2|4). The subset SU(1|3)⊂ SU(2|4) commutes with Q, Q^† and generates the supermultiplets of BMN cohomologies. In each S_n, there is a finite number of single-trace generators in the BMN sector. They are given by

(u_n)^i_1⋯ i_n = tr(ϕ^(i_1⋯ϕ^i_n))
(v_n)^i_1⋯ i_n-1_j = tr(ϕ^(i_1⋯ϕ^i_n-1)ψ_j) - `trace'
(w_n)^i_1⋯ i_n-1 = tr(ϕ^(i_1⋯ϕ^i_n-1)f + 1/2 ϵ^jk(i_p ∑_p=1^n-1 ϕ^i_1⋯ϕ^i_p-1 ψ_j ϕ^i_p+1⋯ϕ^i_n-1) ψ_k) .

Here, `trace' denotes the terms to be subtracted to ensure that the contractions of the upper/lower SU(3) indices are zero. The BMN multi-graviton cohomologies are polynomials of u_n, v_n, w_n. These polynomials are subject to trace relations. These trace relations hold up to Q-exact terms.[In principle there might be relations which hold without any Q-exact terms.
In practice, with extensive studies of the SU(2) and SU(3) graviton operators in the BMN sector, all trace relations of this sort that we found have nontrivial Q-exact terms.] For instance, the lowest trace relations for N=2 are

R_ij≡ ϵ_ikm ϵ_jln (u_2)^kl(u_2)^mn = Q[-i ϵ_a_1a_2(i tr(ψ_j) ϕ^a_1 ϕ^a_2)] .

More concretely, some components of these relations are

tr(X^2) tr(Y^2)-[ tr(XY)]^2∼ 0 , tr(XY) tr(XZ)- tr(X^2) tr(YZ)∼ 0 ,

where ∼ holds up to Q-exact terms. Such Q-exact combinations are zero in cohomology. Of course, multiplying gravitons into such relations yields further relations. Trace relations cannot be seen if one does not know that the `meson' or `glueball' operators u_n, v_n, w_n are made of the `gluons' ϕ, ψ, f. To enumerate graviton cohomologies without overcounting, we first consider the Fock space made by the operators {u_n,v_n,w_n} with n=2,⋯,N and then take care of the trace relations to eliminate the dependent states. It is important to find all fundamental trace relations of the polynomials of u_n, v_n, w_n, which cannot be decomposed into linear combinations of smaller relations. Let us denote by R_a({u_n,v_n,w_n}) the fundamental trace relations, with a being the label. Non-fundamental trace relations are obtained by linear combinations of the R_a's,

∑_a f_a({u_n,v_n,w_n}) R_a({u_n,v_n,w_n}) .

In general, (<ref>) is nonzero and Q-exact. However, for some choices of the f_a's, the combination (<ref>) may be exactly zero. If (<ref>) exactly vanishes, this yields a `relation of relations.' In terms of the mesonic variables u_n, v_n, w_n, they are trivial expressions, meaning that various terms just cancel to zero. They just represent the ways in which the fundamental relations R_a can be redundant at higher orders. For example, consider the relations R_ij of (<ref>) in the SU(2) gauge theory. Some relations of these relations are given by

u^ik R_jk(u_2)-1/3 δ^i_j u^kl R_kl(u_2)=0 ,

in the [1,1] representation. For instance, one can immediately see for i=1, j=2 that

u^1i R_2i=u^11[u^23u^13-u^12u^33] +u^12[u^33u^11-(u^13)^2]+u^13[u^12u^13-u^11u^23]=0 .

This is a trivial identity if expanded in mesons. u^11R_21 and -u^12R_22-u^13R_23 represent the same constraint u^11(u^23u^13-u^12u^33)=Q[⋯], implying that the R_ij's are not independent. Interestingly, the trace relations described so far will be used in section 4 to construct the ansatz for the non-graviton cohomologies. In the meantime, we shall exploit a more practical way of enumerating the graviton cohomologies, as we explain in section 3.
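The vanishing of the [1,1] combination above holds identically for any symmetric u^{ij}, and is easy to verify by computer; the following sympy check expands u^{ik}R_{jk} - (1/3)δ^i_j u^{kl}R_{kl} for all i,j.

```python
import sympy as sp
from sympy import LeviCivita

# Check that the relation of relations above vanishes identically in the
# mesonic variables, for a generic symmetric u^{ij}.
u = {}
for i in range(3):
    for j in range(3):
        a, b = sorted((i, j))
        u[(i, j)] = sp.Symbol(f'u{a+1}{b+1}')

def R(i, j):   # R_ij = eps_ikm eps_jln u^{kl} u^{mn}
    return sum(LeviCivita(i, k, m) * LeviCivita(j, l, n)
               * u[(k, l)] * u[(m, n)]
               for k in range(3) for l in range(3)
               for m in range(3) for n in range(3))

trace = sum(u[(k, l)] * R(k, l) for k in range(3) for l in range(3))
for i in range(3):
    for j in range(3):
        expr = sum(u[(i, k)] * R(j, k) for k in range(3))
        if i == j:
            expr -= sp.Rational(1, 3) * trace
        assert sp.expand(expr) == 0
print("relation of relations verified: the [1,1] combination vanishes")
```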
§ THE INDEX

In this section, we explain how to enumerate the finite N graviton cohomologies described in section 2.2. Then it is straightforward to compute the index over graviton cohomologies, and subtract it from the full index to obtain the index over non-graviton cohomologies. The results for SU(3) and SU(4) are presented in the respective subsections. Recall that graviton operators in the BMN sector are the polynomials of the mesons (<ref>). We wish to enumerate linearly independent operators among these, i.e. we wish to mod out by the linear relations between them. There are two main strategies that we exploit to ease this computation: the eigenvalue counting and the Gröbner basis.

Let us explain the first idea, the eigenvalue counting. We first review how the multi-gravitons made of the chiral primaries u_n of (<ref>) are enumerated. Based on rather physical arguments, <cit.> proposed to count them by taking all three scalars ϕ^m to be diagonal matrices.[The argument is often dubbed `quantizing the moduli space' of the QFT. For exact quantum states, it relies on the protection of the moduli space against quantum corrections. At the level of classical cohomologies, its proof should be elementary, although we do not pursue it here.] With this restriction, the problem of enumerating gauge-invariant operators reduces to enumerating Weyl-invariant polynomials of the eigenvalues.

Now our interest is in counting the finite N graviton cohomologies involving all the descendants (<ref>) in S_n, not only the chiral primaries u_n. The descendants are obtained from u_n by acting the supercharges in PSU(1,2|3). Since the single-graviton states belong to absolutely protected multiplets S_n, and since their multiplications trivially remain in cohomology both for free and 1-loop calculations, we can generate the descendants by acting the supercharges of the strictly free theory <cit.>. The actions of free supercharges are linear, so that diagonal ϕ^m's transform to diagonal ψ_m and f. (In the BMN sector, the supercharges Q^m_+ in SU(1|3) act linearly even in the classical interacting theory.) Therefore, the ψ_m and f that appear in descendants can be taken to be diagonal matrices as well, for the purpose of enumerating graviton operators. So the counting of graviton operators is reduced to the counting of certain polynomials of the eigenvalues. We have N-1 eigenvalues for each field ϕ^m, ψ_m and f, so in total 7(N-1) variables are needed to describe graviton operators in the BMN sector. Let us denote these eigenvalues collectively as λ_I. Let us also denote the `mesonic generators' {u_n,v_n,w_n} (<ref>) with n=2,⋯,N collectively as g_i's. These are now regarded as polynomials g_i(λ_I) of the eigenvalues λ_I. Then, we want to count the polynomials p(g_i) of the mesons g_i, which can be written as polynomials p(g_i(λ_I)) of the eigenvalues λ_I. These polynomials are not all independent, because certain polynomials p(g_i) of the g_i's may be zero when written as polynomials of the λ_I. Such polynomials can be thought of as constraints on the space of polynomials. These are remnants of the trace relations of the N× N matrices. Had we been keeping all the N× N matrix elements, a trace relation would have been zero up to a Q-exact term. Since the action of Q yields a commutator, the Q-exact term vanishes when the fields are diagonal. So general trace relations up to Q-exact terms reduce to exact polynomial constraints.

Counting constrained polynomials is a classic mathematical problem with a known solution. This brings us to the second strategy that we exploit: the Gröbner basis. See e.g. <cit.>. Let us briefly explain a flavor of its properties and how it is used to solve the enumeration problem. Recall that the multi-graviton operators are given by the set of all polynomials p(g_i) of g_i's. However, this set is overcomplete and therefore not suitable for the counting purpose, because of the constraints. That is, some of the polynomials are zero and consequently some of the polynomials are equivalent to each other. We want to better understand the constraints, i.e. polynomials of g_i that are zero. The constraints appear because each meson g_i is not an independent variable but is instead made of the gluons λ_I, i.e. g_i=g_i(λ_I), where the right hand side is the polynomial of λ_I that corresponds to the meson g_i. All constraints are derived from the fact that

G_i(g_i, λ_I) ≡ g_i-g_i(λ_I) = 0 ,

for each meson labeled by i.
Therefore, the set of all polynomials of the mesons g_i and the eigenvalues λ_I that are zero (also known as the ideal) is generated by (<ref>), in the sense that any element of this set can be written as

∑_i q_i(g_i,λ_I) G_i(g_i,λ_I) ,

where q_i(g_i,λ_I) are polynomials of g_i and λ_I. If we restrict to elements of this `set of zeroes' that only involve g_i but not λ_I, those will be precisely the constraints that mod out the set of all polynomials p(g_i).

Although (<ref>) is the most intuitive basis that generates the set of zeroes like (<ref>), it is often not the most convenient basis. The same set of zeroes can be generated by many different choices of the basis, possibly with different numbers of generators. The Gröbner basis is one of these choices, with the following special property. Let {G_a(g_i,λ_I)} be a basis of the set of zero polynomials of (g_i,λ_I). Then, for any polynomial p(g_i,λ_I), suppose one tries to `divide' this polynomial by the basis {G_a(g_i,λ_I)}. This is a process of writing the polynomial as

p(g_i,λ_I) =∑_a q_a(g_i,λ_I) G_a(g_i,λ_I)+r(g_i,λ_I) ,

where r(g_i,λ_I) can no longer be `divided by' {G_a(g_i,λ_I)}, which can be well-defined by setting an ordering scheme between the variables and their monomials. Naturally, r(g_i,λ_I) can be thought of as the remainder of the division. In general, there can be multiple ways — with different q_a and r — to write p(g_i,λ_I) as (<ref>). The special property of the Gröbner basis is that if {G_a(g_i,λ_I)} is the Gröbner basis of the set of zeroes, then the remainder r(g_i,λ_I) is unique for each given p(g_i,λ_I). Note that since {G_a(g_i,λ_I)} generates the set of zeroes, (<ref>) implies that the polynomial p(g_i,λ_I) is equivalent to its remainder r(g_i,λ_I). It follows that the set of all polynomials p(g_i,λ_I) is identical to the set of all possible remainders r(g_i,λ_I) under division by the Gröbner basis. However, unlike in the set of all polynomials p(g_i,λ_I), there are no polynomials in the set of all remainders that are equivalent due to the constraints, because otherwise one of them could have been divided once more to yield the other as the remainder. Therefore, the set of remainders can be used to count the number of independent polynomials of (g_i,λ_I) under constraints.

There is a canonical procedure to find the Gröbner basis of the set of zeroes given one choice of basis (<ref>), known as Buchberger's algorithm. Many computer algebra systems implement this algorithm or its improved versions. The Gröbner basis depends wildly on the ordering scheme between variables and monomials, so it is important to choose a nice ordering scheme which eases the calculations. This ordering is difficult to know in advance, so some amount of trial and error is involved in finding the Gröbner basis. By setting an appropriate ordering scheme, it is possible to consistently truncate the Gröbner basis for zero polynomials of (g_i,λ_I) into that for zero polynomials of g_i only. Then, the set of all possible remainders r(g_i) under division by the truncated Gröbner basis forms a faithful — complete but not overcomplete — set of all independent polynomials of g_i, and therefore the set of all independent graviton operators. Moreover, one can easily construct a monomial basis for this set of remainders, from which it is straightforward to compute both the partition function and the index over graviton operators.
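As a toy illustration of this elimination, the sympy snippet below recovers the trace relations among the N=2 chiral primaries: on SU(2) eigenvalues, ϕ^m = diag(λ_m, -λ_m) gives u^{mn} = tr(ϕ^m ϕ^n) = 2λ_mλ_n, and a lex Gröbner basis with the eigenvalues ordered first yields, after dropping the generators containing eigenvalues, the relations involving only the u's — the 2×2 minors realizing the components of (<ref>).

```python
import sympy as sp

l1, l2, l3 = sp.symbols('l1 l2 l3')               # scalar eigenvalues, SU(2)
u11, u12, u13, u22, u23, u33 = sp.symbols('u11 u12 u13 u22 u23 u33')

# The ideal G_i = g_i - g_i(lambda) for the mesons u^{mn} = 2 l_m l_n:
gens = [u11 - 2*l1*l1, u12 - 2*l1*l2, u13 - 2*l1*l3,
        u22 - 2*l2*l2, u23 - 2*l2*l3, u33 - 2*l3*l3]

# lex order with eigenvalues first implements the elimination/truncation:
G = sp.groebner(gens, l1, l2, l3, u11, u12, u13, u22, u23, u33, order='lex')
relations = [g for g in G.exprs if not g.free_symbols & {l1, l2, l3}]
print(relations)   # 2x2 minors such as u12**2 - u11*u22, etc.
```

The same elimination with the full set of mesons u_n, v_n, w_n and 7(N-1) eigenvalues is what becomes computationally prohibitive at larger N, as described next.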
Employing the two strategies — the eigenvalue counting and the Gröbner basis — explained so far, we have obtained a closed-form expression for the graviton index of the SU(2) theory,

Z_grav^SU(2) = (1+3t^4-8t^6-6t^10+10t^12+9t^14-9t^16+16t^18-18t^20-3t^22+t^24-3t^26+9t^28-2t^30+3t^32-3t^34) / ((1-t^4)^3 (1-t^8)^3) .

This result was obtained in <cit.>, where the eigenvalue counting strategy was used but the Gröbner basis was not. Here, we have reproduced this result by finding a Gröbner basis of relations between SU(2) BMN gravitons that consists of 66 generators (after truncation), and counting the set of all possible remainders under division by those.

Unfortunately, the computation of the Gröbner basis quickly becomes very cumbersome if the generators of the constraints {g_i - g_i(λ_I)} are numerous and complicated. For relations between a subset of SU(3) BMN gravitons that do not involve f, i.e. u_n and v_n in (<ref>), we found the Gröbner basis with 1170 generators (after truncation) after several hours of computation on a computer. For the complete set of SU(3) BMN gravitons including w_n, we were unable to find the Gröbner basis due to the lack of computing resources: it would take months at least, and it is tricky to parallelize. Therefore, we have devised a hybrid method to take maximal advantage of the Gröbner basis obtained for the non-f subsector, as we now describe.

We first list the complete and independent monomial basis of graviton operators, i.e. the set of monomials of the mesons g_i that consist of u_n and v_n but not of w_n (n=2,3), up to the charge order j=54. This can be done for any order j because the Gröbner basis for the non-f subsector has been obtained. Then, one can construct an overcomplete set of all graviton operators by multiplying each basis element from the previous step by arbitrary numbers of w_2 and w_3, again up to j=54. Note that w_2 and w_3 include 3 and 6 different species of single-graviton operators, respectively, so the size of the overcomplete set grows quickly.

It is helpful to fragment the problem by classifying the operators according to their charges. Namely, each charge sector is specified by 4 non-negative integers, 2J and q_I = R_I + J (where I=1,2,3). Note that the overall order j used for grading the operators is j=2(q_1+q_2+q_3), and therefore it is always even in the BMN sector. This classification is useful because all single-graviton operators u_n, v_n, w_n, and therefore all multi-graviton operators, have definite charges, and operators with different sets of charges can never have a linear relation between them. Moreover, different charge sectors with merely permuted charges (q_1, q_2, q_3) should contain the same number of independent graviton operators. Therefore, we separately consider the overcomplete basis of gravitons in each charge sector with q_1 ≤ q_2 ≤ q_3.

In order to count the linearly independent operators among the overcomplete set in any charge sector, we rewrite each operator as a polynomial of the eigenvalues. This is done by substituting the mesons with the corresponding eigenvalue polynomials u_n(λ_I), v_n(λ_I) and w_n(λ_I), which are obtained by writing the gluons in terms of their eigenvalues.
For the eigenvalues of the SU(3) traceless elementary fields, we use the convention f = diag(f_1, f_2, -f_1-f_2), and likewise for the other fields. This process of multiplying polynomials requires some computational strategy. On the one hand, it would be very inefficient to multiply the meson polynomials u_n(λ_I), v_n(λ_I) and w_n(λ_I) anew for each operator, because hundreds of different operators may have a factor in common, which is itself a graviton operator of a not-so-high order. So it is wise to compute each intermediate operator only once and store it as a polynomial, making it available when computing a higher-order operator that can be obtained by multiplying more mesons into it. On the other hand, storing all intermediate gravitons as polynomials takes up too much memory, and in fact many gravitons near the maximum charge order (j=54) will not be used so repeatedly, especially if the computation is parallelized so that no single computer computes all the charge sectors. We find a balance between the two approaches by computing and storing polynomials for all combinations of u_2 and u_3 that appear in the independent basis of the non-f sector, and similarly for all combinations of v_2 and v_3 that appear in the independent basis. Then, we multiply one element from each of these two storages and polynomials for w_2 and w_3 as needed, for each operator in an overcomplete basis.

The number of independent polynomials within each charge sector is determined as the rank of their coefficient matrix. We have used the software <cit.> for finding the Gröbner basis, for writing each operator as an eigenvalue polynomial, and for extracting the coefficient matrix within each charge sector, as well as for computing the rank of the matrix. The computation of the indices for the SU(3) theory has been performed up to j=54 on personal computers. For example, for the charge sector (2J, q_1, q_2, q_3) = (7,9,9,9), which turns out to be the largest, the coefficient matrix was 31026 × 20940, with rank 3242.
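The rank counting itself can be sketched in a few lines of sympy: write each candidate graviton as an eigenvalue polynomial, assemble the coefficient matrix over a common monomial basis, and take its rank. The toy demo below exhibits the N=2 relation u^{11}u^{22} - (u^{12})^2 ∼ 0 as a rank deficit; our actual computations use the same logic on far larger matrices.

```python
import sympy as sp

def count_independent(polys, variables):
    """Rank of the coefficient matrix of the given polynomials over the
    union of their monomials = number of linearly independent ones."""
    dicts = [sp.Poly(p, *variables).as_dict() for p in polys]
    monos = sorted({m for d in dicts for m in d})
    M = sp.Matrix([[d.get(m, 0) for m in monos] for d in dicts])
    return M.rank()

l1, l2 = sp.symbols('l1 l2')
p1 = (2*l1*l1) * (2*l2*l2)        # u^{11} u^{22} on SU(2) eigenvalues
p2 = (2*l1*l2)**2                 # (u^{12})^2 on SU(2) eigenvalues
print(count_independent([p1, p2], (l1, l2)))   # 1: one trace relation
```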
The computation of indices for the SU(4) theory has been performed up to j=30 on personal computers. For example, in the charge sector (2J, q_1, q_2, q_3) = (3,5,5,5), which turns out to be the largest, the coefficient matrix was 12079 × 116042, with rank 3788.

§.§ SU(3)

Following the computational procedures explained above, we have computed the SU(3) graviton index Z_grav up to order t^54. We write the difference Z-Z_grav with the full index Z, which is the index over non-graviton cohomologies or the `black hole' cohomologies, as

Z-Z_grav = Z_core(Δ_I) · ∏_I=1^3 1/(1-e^-Δ_I) · e^-Δ_1-Δ_2-Δ_3 · ∏_I<J (1-e^-Δ_I-Δ_J) .

The factors that dress the index over core non-graviton cohomologies will be explained shortly. Z_core(Δ_I) ≡ f(t,x,y), with e^-Δ_1 = t^2 x, e^-Δ_2 = t^2 y^-1, e^-Δ_3 = t^2 x^-1 y, can be expanded as

f(t,x,y) = ∑_j=0^54 ∑_R_j (-1)^F(R_j) χ_R_j(x,y) t^j + 𝒪(t^56) ,

where R_j runs over the SU(3) irreducible representations which appear at t^j order (j is even in the BMN sector), χ_R_j(x,y) is its character, and F(R_j) is its fermion number. The representations R_j appearing in the expansion of f, together with their bosonic/fermionic natures, are shown in Table <ref>. We have classified the representations into several groups, i.e. what we suspect to be the fermionic towers F_0,...,F_4, the bosonic towers B_1,...,B_3, and the remainders F_exc, B_exc for which we do not see particular patterns (thus named `exceptional').

We comment on the factors which we have taken out in (<ref>). The factor ∏_I<J(1-e^-Δ_I-Δ_J) accounts for SU(1|3) descendants. For each non-graviton cohomology in R_j that contributes to Z_core, the entire SU(1|3) multiplet obtained by acting with the three fermionic generators Q_+^m must also be non-graviton cohomologies. Every such multiplet is a long multiplet of SU(1|3), so the corresponding character is simply the contribution from the primary times the factor ∏_I<J(1-e^-Δ_I-Δ_J). This fact can be argued using the embedding supergroup PSU(2,2|4) of the 4d 𝒩=4 theory. For any of the three generators Q_+^m to annihilate the SU(1|3) primary, the primary of a bigger representation of PSU(2,2|4) that includes the SU(1|3) multiplet must be annihilated by Q_+^4 and by the SU(4)_R lowering operator that is not part of the SU(3) ⊂ SU(4)_R. The only PSU(2,2|4) representations that satisfy this property are B_1 B̅_1 [0;0]^(0,n,0), namely the graviton operators, or the identity. For details on the relevant representation theory, we refer to <cit.>, particularly its section 2.2.4, or to appendix B of <cit.>.

The second factor of (<ref>) was taken out for an empirical reason, with the expectation that the corresponding states come from the graviton hairs of the w_2's in (<ref>). Namely, we conjecture that w_2 gravitons multiplying the core black hole cohomologies represented by Z_core provide nontrivial product cohomologies. Although we have little logical justification of the last claim (except that similar hairs are allowed in the SU(2) theory), we think that the phenomenological evidence for this claim is compelling, since various simple patterns in Table <ref> become clear only after factoring it out.

Now we comment on various structures that we observe from Table <ref>. We first discuss the possible product cohomologies obtained by multiplying black hole cohomologies and gravitons.
When a product is Q-exact, it is interpreted <cit.> as a finite N generalization of the black hole no-hair theorem in the BPS sector.[This does not mean that the product states are absent: the Q-exact products acquire anomalous dimensions and become non-BPS. Similar structures are known in the gravity dual: a charged bulk field develops hairs around AdS black holes in the non-BPS regime, which become pathological/singular in the BPS limit <cit.>.] We cannot conclude just from the index whether these product states exist or not. However, if possible product states do not appear in the index, it is suggestive that the corresponding hair is not allowed in the cohomology. In the SU(2) theory, several simple product cohomologies which do not appear in the index were explicitly shown to be Q-exact. Similar calculations are much more difficult for SU(3), and we did not attempt them.

First of all, as explained above, all possible product cohomologies obtained by multiplying w_2 are factored out. Mainly appealing to the simpler spectral structures of the resulting Z_core, as shown in Table <ref>, we conjecture that the w_2 hairs are (at least mostly) allowed. Apart from w_2, we discuss below the possibilities of other graviton hairs. We divide the discussion into the possible hairs of w_3 (which are the only gravitons heavier than w_2) and the rest.

Table <ref> does not seem to signal hairs from gravitons which are no heavier than w_2. The gravitons with j charges no larger than that of w_2 are: u_2 in [2,0]_4, v_2 in [1,1]_6, u_3 in [3,0]_6, and v_3 in [2,1]_8, where the subscript denotes j. Some products clearly do not appear in Table <ref>. For instance, [0,0]_24^F_0 in Table <ref> times u_2 does not appear in the index, because we find no states at [2,0]_28 with fermionic statistics in the table. There are many more products which similarly do not appear in the table. Of course, just by matching charges and representations, there are several possibilities in which an entry of Table <ref> can be accounted for by a tensor product of another entry and these gravitons. For instance, u_2 may multiply the state [p,q]_j in F_1,...,F_4 or B_1,...,B_3 (with either q=0,1) to yield [p+2,q]_j+4 in the same tower. However, we feel that this is quite unlikely. For instance, in F_1, this may explain [5,0]_34, [7,0]_38, ⋯ from [3,0]_30, but not [4,0]_32, [6,0]_36, ⋯. In other words, all these towers are more naturally generated by adding one scalar at a time (as an adjoint letter) rather than two scalars (in the gauge-invariant form of u_2). Similarly, u_3 may multiply [p,q]_j in a tower to yield [p+3,q]_j+6 in the same tower, but again for the same reason we find this unappealing. Apart from these, sporadically, there exist several group-theoretic possibilities of light graviton hairs in Table <ref>, in particular accounting for some entries in F_exc or B_exc as product cohomologies. We shall not list all the possibilities here. It may be worthwhile to think about whether some of the `exceptional' cohomologies in F_exc or B_exc can be accounted for by such sporadic graviton hairs.

Now we consider possible hairs obtained by multiplying w_3 of (<ref>). Again, this is just a possibility from matching the charges/representations. It is group-theoretically possible that the towers F_2, F_3 are the hairy cohomologies obtained by multiplying w_3's to the F_1 tower. This is because the product of [n,0]_j in F_1 and [2,0]_10 decomposes to

[n,0]_j ⊗ [2,0]_10 = [n+2,0]_j+10 ⊕ [n,1]_j+10 ⊕ [n-2,2]_j+10 ,

containing [n+2,0]_j+10 which is in F_2.
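As a quick sanity check of this decomposition, one can compare dimensions at n=2, using the SU(3) dimension formula dim[p,q] = (p+1)(q+1)(p+q+2)/2:

dim[2,0] × dim[2,0] = 6 × 6 = 36 ,  dim[4,0] + dim[2,1] + dim[0,2] = 15 + 15 + 6 = 36 .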
Similarly, multiplying two such gravitons, one obtains states in [n+4,0]_j+20, which is in F_3. If F_4 also forms a tower, with partial cancellations with B_2, then F_4 may also consist of product cohomologies of F_1 and these gravitons, because the right-hand side of (<ref>) contains [n,1]_j+10. Similarly, if the tower B_2 exists (by cancelling partly with F_4), B_2 and B_3 could be product cohomologies of B_1 and w_3's, because [n,1]_j ⊗ [2,0]_10 contains [n+2,1]_j+10. The possibilities discussed above suggest that the colored towers (cyan or gray) in Table <ref> may be product cohomologies containing w_3. If these colored towers are indeed obtained by multiplying w_3 to F_1 or B_1, note that the last possible tower [n-2,2]_j+10 in (<ref>) does not exist in the index, which could mean that this part of the product is Q-exact.

If the possibilities raised in the previous paragraph are indeed true, it implies that the allowed hair structure of the w_3 gravitons in [2,0]_10 is more delicate than that of w_2. For instance, our conjecture on w_2 is that their hairs are universally allowed, irrespective of the core black hole cohomology chosen. On the other hand, if the scenario of the previous paragraph is true, the allowed/disallowed combinations of the w_3 hair are determined only after entangling the gravitons with the core black hole states. This is not the familiar form of the no-hair theorem of semiclassical black hole physics. That is, the no-hair theorem as well as its violation is stated for a given black hole background, which represents the whole ensemble of states. It will be interesting to see whether the allowed hairs exhibit subtle dependence on the fine-grained information within the black hole ensemble, and if they do, whether this may have implications for the fuzzball paradigm of black holes <cit.>. We emphasize that all the scenarios discussed above can be straightforwardly confirmed/disproved once the cohomologies of F_1, B_1 are constructed, by checking whether the product cohomologies discussed above are Q-exact or not.

Now we discuss the possible charge structures and the field contents of the towers. We first make general considerations. Suppose that n_ϕ scalars, n_ψ fermions and n_f field strengths are used to make an operator in the SU(3) representation [p,q]. The operator is not necessarily made by just one choice of n_ϕ, n_ψ, n_f, but in general superposes many different terms. So we are in fact studying the structure of a given term in the operator. Let us also introduce the following non-negative integers:

* l pairs of contractions are made between the SU(3) indices of ϕ's and ψ's;
* m_ϕ pairs of ϕ's are contracted with ϵ_abc to yield lower indices;
* m_ψ pairs of ψ's are contracted with ϵ^abc to yield upper indices;
* b_ϕ threesomes of ϕ's are contracted with ϵ_abc;
* b_ψ threesomes of ψ's are contracted with ϵ^abc.

Once the above contractions are made, the remaining upper/lower indices are respectively symmetrized to yield the [p,q] representation (after subtracting certain terms to ensure that the indices are traceless). So one obtains

p = n_ϕ - l - 2m_ϕ - 3b_ϕ + m_ψ ,  q = n_ψ - l - 2m_ψ - 3b_ψ + m_ϕ .

Since ϕ, ψ and f carry (R,J) = (1/3,0), (1/6,1/2) and (0,1) respectively, the charges are given by

R = n_ϕ/3 + n_ψ/6 = p/3 + q/6 + l/2 + m_ϕ/2 + b_ϕ + b_ψ/2 ,
J = n_ψ/2 + n_f = q/2 + n_f + l/2 + m_ψ + 3b_ψ/2 - m_ϕ/2 ,
j = 6(R+J) = 2p + 4q + 6n_f + 6l + 6m_ψ + 6b_ϕ + 12b_ψ .
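As a consistency check of these formulas, apply them to two single-graviton operators whose charges were quoted earlier (u_2 with n_ϕ=2 and v_2 with n_ϕ=n_ψ=1, all contraction integers vanishing):

u_2 :  (p,q) = (2,0) ,  R = 2/3 ,  J = 0 ,  j = 6(R+J) = 4  ⇒  [2,0]_4 ,
v_2 :  (p,q) = (1,1) ,  R = 1/3 + 1/6 = 1/2 ,  J = 1/2 ,  j = 6  ⇒  [1,1]_6 ,

in agreement with the charges of u_2 and v_2 quoted above.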
Now we apply these results to the tower F_1, for the states with [p,q]_j = [n+3,0]_30+2n, where n=0,1,2,⋯. (Similar studies can be made for the other towers in Table <ref>, except F_0.) Inserting these expressions into (<ref>), one obtains

4 = n_f + l + m_ψ + b_ϕ + 2b_ψ ,
J = n_f + l/2 + m_ψ + 3b_ψ/2 - m_ϕ/2 ,
R = n/3 + 1 + l/2 + m_ϕ/2 + b_ϕ + b_ψ/2 ≥ n/3 + 1 .

The last inequality is saturated if and only if l = m_ϕ = b_ϕ = b_ψ = 0. On the other hand, from j = 2n+30 and the lower bound on R, one obtains

J = j/6 - R = 5 + n/3 - R ≤ 4 .

This implies that F_1 is a tower in which R increases indefinitely, with J bounded by J ≤ 4. From the first equation of (<ref>), n_f, l, m_ψ, b_ϕ, b_ψ are all bounded from above. Also, from the second equation of (<ref>), one finds n_ψ = l + 2m_ψ + 3b_ψ - m_ϕ. The first three terms on the right-hand side are bounded, so n_ψ is bounded from above. Furthermore, by the non-negativity of n_ψ, m_ϕ is also bounded from above. The only unbounded non-negative integer is n_ϕ. So this tower has increasing numbers of scalars. In other words, this is a Kaluza-Klein tower, carrying increasing momentum charges on S^5, rather than a higher-spin tower.

We do not know which type of tower F_0 is, between higher-spin and Kaluza-Klein. Apparently, the states in F_0 appear whenever j increases by 6, except at j=54, where we found no states in the [0,0] representation. It is unclear which of the following is true (if any): (i) the absence of the tower F_0 beyond this charge; (ii) a multiple tower structure within F_0 with different periods; (iii) the existence of an exceptional bosonic cohomology at this order which cancels with the tower at j=54. Related to this tower, we comment that the SU(2) cohomologies in the BMN sector showed the following index <cit.>:

Z - Z_grav = Z_core(Δ_I) · ∏_I=1^3 1/(1-e^-Δ_I) · e^-Δ_1-Δ_2-Δ_3 · ∏_I<J (1-e^-Δ_I-Δ_J) ,
Z_core = -∑_n=0^∞ t^24+12n .

The whole tower of cohomologies for Z_core was also constructed at arbitrarily large j, with R fixed and J increasing. So this is a higher-spin tower of core black hole primaries. This suggests that the SU(2) tower may be protected by certain symmetries, perhaps of the sort discussed in <cit.>. It is not clear whether the F_0 tower in Table <ref> can be understood in a similar way.

The Kaluza-Klein towers are apparently not related to any symmetries and thus may not be protected at large energies or large j. In particular, it will be interesting to see if these towers are related to the towers of giant gravitons appearing in the so-called `giant graviton expansion' of the index <cit.>. This expansion recasts the index as an auxiliary summation over a tower, with increasing giant graviton number. More precisely, the giant graviton tower includes both finite N gravitons (or the finite N trace relations to be subtracted) as well as new black hole states formed by bound states of D-branes and open string excitations. In this framework, in certain charge regimes, the black hole entropy is obtained by first computing the entropy S(j,n) with fixed giant graviton number n and then maximizing S(j,n) over n at fixed j <cit.>. (See <cit.> for how one may generalize this calculation.) Indeed, there is a signal that the giant graviton tower loses its meaning at higher n ≫ N <cit.>. With this interpretation in mind, it will be interesting to compute Z - Z_grav either to higher orders in j or exactly, to see whether the KK towers survive at arbitrarily high j or not. We feel that the latter possibility is more natural.
Conceptually, these Kaluza-Klein towers look analogous to the D-brane giant graviton towers, which are bad variables at very large energies. More practically, we suspect the disappearance of these towers for the following reason. At least till j=54, the major observed pattern of the fermionic towers is F_1, F_2, F_3, ⋯. The tower F_n starts from [3+2n,0]_30+10n, and the k'th excited states are in [3+2n+2k,0]_30+10n+2k. If all these towers survive at very large j, the following states contribute to Z_core at charge j:

[j,0]_2j+24 ⊕ [j-3,0]_2j+24 ⊕ [j-6,0]_2j+24 ⊕ ⋯ .

The number of states carried by this sequence is proportional to Ω_j^F ∼ j^3. A similar calculation can be done for the tower of bosonic states B_1, B_2, B_3, ⋯, supposing that they continue indefinitely. The number of states at large charge j from these towers is also proportional to Ω_j^B ∼ j^3. The two large numbers Ω_j^F, Ω_j^B do not cancel substantially, so that Ω_j^F - Ω_j^B is also of order j^3. The other two factors in (<ref>), from the graviton hair and the supermultiplets, do not change these asymptotics. On the other hand, recall from section 2.1 that the asymptotic degeneracy of the full BMN index at N=3 scales like j^2, which is much smaller than j^3. One possible explanation is that the SU(3) graviton part Z_grav of the index scales like j^3 and makes a fine-tuned cancellation with the tower contribution to yield the j^2 scaling. We cannot check or rule out this possibility because we could not compute Z_grav exactly. Although this is logically possible, there is no natural reason to expect such a fine-tuned cancellation. Another possibility is that the hierarchies of towers F_1, F_2, F_3, ⋯ and B_1, B_2, B_3, ⋯ stop existing at large enough j. It will be interesting to explicitly check the situation at large j.

§.§ SU(4)

In the SU(4) case, we computed Z_grav till the j=30 level. The index Z - Z_grav over non-graviton cohomologies is given by

Z - Z_grav = [ -χ_[2,0](x,y) t^28 - χ_[3,0](x,y) t^30 + 𝒪(t^32) ] · ∏_I<J (1-e^-Δ_I-Δ_J) .

The second factor generates the Fock space of each SU(1|3) multiplet, while the first factor in the square brackets represents the non-graviton primaries. One finds that the BMN index predicts an apparent threshold of non-graviton cohomologies at j = 6(R+J) = 28. Again, conservatively, this is an upper bound for the threshold, for two different reasons: first because the index may miss a pair of canceling threshold cohomologies at lower charges, and also because the true threshold might lie outside the BMN sector (carrying nonzero SU(2)_r spin J_1 - J_2). Anyway, the above apparent threshold is higher than the SU(3) threshold. So it is natural to expect that it was an exception that the SU(2) and SU(3) thresholds were the same: the (apparent) thresholds for j = 6(R+J) are 24, 24, 28, ⋯ for N = 2, 3, 4, ⋯. To obtain the threshold level in terms of the energy E = 3R+2J, one should construct the actual cohomologies which account for the t^28 term. This will not be done in this paper.

§ CONSTRUCTING COHOMOLOGIES

The cohomologies we would like to construct should be Q-closed and not Q-exact. Unlike for gravitons, the Q-closedness of the black hole cohomologies should be ensured by the trace relations. (Otherwise, if an operator is a cohomology at given energy and at arbitrary values of N, it is a graviton cohomology.) So it is important to know what kind of nontrivial trace relations are available for N × N matrices when the number of fields is larger than N. It seems to be widely believed that all SU(N) trace relations are derived from the Cayley-Hamilton identity; for instance, see <cit.> (p.7, below eqn. (19)) and <cit.>.
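For orientation, here is the simplest instance of this mechanism (standard linear algebra, not specific to this paper). For a traceless 2×2 matrix X, the Cayley-Hamilton identity gives

X^2 - (tr X) X + (det X) 1 = 0  →(tr X = 0)→  X^2 = 1/2 tr(X^2) 1 ,

so that, e.g., tr(X^3) = 1/2 tr(X^2) tr(X) = 0: all higher traces of X reduce to polynomials in tr(X^2). Multi-letter analogues of such identities are the trace relations used below.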
In practice, however, it is inefficient to search for the trace relations that we need just from this identity. Fortunately, we already know many trace relations from the calculations reported in section 3. Namely, when enumerating finite N gravitons, we have counted them subject to various trace relations of the generators g_i. So one can take advantage of these trace relations to construct black hole cohomologies. This leads to our `ansatz' for black hole cohomologies, which we explain now.

We can motivate the idea with a simple example in the SU(2) theory <cit.>. A representative of the threshold non-graviton cohomology in SU(2) is given by

O_0 ≡ ϵ^abc (v_2)^m_a (v_2)^n_b tr(ψ_(cψ_mψ_n)) ,

where v_2 is the graviton operator in the S_2 multiplet. Let us see how this operator becomes Q-closed. Acting with Q on O_0, Q acts only on tr(ψ_(cψ_mψ_n)) since v_2 is Q-closed. One obtains

Q tr(ψ_(cψ_mψ_n)) ∝ ϵ_ab(c (v_2)^a_m (v_2)^b_n) ≡ R(v_2)_cmn

after using SU(2) trace relations. Plugging this into QO_0, one obtains

QO_0 ∝ ϵ^abc (v_2)^m_a (v_2)^n_b R(v_2)_cmn = 0 .

At the last step, one can show that the quartic mesonic polynomial ϵ^abc (v_2)^m_a (v_2)^n_b R(v_2)_cmn is identically zero <cit.>. From the viewpoint of section 3, (<ref>) are graviton trace relations, and the last step of (<ref>) is a relation of relations. So the operator O_0 is shown to be Q-closed by using the trace relations and a relation of relations of the finite N graviton operators.

This idea can be extended to construct operators which become Q-closed only after using trace relations. Namely, for each relation of relations such as (<ref>), we can construct a Q-closed operator such as (<ref>). We still need to check that they are not Q-exact in order for them to represent nontrivial Q-cohomologies. Also, there are non-graviton cohomologies which are not constructed this way <cit.>. For these reasons, the Q-closed operators constructed in this way are mere ansätze for the non-graviton cohomologies.

In the appendix, we have collected all SU(3) fundamental trace relations involving u_n, v_n only, and have written them manifestly in Q-exact form. We have also found the trace relations involving u_n, v_n, w_n till the j=20 order. We have also found all relations between the fundamental graviton trace relations at j=24, and some of them at the j=30 order, in the SU(3) ⊂ SO(6)_R singlet sector where the index predicts non-graviton cohomologies (see Table <ref>). In other charge sectors, one can immediately write down Q-closed operators if one finds new relations between the fundamental trace relations.

When we write a fundamental trace relation R_a in a Q-exact form, R_a ∼ Q r_a, there is an ambiguity in r_a by the addition of arbitrary Q-closed operators. We partly fix it so that r_a vanishes when all the letters are restricted to diagonal matrices. Since the Q-closed operators constructed from relations of relations are linear combinations of the r_a's, they vanish with diagonal letters. This makes it impossible for our ansatz to yield gravitons. So our ansatz either yields Q-exact operators or non-graviton cohomologies.

Based on the ansatz, in subsection 4.1, we construct a number of gauge-invariant Q-closed non-graviton operators at the j=24 order that are singlets of the SU(3) global symmetry. Only one of them is not Q-exact, representing the non-graviton cohomology predicted by the index.
In subsection 4.2, we sketch how we checked the (non-)Q-exactness of these operators, while also showing that there are no other non-graviton cohomologies in the j=24, SU(3) singlet sector.

§.§ SU(3) threshold cohomology from ansätze

In this subsection, we present the explicit form of the black hole cohomology at the threshold level j=24 which is a singlet under the SU(3) ⊂ SU(4)_R global symmetry, in the BMN sector of the SU(3) gauge theory. We first list the non-graviton Q-closed operators from our ansatz. We find one non-Q-exact operator among them, which is the threshold cohomology.

At j ≡ 6(R+J) = 24, operators are further distinguished by the R-charge R ≡ (R_1+R_2+R_3)/3. The BMN operators which are SU(3) ⊂ SU(4)_R singlets satisfy R_1=R_2=R_3 and J_1=J_2. Then the possible charges of the operators are (R,J) = (n/2, (8-n)/2) where n=0,⋯,8. In each charge sector, the number of letters is fixed to n+4. However, our ansatz further restricts the charges, since Q acting on our ansatz should become a polynomial of u_2,3, v_2,3, w_2,3. As a result, there exist in total 7 possible charge sectors within our ansatz: (R,J) = (n/2, (8-n)/2) where n=1,⋯,7.

When (R,J) = (1/2, 7/2) or (1,3), there are no Q-closed operators within our ansatz using the trace relations in the appendix. One can understand this heuristically as follows. At these charges, R is so small that only a small number of scalars is admitted. As the graviton generators contain at least one scalar field, only a few types of graviton polynomials exist in these sectors, which are not enough to host relations of relations. Therefore, these charge sectors are incompatible with our ansatz. The other 5 charge sectors host Q-closed operators in our ansatz, whose explicit forms will be presented below.

We now present the Q-closed non-graviton operators in each of the five charge sectors, (R,J) = (n/2, (8-n)/2) where n=3,⋯,7. For convenience, we rewrite here the definition of the single-trace generators of the SU(3) BMN gravitons u_2,3, v_2,3, w_2,3:

u^ij ≡ tr(ϕ^(iϕ^j)) ,  u^ijk ≡ tr(ϕ^(iϕ^jϕ^k)) ,
v^i_j ≡ tr(ϕ^i ψ_j) - 1/3 δ^i_j tr(ϕ^a ψ_a) ,  v^ij_k ≡ tr(ϕ^(iϕ^j)ψ_k) - 1/4 δ^i_k tr(ϕ^(jϕ^a)ψ_a) - 1/4 δ^j_k tr(ϕ^(iϕ^a)ψ_a) ,
w^i ≡ tr(f ϕ^i + 1/2 ϵ^ia_1a_2 ψ_a_1ψ_a_2) ,  w^ij ≡ tr(f ϕ^(iϕ^j) + ϵ^a_1a_2(i ϕ^j) ψ_a_1ψ_a_2) .

i) (R,J) = (3/2, 5/2)

The operators in this sector are made of 7 letters. The possible numbers (n_ϕ, n_ψ, n_f) of scalars, fermions and f's in each term are (n_ϕ, n_ψ, n_f) = (4,1,2), (3,3,1), (2,5,0). We find 1 Q-closed operator in this sector from the trace relations and a relation of relations in appendix A. This Q-closed operator is given by

O^(2,1) ≡ 65 u^ij (r_20^(2,1))_ij - 39 w^ij (r_14^(1,1))_ij + 5 w^i (r_16^(1,1))_i + 312 v^jk_i (r_16^(1,2))^i_jk + 26 v^j_i (r_18^(1,2))^i_j + 6 w^i (r_16^(0,3))_i .

The superscripts denote (n_f, n_ψ) of the terms with maximal n_f in the operator. The r_j^(n_f,n_ψ)'s are given in (<ref>), (<ref>), where R_j^(n_f,n_ψ-1) ≡ i Q r_j^(n_f,n_ψ)'s are the fundamental trace relations. The Q-closed operator (<ref>) turns out to be Q-exact. In fact, (<ref>) is even under the parity transformation of <cit.>. It is already known that all such even operators in this charge sector are Q-exact for all N ≥ 3 <cit.>.

ii) (R,J) = (2, 2)

The operators in this sector are made of 8 letters. Allowed (n_ϕ, n_ψ, n_f) are (6,0,2), (5,2,1), (4,4,0).
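As a quick consistency check of these letter contents, recall that ϕ, ψ and f carry (R,J) = (1/3,0), (1/6,1/2) and (0,1), so that R = n_ϕ/3 + n_ψ/6 and J = n_ψ/2 + n_f:

(6,0,2): R = 6/3 = 2 , J = 0+2 = 2 ;  (5,2,1): R = 5/3 + 2/6 = 2 , J = 1+1 = 2 ;  (4,4,0): R = 4/3 + 4/6 = 2 , J = 2+0 = 2 ,

and in each case n_ϕ+n_ψ+n_f = 8 letters, matching n+4 at n=4.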
We find 4 Q-closed operators in this sector given by

O_1^(1,2) ≡ -3v^(j_i w^k) (r_10^(0,1))^i_jk -3u^(ijw^k) (r_12^(0,2))_ijk +ϵ_a_1a_2i u^a_1j w^a_2 (r_12^(0,2))^i_j ,
O_2^(1,2) ≡ -9u^a(iv^j)_a (r_14^(1,1))_ij +10ϵ_a_1a_2(iu^a_1kv^a_2_j) (r_14^(1,1))^ij_k + 30 v^(j_i w^k) (r_10^(0,1))^i_jk +60u^(jkv^l)_i (r_14^(0,3))^i_jkl ,
O_3^(1,2) ≡ -3u^a(iv^j)_a (r_14^(1,1))_ij +6ϵ_a_1a_2(iu^a_1kv^a_2_j) (r_14^(1,1))^ij_k +4u^ijk (r_18^(1,2))_ijk +14v^(j_i w^k) (r_10^(0,1))^i_jk -6w^ij (r_14^(0,2))_ij -12ϵ^a_1a_2(iv^j_a_1v^k)_a_2 (r_12^(0,2))_ijk -4v^j_a v^a_i (r_12^(0,2))^i_j ,
O_4^(1,2) ≡ -3u^a(iv^j)_a (r_14^(1,1))_ij +14ϵ_a_1a_2(iu^a_1kv^a_2_j) (r_14^(1,1))^ij_k +8v^jk_i (r_16^(1,1))^i_jk +42v^(j_i w^k) (r_10^(0,1))^i_jk +12 u^(ijw^k) (r_12^(0,2))_ijk -24 w^ij (r_14^(0,2))_ij -36ϵ^a_1a_2(iv^j_a_1v^k)_a_2 (r_12^(0,2))_ijk - 8 v^jk_i (r_16^(0,3))^i_jk .

All operators in (<ref>) are Q-exact.

iii) (R,J) = (5/2, 3/2)

The operators in this sector are made of 9 letters. Allowed (n_ϕ, n_ψ, n_f) are (7,1,1), (6,3,0). We find 13 Q-closed operators in this sector given by

O_1^(1,1) ≡ ϵ_a_1a_2 i u^a_1 (j w^k) a_2 (r_10^(0,1))^i_jk ,
O_2^(1,1) ≡ ϵ_a_1a_2 i u^a_1 jk w^a_2 (r_10^(0,1))^i_jk ,
O_3^(1,1) ≡ ϵ_a_1a_2i ϵ_b_1b_2j u^a_1b_1 u^a_2b_2k (r_14^(1,1))^ij_k +5v^a_i v^jk_a (r_10^(0,1))^i_jk -2v^(j_a v^k)a_i (r_10^(0,1))^i_jk ,
O_1^(0,3) = -ϵ_i a_1 a_2 ( 4 u^a_1 b v^j a_2_b + 3u^j a_1b v^a_2_b) (r_12^(0,2))^i_j = 1/2 iQ ((r_12^(0,2))^i_j (r_12^(0,2))^j_i) ,
O_2^(0,3) = -ϵ_a_1 a_2 (i (u^a_1(kv^l)a_2_j) + u^kl a_1 v^a_2_j)) (r_12^(0,2))^ij_kl = 1/2 iQ ((r_12^(0,2))^kl_ij (r_12^(0,2))^ij_kl) ,
O_3^(0,3) ≡ -u^a(iv^jk)_a (r_12^(0,2))_ijk ,
O_4^(0,3) ≡ -ϵ_a_1 a_2 i u^a_1 b v^a_2_b (r_14^(0,2))^i ,
O_5^(0,3) ≡ 6v^a_i v^jk_a (r_10^(0,1))^i_jk +6u^a(ijv^k)_a (r_12^(0,2))_ijk +ϵ_a_1a_2 i u^a_1 bj v^a_2_b (r_12^(0,2))^i_j ,
O_6^(0,3) ≡ 24 v^(j_a v^k)a_i (r_10^(0,1))^i_jk +6 u^a(iv^j)_a (r_14^(0,2))_ij - ϵ_a_1a_2(i u^a_1 k v^a_2_j) (r_14^(0,2))^ij_k ,
O_7^(0,3) ≡ v^a_i v^jk_a (r_10^(0,1))^i_jk -10v^(j_a v^k)a_i (r_10^(0,1))^i_jk +6u^a(ijv^k)_a (r_12^(0,2))_ijk +10 ϵ_a_1a_2(i u^a_1 kl v^a_2_j) (r_12^(0,2))^ij_kl ,
O_8^(0,3) ≡ 5v^a_i v^jk_a (r_10^(0,1))^i_jk -2v^(j_a v^k)a_i (r_10^(0,1))^i_jk +9u^a(ijv^k)_a (r_12^(0,2))_ijk +6ϵ_a_1 a_2 i u^a_1 (j u^kl) a_2 (r_14^(0,3))^i_jkl ,
O_9^(0,3) ≡ 6v^a_i v^jk_a (r_10^(0,1))^i_jk +12v^(j_a v^k)a_i (r_10^(0,1))^i_jk +18u^a(ijv^k)_a (r_12^(0,2))_ijk -ϵ_a_1a_2(i u^a_1 k v^a_2_j) (r_14^(0,2))^ij_k ,
O_10^(0,3) ≡ 38v^a_i v^jk_a (r_10^(0,1))^i_jk +4v^(j_a v^k)a_i (r_10^(0,1))^i_jk +24u^a(ijv^k)_a (r_12^(0,2))_ijk +5u^(jkv^l)_i (r_14^(0,2))^i_jkl .

All except for O^(0,3)_6 in (<ref>) are Q-exact. Therefore, a representative of the cohomology in this sector can be written as

O ≡ -6O_6^(0,3) = 288 v^j_a v^ka_i ϵ_c_1c_2(j ( ϕ^c_1ϕ^c_2ϕ^iψ_k)) -72 v^a_b v^bk_a ϵ_c_1c_2(k ( ϕ^c_1ϕ^c_2ϕ^dψ_d)) +36ϵ_a_1a_2i u^a_1 k v^a_2_j [ 2(ϕ^(iϕ^cϕ^j)ψ_(cψ_k)) +2(ϕ^(i|ϕ^cϕ^|j)ψ_(cψ_k)) +9 ( ϕ^(iϕ^jψ_(cϕ^c)ψ_k)) -6 ( ϕ^(iϕ^j)ψ_(cϕ^c ψ_k)) ] -9ϵ_a_1a_2j u^a_1 b v^a_2_b [ 2(ϕ^(jϕ^cϕ^d)ψ_(cψ_d)) +2(ϕ^(j|ϕ^cϕ^|d)ψ_(cψ_d)) +9 ( ϕ^(jϕ^dψ_(cϕ^c)ψ_d)) -6 ( ϕ^(jϕ^d)ψ_(cϕ^c ψ_d)) ] -20 u^ai v^j_a ϵ_b_1b_2b_3 [2 (ψ_(iψ_j)ϕ^b_1ϕ^b_2ϕ^b_3) + (ψ_(iϕ^b_1ψ_j)ϕ^b_2ϕ^b_3)] -36 u^ai v^j_a ϵ_b_1b_2(i [(ψ_j)ψ_cϕ^b_1ϕ^b_2ϕ^c) +(ψ_j)ψ_cϕ^b_1ϕ^cϕ^b_2) +(ψ_j)ψ_cϕ^cϕ^b_1ϕ^b_2)] -36 u^ai v^j_a ϵ_b_1b_2(i [(ψ_j)ϕ^b_1ψ_cϕ^b_2ϕ^c) +(ψ_j)ϕ^b_1ψ_cϕ^cϕ^b_2) +(ψ_j)ϕ^cψ_cϕ^b_1ϕ^b_2)] -36 u^ai v^j_a ϵ_b_1b_2(i [(ψ_j)ϕ^b_1ϕ^b_2ψ_cϕ^c) +(ψ_j)ϕ^b_1ϕ^cψ_cϕ^b_2) +(ψ_j)ϕ^cϕ^b_1ψ_cϕ^b_2)] -36 u^ai v^j_a ϵ_b_1b_2(i [(ψ_j)ϕ^b_1ϕ^b_2ϕ^cψ_c) +(ψ_j)ϕ^b_1ϕ^cϕ^b_2ψ_c) +(ψ_j)ϕ^cϕ^b_1ϕ^b_2ψ_c)] +12 u^ai v^j_a ϵ_b_1b_2(i [5(ψ_j)ϕ^b_1ϕ^b_2)(ψ_cϕ^c) +2(ψ_j)ϕ^(b_1ϕ^c))(ψ_cϕ^b_2) -2 (ψ_j)ϕ^b_2)(ψ_cϕ^(b_1ϕ^c))] .

The scaling dimension of this cohomology O is E = 3R+2J = 21/2. Note that the representative found above does not contain the letter f.

iv) (R,J) = (3, 1)

The operators in this sector are made of 10 letters. Allowed (n_ϕ, n_ψ, n_f) are (9,0,1), (8,2,0). We find 6 Q-closed operators in this sector given by

O^(0,2)_1 ≡ -ϵ_a_1a_2i u^a_1 b u^jk v^a_2_b (r_10^(0,1))^i_jk +2ϵ_a_1a_2 i u^a_1 b u^a_2 (j v^k)_b (r_10^(0,1))^i_jk ,
O^(0,2)_2 ≡ -6ϵ_a_1a_2i u^a_1b(j v^k)a_2_b (r_10^(0,1))^i_jk -ϵ_a_1a_2(i u^a_1(k v^l)a_2_j) (r_12^(0,1))^ij_kl ,
O^(0,2)_3 ≡ -ϵ_a_1a_2i u^a_1 b u^jk v^a_2_b (r_10^(0,1))^i_jk -ϵ_a_1a_2(i u^a_1kl v^a_2_j) (r_12^(0,1))^ij_kl ,
O^(0,2)_4 ≡ -ϵ_a_1a_2i u^a_1 b u^jk v^a_2_b (r_10^(0,1))^i_jk +ϵ_a_1a_2(i ϵ_j)b_1b_2 u^a_1b_1 u^a_2b_2 u^kl (r_12^(0,2))^ij_kl ,
O^(0,2)_5 ≡ -4ϵ_a_1a_2i u^a_1 b u^jk v^a_2_b (r_10^(0,1))^i_jk -24ϵ_a_1a_2i u^a_1b(j v^k)a_2_b (r_10^(0,1))^i_jk -ϵ_a_1a_2(i ϵ_j)b_1b_2 u^a_1b_1 u^a_2b_2k (r_14^(0,2))^ij_k ,
O^(0,2)_6 ≡ -ϵ_a_1a_2i u^a_1 b u^jk v^a_2_b (r_10^(0,1))^i_jk +12ϵ_a_1a_2i u^a_1b(j v^k)a_2_b (r_10^(0,1))^i_jk +3ϵ_a_1a_2i u^a_1(j u^kl)a_2 (r_14^(0,2))^i_jkl .

All the operators in (<ref>) are Q-exact.

v) (R,J) = (7/2, 1/2)

The operators in this sector are made of 11 letters. The allowed (n_ϕ, n_ψ, n_f) is (10,1,0). We find 1 Q-closed operator in this sector given by

O^(0,1) ≡ 36 ϵ_a_1a_2a_3 ϵ_b_1b_2 i u^a_1b_1 u^a_2b_2 u^a_3jk (r_10^(0,1))^i_jk +5 ϵ_a_1 a_2 a_3 ϵ_b_1 b_2 b_3 u^a_1 b_1 u^a_2 b_2 u^a_3 b_3 r_12^(0,1) -6 ϵ_a_1a_2(i ϵ_j)b_1b_2 u^a_1b_1 u^a_2b_2 u^kl (r_12^(0,1))^ij_kl .

The operator (<ref>) is Q-exact.

In summary, we have found 1 fermionic black hole cohomology using our ansatz, which is a singlet under SU(3) ⊂ SU(4)_R, at j=24, whose representative is given by (<ref>). Its charges and scaling dimension are given by (R,J,E) = (5/2, 3/2, 21/2).

§.§ Q-exactness checks and ansatz-independent studies

In this subsection, we sketch how to determine the Q-exactness of the various Q-closed operators introduced in the previous subsection. We also show that (<ref>) is the only non-graviton cohomology at j=24 in the SU(3) singlet sector most generally, without imposing the ansatz.

To check whether a given operator is Q-exact or not, and especially to check non-Q-exactness, one has to rule out all possible ways of writing the operator as Q of `something'. That is, one needs to construct all possible operators that can participate in `something' (the meaning of which will be made clear shortly) and show that the target operator is linearly independent of their Q-actions. More specifically, we divide the check of Q-exactness into 4 steps, which we summarize as follows (a schematic implementation is sketched after this list).

* Construct all gauge-invariant operators whose Q-action may participate in reproducing the target.
* Count the number of linearly independent operators from step 1, and extract a maximal subset of linearly independent operators. This is called the basis.
* Act with Q on the basis operators, then again count and extract a maximal subset of linearly independent ones among them.
* Check if the target is linearly independent of the result of step 3.
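Schematically (our sketch; the flattening of operators into coefficient row vectors is described below), steps 2-4 are exact rank computations:

```python
# Steps 2-4 as exact rank computations (assuming sympy); each gauge-invariant
# operator is assumed to be flattened into a row vector of coefficients.
import sympy as sp

def num_independent(rows):
    """Steps 2 and 3: count linearly independent operators."""
    return sp.Matrix(rows).rank() if rows else 0

def is_q_exact(target_row, q_basis_rows):
    """Step 4: the target is Q-exact iff it lies in the span of the Q-actions
    of the basis, i.e. iff appending it does not increase the rank."""
    return num_independent(q_basis_rows + [target_row]) == num_independent(q_basis_rows)
```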
Now we explain what operators `may participate in reproducing the target' in step 1. This consists of two criteria: the charges and the parity under permutations.

First, the charges of the target operator constrain the charges, and thus the letter contents, of the basis operators. Note that the action of Q increases each R_I (I=1,2,3) by 1/2 and decreases J = J_1 = J_2 by 1/2. Therefore, the basis operators must have the set of charges that differs by the corresponding amount from the target; otherwise their Q-actions are disjoint from the target. Note that all of our targets are SU(3) singlets, so we always have R = R_1 = R_2 = R_3.

Second, the fact that all of our targets are singlets under the SU(3) subgroup of the SU(4) R-symmetry group imposes a stronger constraint than just restricting to the charge sectors with R_1=R_2=R_3. Each basis operator must be invariant under the cyclic permutation ϕ^i→ϕ^i+1 and simultaneously ψ_i→ψ_i+1, where i=1,2,3 mod 3. Moreover, if there is an even/odd number of ϕ's and ψ's combined, each of which carries one SU(3) index, an even/odd number of Levi-Civita symbols is required to write the operator covariantly while contracting all indices. Therefore, we may restrict to i) operators with an even number of ϕ's and ψ's combined, that are even under all 3! permutations of SU(3) indices, and ii) operators with an odd number of ϕ's and ψ's combined, that are even under even (cyclic) permutations of SU(3) indices and odd under odd (swap) permutations of SU(3) indices. Also note that this permutation property commutes with the action of Q, so that Q of a non-trivial operator satisfies this property if and only if the original operator does. This permutation property is necessary but not sufficient for an operator to be an SU(3) singlet. However, we impose this property on the basis instead of requiring SU(3) singlets, because the latter requires many sums over dummy indices, and thus the former is computationally more efficient. Our conclusions on the singlet sector will be valid nevertheless.

For example, suppose that the target operator is (<ref>), which has charges (R,J) = (5/2, 3/2). Operators whose Q-action may reproduce this target operator must then have (R,J) = (2,2). Possible choices of letter contents are (n_ϕ, n_ψ, n_f) = (6,0,2), (5,2,1), or (4,4,0), and the number of ϕ^i minus the number of ψ_i must be equal for i=1,2,3. Further taking into account the permutation property, the basis operators whose Q-action `may participate in reproducing the target' (<ref>) can be classified into the following 7 subsectors.
((-1)^ϵ in subsectors 5 and 6 indicates a minus sign for odd permutations, because there is an odd number of ϕ's and ψ's in those subsectors.)

* Subsector 1: (ϕ^1)^4(ψ_1)^4 + (permutations)
* Subsector 2: (ϕ^1)^3(ϕ^2)^1(ψ_1)^3(ψ_2)^1 + (permutations)
* Subsector 3: (ϕ^1)^2(ϕ^2)^2(ψ_1)^2(ψ_2)^2 + (permutations)
* Subsector 4: (ϕ^1)^2(ϕ^2)^1(ϕ^3)^1(ψ_1)^2(ψ_2)^1(ψ_3)^1 + (permutations)
* Subsector 5: (ϕ^1)^3(ϕ^2)^1(ϕ^3)^1(ψ_1)^2f^1 + (-1)^ϵ (permutations)
* Subsector 6: (ϕ^1)^2(ϕ^2)^2(ϕ^3)^1(ψ_1)^1(ψ_2)^1f^1 + (-1)^ϵ (permutations)
* Subsector 7: (ϕ^1)^2(ϕ^2)^2(ϕ^3)^2f^2 + (permutations)

Appropriate sums over permutations of single- and multi-trace operators in each of these subsectors are the result of step 1, some of which we write down below to help visualize:

tr(ϕ^1ϕ^1ψ_1ϕ^1ψ_1ψ_1ϕ^1ψ_1) + tr(ϕ^2ϕ^2ψ_2ϕ^2ψ_2ψ_2ϕ^2ψ_2) + tr(ϕ^3ϕ^3ψ_3ϕ^3ψ_3ψ_3ϕ^3ψ_3) ,

tr(ϕ^1ϕ^1ϕ^2ψ_2) tr(ψ_1ψ_2) tr(ϕ^2ψ_1) + tr(ϕ^2ϕ^2ϕ^3ψ_3) tr(ψ_2ψ_3) tr(ϕ^3ψ_2) + tr(ϕ^3ϕ^3ϕ^1ψ_1) tr(ψ_3ψ_1) tr(ϕ^1ψ_3) + tr(ϕ^3ϕ^3ϕ^2ψ_2) tr(ψ_3ψ_2) tr(ϕ^2ψ_3) + tr(ϕ^1ϕ^1ϕ^3ψ_3) tr(ψ_1ψ_3) tr(ϕ^3ψ_1) + tr(ϕ^2ϕ^2ϕ^1ψ_1) tr(ψ_2ψ_1) tr(ϕ^1ψ_2) ,

tr(ϕ^2ϕ^2ψ_1ψ_2ϕ^3) tr(fϕ^1ϕ^1) + tr(ϕ^3ϕ^3ψ_2ψ_3ϕ^1) tr(fϕ^2ϕ^2) + tr(ϕ^1ϕ^1ψ_3ψ_1ϕ^2) tr(fϕ^3ϕ^3) - tr(ϕ^3ϕ^3ψ_1ψ_3ϕ^2) tr(fϕ^1ϕ^1) - tr(ϕ^1ϕ^1ψ_2ψ_1ϕ^3) tr(fϕ^2ϕ^2) - tr(ϕ^2ϕ^2ψ_3ψ_2ϕ^1) tr(fϕ^3ϕ^3) .

Given the operators from step 1, the rest is relatively straightforward, at least conceptually. There are non-trivial trace relations between the operators from step 1, so in step 2 we extract linearly independent basis operators. Then in step 3, we consider the Q-actions of the basis operators, and again count the number of linearly independent ones among them. These should form a complete basis of all Q-exact operators in the target charge sector with the aforementioned permutation property. Therefore, the target operator is Q-exact if and only if it is a linear combination of the Q-actions of the basis operators. More generally, if there are multiple target operators, the number of cohomologies among them equals the number of linearly independent ones among the basis and all target operators, minus the number of linearly independent ones among the basis only.

Each of steps 2-4 involves counting and/or finding linearly independent operators among a given set of gauge-invariant operators. Each operator is a sum over single- and multi-trace operators written in terms of seven species of fields ϕ^m, ψ_m and f. To completely account for the trace relations between them, we first convert the operators written in terms of adjoint fields into polynomials of their matrix elements, by substituting

f = [ f_1 f_2 f_3 ; f_4 f_5 f_6 ; f_7 f_8 -f_1-f_5 ] ,

and similarly for the 6 other fields. In this way, every operator is written as a polynomial of 8 × 7 = 56 variables, 24 of which are Grassmannian. So the problem boils down to finding linear dependences between a set of polynomials. Although this is the same problem that was encountered while computing the graviton index in section 3, the same method of extracting the coefficient matrix is extremely impractical here. This is because there are four times as many variables (recall that for counting gravitons, we substituted each field with a diagonal matrix), and therefore an exponentially larger number of monomials appears in the polynomials. As a result, the coefficient matrix would have a huge number of columns, which is not viable for computers.

For this reason, we have devised a numerics-assisted approach to find linear dependences between polynomials with a large number of variables.
The approach stems from the basic fact that if some linear combination of certain polynomials vanishes, it must also be zero when we assign any specific numbers to the variables. So let us represent each polynomial by an array of numbers, i.e. a row vector, obtained by substituting each variable with a set of randomly chosen integers. Then we examine the linear dependence between vectors, instead of polynomials. The substitution can be repeated for arbitrarily many sets of integers, so the row vector can be made arbitrarily long.

Obviously, the length of the row vectors, i.e. the number of columns, must be at least the number of independent polynomials. Otherwise, it will always be possible to find a relation between the row vectors even if the polynomials they represent are independent. On the other hand, the length of the row vectors need not be much more than the number of independent polynomials, as we will explain shortly. This makes it clear why this method is efficient. It naturally realizes the basic principle that in order to distinguish n different entities, one needs at least n data for each entity, whereas extracting the coefficient matrix for polynomials with so many variables would equivalently convert each polynomial into an unnecessarily long vector.

In step 3, basis operators in different sectors, as classified by their refined gluon numbers, cannot have linear relations between them because they can have no monomials in common. Therefore, in this step the set of linearly independent basis operators can be found within each sector, and this allows one to use a smaller number of columns, reducing the computational load. On the other hand, after Q has acted on the basis, the Q-exact operators can have terms with different refined gluon numbers, and polynomials from different sectors mix. Therefore, in step 4 (and in the final comparison with the target operators) one needs to collect polynomials from all sectors, and this requires a larger number of columns. Moreover, Q acting on a basis operator turns it into a sum over multiple single- or multi-trace operators, so the polynomials become longer and it takes longer to substitute variables with numbers. This is why it is helpful to reduce the number of basis operators as much as possible in step 3, although step 3 can be skipped in principle.

There are two issues with this approach that we need to address. The first is that 24 of the 56 variables are Grassmannian, and cannot be properly substituted with c-numbers. The second is that randomness is involved in this approach, and it may lead to errors, albeit with small probability.

The issue with Grassmann variables can be easily addressed by ordering them in a definite manner within each monomial. That is, we fully expand each polynomial (which includes eliminating squares of Grassmann variables), and let variables be multiplied only in a certain order within each monomial. During this process the coefficients may flip signs, but the result of this process is unique for each polynomial. Once we have done this, none of the Grassmann properties will be used when finding linear relations between the polynomials, because each monomial is now compared verbatim with monomials in other polynomials. Therefore, it is now safe to substitute Grassmann variables with c-numbers. This principle was also implicitly used while extracting the coefficient matrix of graviton operators in section 3.

As for the randomness, first note that substituting (sufficiently many sets of) random integers never misses a true dependence between polynomials.
If there is a true linear dependence between polynomials, i.e. a linear combination that vanishes, the same linear combination must be zero for whatever numbers are put in, so the row vectors corresponding to the polynomials must be linearly dependent. Note that all polynomials have rational coefficients and we put in random integers, so there is no issue with machine precision.

However, the converse is possible: this method may find a false linear dependence between polynomials. This is simply because a non-vanishing polynomial may evaluate to zero when certain values are put into the variables. That is, the randomly chosen values could miraculously be roots of the polynomial. This type of error can be made arbitrarily more unlikely by increasing the number of columns, i.e. the number of sets of random integers that are put in. Let us roughly estimate the unlikelihood.

Suppose that the number of columns is m+n, where m is the true number of independent polynomials. For this method to find a false dependence, both of the following must happen: i) there exists a non-trivial linear combination of the polynomials that vanishes for the first m sets of random integers, and ii) this polynomial further vanishes for the additional n sets of random integers. The probability of i) is relatively difficult to estimate, since it involves an intricate tuning of the m-1 coefficients in a linear combination of the polynomials. Therefore we only estimate the probability of ii), as follows. A typical basis polynomial, such as the Q-action of those in (<ref>),[These are used in steps 3 and 4 of determining the Q-exactness of the Q-closed operators in the charge sector (R,J)=(5/2,3/2), of which one is the non-graviton cohomology (<ref>).] evaluates to ∼ 10^28 when a random integer between 1 and 1000 is substituted into each variable. (See Fig. <ref> for an example.) This is a natural scale considering that the typical polynomial is a sum over ∼ 10^6 monomials (with both signs), each of which consists of 9 letters, so for example 10^6 × (10^2.5)^9 ∼ 10^28. This value is far smaller than the number of all possible random choices, which is (10^3)^56 if all 7 gluons, thus 7 × (3^2-1) = 56 variables, are involved, so each integer value within magnitude ∼ 10^28 will be sufficiently populated. Furthermore, since a typical polynomial consists of many (∼ 10^6) monomials, we assume that the evaluation of the polynomial is like a random walk with sufficiently many iterations, so that the factorization property of integers is blurred. For these reasons, let us assume that the distribution of the evaluated values is continuous. Then the probability that this value falls within O(1) is estimated to be ∼ 10^-28, even accounting for the shape of the distribution. For ii), this must happen for n independent sets of random variables, so the probability of ii) is estimated to be 10^-28n. In step 3, n was taken to be 175, so the estimated probability of ii) is 10^-4900.

This method of detecting linear dependence was used on numerous sets of polynomials while determining the Q-exactness of various operators in different charge sectors. The numbers that appeared in the previous paragraph differ slightly between occasions. Typical values of the polynomials differ because they consist of different numbers of letters, and n is inevitably different because the number of columns is set before we know the number of truly independent polynomials. However, in any case, we use at least n ≥ 30, and the estimated probability of ii) has an order of magnitude of a few negative hundreds at worst.
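A minimal sketch of the randomized test (ours; toy polynomials, and exact integer arithmetic via sympy so that machine precision plays no role):

```python
# Represent each polynomial by its values at random integer points (a row
# vector), then detect linear dependence by an exact rank computation.
import random
import sympy as sp

zs = sp.symbols('z1:8')                 # 7 toy variables (the real problem has 56)
p1 = zs[0]*zs[1] + zs[2]**2
p2 = zs[3]*zs[4] - zs[5]*zs[6]
polys = [p1, p2, 2*p1 - 3*p2]           # the third is a true linear combination

def evaluation_matrix(polys, n_cols):
    points = [{z: random.randint(1, 1000) for z in zs} for _ in range(n_cols)]
    return sp.Matrix([[p.subs(pt) for pt in points] for p in polys])

# columns = (number of polynomials) + a safety margin, cf. n >= 30 in the text
M = evaluation_matrix(polys, n_cols=len(polys) + 5)
print('independent polynomials (with overwhelming probability):', M.rank())  # -> 2
```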
Furthermore, when a Q-closed operator is determined to be Q-exact, we checked the relation between the target and basis polynomials analytically, to further eradicate the margin for error.

Employing the method explained so far, we have constructed the basis operators in each and every charge sector with R_1 = R_2 = R_3 at the j=24 order, with the aforementioned permutation property. We have also evaluated the Q-actions of the bases, which should form the basis of Q-exact operators. Then we have determined the Q-exactness of all Q-closed non-graviton operators obtained from our ansatz in the previous subsection. The result is that all operators except for the fermionic (<ref>) are Q-exact.

From the fact that we have constructed and counted all operators and their Q-actions in the R_1 = R_2 = R_3 charge sectors at the j=24 order with the permutation property, we can also prove the non-existence of any other SU(3) singlet non-graviton cohomology at the j=24 order. Recall that the result of step 2 is a complete basis of all operators in a given charge sector (R,J) with the permutation property, which is designed to include all SU(3) singlets. There are further linear relations between the Q-actions of these basis operators, reducing the number of independent Q-exact operators in the charge sector (R+1/2, J-1/2) in step 3. The reduced operators correspond to the Q-closed operators in the charge sector (R,J):

(#closed)_(R,J) = (#basis)_(R,J) - (#exact)_(R+1/2,J-1/2) .

Then the number of Q-cohomologies is given by

(#coh.)_(R,J) = (#closed)_(R,J) - (#exact)_(R,J) .

Meanwhile, we can also count the number of independent graviton cohomologies in these charge sectors and with the same permutation property, like we counted the full set of gravitons in subsection 3.1. The number of non-graviton cohomologies is given by

(#BH coh.)_(R,J) = (#coh.)_(R,J) - (#gravitons)_(R,J) .

We present all the numbers mentioned in this paragraph in Table <ref>. We find only one non-graviton cohomology, in the (R,J) = (5/2, 3/2) sector, which is the fermionic cohomology presented in (<ref>). Since the operators with the permutation property in the R_1=R_2=R_3 charge sectors include all SU(3) singlets, we conclude that (<ref>) is the only non-graviton cohomology that is an SU(3) singlet at order j=24.

The computation presented in this subsection, of constructing the basis operators and counting the independent ones among them and their Q-actions, is essentially the sort of computation that was performed in <cit.>, although we find our numerics-assisted approach to be more efficient. Moreover, we have only performed this computation in the R_1=R_2=R_3 charge sectors at the j=24 order in the BMN sector, further restricted to operators with a certain permutation property. This is because we focused on the SU(3) singlet sector at order j=24, where the non-graviton index indicated the existence of a non-graviton cohomology.

§ FUTURE DIRECTIONS

In this paper we studied the Q-cohomologies of 4d maximal super-Yang-Mills theory for the local BPS operators at the 1-loop level. We detected new non-graviton cohomologies from the index for the SU(3) and SU(4) gauge theories, and constructed the apparent SU(3) threshold cohomology. A goal of this program is to identify and characterize the microstates of BPS black holes in the SU(N) theory with parametrically large N. Although we are currently very far from this goal, several novel structures were observed for the SU(3) and SU(4) theories which we hope will shed light on large N black hole physics.
Constructing new cohomologies (of non-graviton type, or black hole type) requires us to find operators which become Q-closed only after using the trace relations of finite-sized matrices. Although the basic principle behind the trace relations should be simple (e.g. repeated uses of the Cayley-Hamilton identities), it is hard in general to analytically generate useful trace relations. In this paper, we have developed a semi-systematic way of constructing Q-closed operators using certain trace relations. Within this framework, the main technical bottleneck is proving that a constructed Q-closed operator is not Q-exact. This demands that we check whether the target Q-closed operators are Q-exact or not after using all possible trace relations. We developed numerics-assisted computational strategies to check this on a computer.

In <cit.>, non-Q-exactness was easy to show for a particular class of operators in the BMN sector. Namely, if a BMN operator contains a term without any scalars, this operator cannot be Q-exact. This is because Q acting on the BMN fields always generates one or more scalars, so that Q cannot generate a term without scalars. An infinite number of cohomologies with this property was found in <cit.>, whose checks of Q-nonexactness were trivial. These cohomologies are beyond our ansatz in this paper. This implies that our ansatz is far from sufficient to generate all the non-graviton cohomologies, even within the BMN sector. It would be highly desirable, if possible, to combine the analytic insights learned in <cit.> and in this paper.

Merely knowing the index Z - Z_grav over non-graviton cohomologies is very useful for learning their novel spectral structures. Section 3.1 has extensively discussed the SU(3) non-graviton index up to charge j=54, finding hints of novel partial no-hair behaviors and of the tower structures. Technically, the full BMN index Z is relatively easy to compute at not too large N, by computing the residue sum for the integral formula (<ref>). The harder part is to compute the finite N graviton index Z_grav by taking into account the trace relations. This counting problem reduces to counting independent polynomials of 4(N-1) bosonic and 3(N-1) fermionic variables subject to constraints. In principle this counting problem can be solved completely by knowing the so-called Gröbner basis of the constraints. In practice, constructing the Gröbner bases is very cumbersome, even on a computer. We have partially obtained the Gröbner bases for the SU(3) BMN gravitons in the subsector not containing the letter f. To expand these results to the full BMN sector including all letters, we performed rather brute-force computations on a computer, order by order in the charges. For SU(4), the uses of the Gröbner bases were more limited, to obtain the results of section 4.2. It will be very desirable to compute the full Z_grav, and thus Z - Z_grav, exactly. There are several features that we would like to check with these exact results. For instance, many towers of states were observed in Z - Z_grav in the SU(3) theory till j=54, and to us it is unclear whether these towers continue indefinitely or not.

One may wonder if the analysis of the spectrum will be simpler at large N, ignoring 1/N corrections in the computations. However, whether a state is BPS or not is an exact property, so that 1/N corrections may affect the answer. Nevertheless, there should be large N simplifications for studying the near-BPS operators with small anomalous dimensions, say, at order 1/S ∼ 1/N^2 <cit.>. Some studies in this direction were made in <cit.>.
Acknowledgements

We thank Minkyoo Kim, Eunwoo Lee, Masaki Shigemori and especially Shiraz Minwalla for helpful discussions and comments. We also thank Goojin Kwon for independently deriving some results in section 2.1. This work is supported in part by the NRF grant 2021R1A2C2012350 (JC, SK, JL), a KIAS Individual Grant PG081602 at Korea Institute for Advanced Study (SC), the DoE grant DE-SC0007859 (SL) and a Rackham Predoctoral Fellowship (SL). This research was supported in part through computational resources and services provided by Advanced Research Computing (ARC), a division of Information and Technology Services (ITS) at the University of Michigan, Ann Arbor.

§ GRÖBNER BASIS

Let us consider a set of polynomials that is closed under addition of its elements and under multiplication by arbitrary polynomials.[For a more mathematical introduction to Gröbner bases and their applications to counting problems subject to constraints, see chapters 2, 3 and 7 of <cit.>.] Here, some of the variables of the polynomials may be Grassmann odd. This kind of set is called a (two-sided) `ideal' in mathematics. For example, the following set of polynomials is an ideal:

J = { p_1(x^2-u) + p_2(y^2-v) + p_3(xy-w) } ,

where p_1, p_2, p_3 are arbitrary polynomials in the (commuting) variables (x,y,u,v,w). One can easily see that this set is closed under addition, and under multiplication by any polynomials (from left and right). Here, {x^2-u, y^2-v, xy-w} is a `basis' (or, more often, a set of `generators') of the set J, because any element of J can be written as a linear combination, with polynomial coefficients, of these polynomials.

Being a `Gröbner basis' requires a little more than being just a basis. Any `leading term' of the ideal must be divisible by some leading term of the Gröbner basis. This is the defining property of a Gröbner basis, and this simple property leads to many powerful consequences, as we will see later in this appendix. Before that, we must explain a few things about the term `leading term'.

A `leading term' of a polynomial is defined after we pick a `monomial ordering'. A monomial ordering is literally an ordering among monomials. There are a few examples of orderings that are mostly used in the literature. Let us explain one of them, which we will use throughout this appendix for illustration. It is called the `lexicographic ordering' (lex order for short). If our variables are x,y,u,v,w as in the example above, in lex order with x>y>u>v>w, whenever a `larger' variable appears with a greater power, that monomial is `larger'. For example, x^2 > xy > xv^10 > x > y^100 > y. The total degree of a monomial is not important in lex order. This is an example of an `elimination ordering', which means that whenever a variable that we want to `eliminate' appears with a greater power in a monomial, that monomial is the larger one. As we will see later, for our purposes it is important that we use an elimination ordering. But it is well known in the literature that computing a Gröbner basis is often much harder with an elimination ordering. (This is one of the reasons why our computation becomes difficult very quickly.) With a fixed ordering, obviously, the leading term is the `largest' term among all the terms of a polynomial.

Now that we know what a leading term means, let us look at an example of a Gröbner basis. In lex order with x>y>u>v>w, a Gröbner basis[A Gröbner basis is not uniquely determined from its definition, but the `reduced Gröbner basis' is unique <cit.>. This issue is not important for our purposes.] of J is given by

G = { uv-w^2, y^2-v, xw-yu, xv-yw, xy-w, x^2-u } .

In other words, any leading term of J is divisible by one of {uv, y^2, xw, xv, xy, x^2}, which are the leading terms of G. But how can one be sure that this is true? Here, one can resort to Buchberger's criterion, which is equivalent to the definition of a Gröbner basis. It is stated as follows. Pick an arbitrary pair (p_1, p_2) of elements from a basis, and cancel their leading terms against each other to produce a new polynomial in J, in the following way. Let us say p_1 = xv-yw, p_2 = xy-w for illustration:

y p_1 - v p_2 = -y^2 w + vw .

In other words, multiply each polynomial by an appropriate monomial such that their leading terms become their least common multiple, and then subtract one from the other. The basis is a Gröbner basis if and only if every such `S-polynomial' reduces to zero upon division by the basis.
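This toy example is small enough to check directly; assuming sympy is available, the reduced Gröbner basis in the same lex order reproduces G up to the ordering and signs of the generators:

```python
# Computational check (assuming sympy) of the Groebner basis quoted above.
import sympy as sp

x, y, u, v, w = sp.symbols('x y u v w')
G = sp.groebner([x**2 - u, y**2 - v, x*y - w], x, y, u, v, w, order='lex')
print(G.exprs)
# expected, up to ordering and signs:
#   [x**2 - u, x*y - w, x*v - w*y, x*w - u*y, y**2 - v, u*v - w**2]
```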
§ GRAVITON TRACE RELATIONS

In this appendix, we first list the trace relations between the graviton cohomologies in the BMN sector of the SU(3) theory. Then we construct the relations of relations at j=24 and j=30 which are singlets under the SU(3) ⊂ SU(4)_R global symmetry. These are the two sectors in which the index predicted fermionic cohomologies in the SU(3) singlet sector. The results at j=24 are used in section 4.1 to construct the threshold cohomology. The results at j=30 provide ansätze for the Q-closed operators, whose Q-exactness is not checked in this paper.

The trace relations are the linear dependences between the multi-trace operators, up to Q-exact operators, due to the finite size of the matrices. In this appendix, we shall only consider the trace relations between gravitons. Let us first arrange the trace relations by their level j and distinguish them into two types: fundamental ones and the others. The fundamental trace relations at level j cannot be written as linear combinations of the trace relations at lower levels j' (< j) multiplied by gravitons at level j-j'. All trace relations of gravitons can be expressed as linear combinations of the fundamental trace relations, with the coefficients being graviton cohomologies. We explicitly constructed the fundamental trace relations up to certain levels, which will be presented below.

The single-trace generators of the SU(3) BMN gravitons are given by

u^ij ≡ tr(ϕ^(iϕ^j)) ,  u^ijk ≡ tr(ϕ^(iϕ^jϕ^k)) ,
v^i_j ≡ tr(ϕ^i ψ_j) - 1/3 δ^i_j tr(ϕ^a ψ_a) ,  v^ij_k ≡ tr(ϕ^(iϕ^j)ψ_k) - 1/4 δ^i_k tr(ϕ^(jϕ^a)ψ_a) - 1/4 δ^j_k tr(ϕ^(iϕ^a)ψ_a) ,
w^i ≡ tr(f ϕ^i + 1/2 ϵ^ia_1a_2 ψ_a_1ψ_a_2) ,  w^ij ≡ tr(f ϕ^(iϕ^j) + ϵ^a_1a_2(i ϕ^j) ψ_a_1ψ_a_2) ,

where we suppressed the subscript of u_n, v_n, w_n since it can be easily read off from the number of indices. Note that the Q-actions on ϕ, ψ, f are given by

Q ϕ^m = 0 ,  Q ψ_m = -i/2 ϵ_mnp [ϕ^n,ϕ^p] ,  Q f = -i [ϕ^m,ψ_m] .

We would like to find the fundamental trace relations between (<ref>). It is helpful to start from the Gröbner basis for the trace relations. The Gröbner basis contains all fundamental trace relations. In general, the Gröbner basis also contains some non-fundamental trace relations. We shall obtain the fundamental trace relations from the Gröbner basis by induction.

At the lowest level of the trace relations, all of them are fundamental. Namely, all generators of the Gröbner basis at this level are fundamental relations. For the SU(3) theory, the lowest level is j=10. In order to organize them into covariant forms under the SU(3) global symmetry, we use the following computational strategy (which also proves useful at higher orders). We list the polynomials of (<ref>) which have the same representations as the lowest fundamental trace relations at j=10.
Among them, we should find particular linear combinations which vanish when all off-diagonal elements of ϕ^m, ψ_m, f are turned off, since the graviton trace relations vanish with diagonal fields. Once such combinations are identified, keeping ϕ,ψ,f general in these combinations will yield the Q-exact operators for the lowest fundamental trace relations. This way, we can find the fundamental trace relations at the lowest level.[There can be linear combinations which vanish even when the off-diagonal elements are turned on. In principle, they can also be trace relations, but most of them are just identities that hold at arbitrary N. In practice, we only find them as mesonic identities between (<ref>) rather than trace relations.]

Now, suppose that we have found all fundamental trace relations up to level j. We can construct the fundamental trace relations at j+2 as follows. We first construct all non-fundamental trace relations at level j+2 by multiplying the fundamental ones at levels up to j by suitable graviton cohomologies. Not all of them are linearly independent, so we extract a linearly independent set among them. This lets us compute the SU(3) character of the non-fundamental trace relations at level j+2. Next, we consider the union of these non-fundamental trace relations and the Gröbner basis elements at level j+2. Note that the Gröbner basis will contain all fundamental trace relations and some non-fundamental ones. We extract a linearly independent set from this union, which contains all fundamental and non-fundamental relations, and compute the SU(3) character over them as well. Finally, we subtract the former character from the latter, which yields the SU(3) character of the fundamental trace relations at level j+2. Then we list the multi-trace operators built from (<ref>) which can account for it, as before. Among them, we find particular linear combinations which vanish when all off-diagonal elements of ϕ,ψ, f are turned off, and which are linearly independent of the non-fundamental trace relations constructed above. The final results are the fundamental trace relations at level j+2. In this way, one can construct the fundamental trace relations inductively.

In principle, one can obtain all fundamental trace relations of gravitons from the above induction. For the SU(2) theory, this can be done easily: we found a 66-dimensional Gröbner basis, among which there are 48 fundamental trace relations. However, for the SU(3) theory, we could not do a similar calculation since the construction of the Gröbner basis is time-consuming. We constructed it only in two subsectors: (1) all trace relations between u_2, u_3, v_2, v_3, and (2) trace relations between u_2, u_3, v_2, v_3, w_2, w_3 until j ≤ 20. From subsector (1), which has 1170 generators, we obtained all fundamental trace relations between u_2, u_3, v_2, v_3, i.e. the relations which do not involve f's. There are in total 287 relations, whose lowest level is j=10 and highest level is j=30. On the other hand, from subsector (2), we could generate the fundamental trace relations involving f's until j ≤ 20. There are in total 130 relations involving f's between 14 ≤ j ≤ 20. These are enough to construct relations of relations at j=24.

Before presenting their explicit forms, we first explain our notation. When we write an operator in the irreducible representation 𝐑 under SU(3) ⊂ SU(4)_R as O^i_1i_2i_3..._j_1j_2j_3..., the actual form of such an operator should be understood as O^i_1i_2i_3..._j_1j_2j_3...
subtracted by its trace part to make it traceless, like [n,0] : O^i_1i_2i_3⋯ i_n→ O^i_1i_2i_3⋯ i_n ,[0,n] : O_i_1i_2i_3⋯ i_n→ O_i_1i_2i_3⋯ i_n , [1,1]: O^i_j → O^i_j - 1/3δ^i_j O^a_a , [2,1]: O^ij_k → O^ij_k -1/2δ^(i_kO^j)a_a ,[1,2]: O^i_jk→O^i_jk -1/2δ^i_(jO^a_k)a , [3,1]: O^ijk_l → O^ijk_l -3/5δ^(i_lO^jk)a_a,[1,3]: O^i_jkl→ O^i_jkl -3/5δ^i_(jO^a_kl)a , [2,2]: O^ij_kl→ O^ij_kl - 4/5δ^(i_(k O^j)a_l)a+ 1/10δ^(i_(kδ^j)_l) O^a_1a_2_a_1a_2 ,and so on. Here, [· ,· ] are the Dynkin labels for SU(3). Below, we list the explicit forms of the fundamental trace relations according to their level j and representation under SU(3) ⊂ SU(4)_R as t^j [R_1',R_2']. The relations which do not involve f's are given as follows:t^10 [1,2](u_2u_3): (R_10^(0,0))^i_jk = ϵ_a_1 a_2 (jϵ_k) b_1 b_2 u^a_1 b_1 u^i a_2 b_2t^12 [0,0](u_2u_2u_2) : R_12^(0,0) =ϵ_a_1 a_2 a_3ϵ_b_1 b_2 b_3 u^a_1 b_1u^a_2 b_2u^a_3 b_3t^12 [2,2](u_2u_2u_2,u_3u_3) : (R_12^(0,0))^ij_kl = ϵ_a_1 a_2 (kϵ_l) b_1 b_2( u^a_1 b_1 u^a_2 b_2 u^ij + 6 u^a_1 b_1 (i u^j) a_2 b_2)t^12 [0,3](u_2v_3): (R_12^(0,1))_ijk = ϵ_(i|a_1a_2ϵ_|j|b_1b_2 u^a_1b_1v^a_2 b_2_|k)t^12 [1,1](u_2v_3, u_3v_2) : (R_12^(0,1))^i_j =ϵ_j a_1 a_2( 4 u^a_1 bv^i a_2_b + 3u^i a_1bv^a_2_b) t^12 [2,2](u_2v_3, u_3v_2) : (R_12^(0,1))^ij_kl =ϵ_a_1 a_2 (k(u^a_1(iv^j)a_2_l)+ u^ij a_1v^a_2_l)) t^14 [1,0](u_2u_2v_2) : (R_14^(0,1))^i = ϵ_a_1a_2a_3 u^i a_1 u^b a_2v^a_3_bt^14 [0,2](u_2u_2v_2, u_3v_3) : (R_14^(0,1))_ij = ϵ_a_1a_2 (i|(ϵ_b_1b_2b_3 u^a_1b_1 u^a_2b_2v^b_3_|j) -2 ϵ_|j) b_1b_2u^a_1 b_1 cv^a_2b_2_c ) t^14 [2,1](u_2u_2v_2, u_3v_3) : (R_14^(0,1))^ij_k =ϵ_k a_1a_2(3 u^(a_1 b u^ij)v^a_2_b + 4u^a_1b u^a_2 (iv^j)_b +24 u^a_1 b (iv^j) a_2_b )t^14 [1,3](u_2u_2v_2, u_3v_3) : (R_14^(0,1))^i_jkl =ϵ_(j|a_1 a_2ϵ_|k| b_1 b_2(u^a_1 b_1 u^a_2 b_2v^i_|l) + 6 u^ia_1b_1v^a_2b_2_|l)) t^14 [3,2] (u_2u_2v_2, u_3v_3) : (R_14^(0,1))^ijk_lm = ϵ_a_1a_2(l( u^(a_1 i u^jk)v^a_2_m) + 6 u^a_1 (ijv^k)a_2_m))t^14 [1,3](v_2v_3) : (R_14^(0,2))^i_jkl = ϵ_a_1 a_2 (jv^a_1_kv^i a_2_l)t^16[0,1](u_2v_2v_2, v_3v_3):(R_16^(0,2))_i = ϵ_i a_1 a_2(12u^bcv^a_1_bv^a_2_c +13u^a_1 bv^a_2_cv^c_b + 12v^a_1b_cv^a_2c_b ) t^16[1,2] (u_2v_2v_2, v_3v_3): (R_16^(0,2))^i_jk =ϵ_a_1a_2(j(3u^ibv^a_1_k)v^a_2_b -7u^ia_1v^b_k)v^a_2_b + 6u^a_1bv^i_k)v^a_2_b+ 24v^a_1b_k)v^ia_2_b ) t^16[2,3] (u_2v_2v_2,v_3v_3): (R_16^(0,2))^ij_klm = ϵ_a_1a_2(k(u^a_1(iv^j)_lv^a_2_m)+ 3v^a_1(i_lv^j)a_2_m)) t^18[0,0](u_3v_2v_2): R_18^(0,2) = ϵ_a_1a_2a_3 u^a_1bcv^a_2_b v^a_3_c t^20[1,0](v_2v_2v_3) : (R_20^(0,3))^i = 2v^a_cv^b_av^ic_b - 3v^i_av^c_bv^ab_c t^22[2,0](u_2v_2v_2v_2) : (R_22^(0,3))^ij = u^ijv^a_bv^b_cv^c_a -3 u^a(iv^j)_bv^b_cv^c_a +3 u^abv^(i_av^j)_cv^c_b t^24[0,0] (u_2v_2v_2v_3) : R_24^(0,3) =ϵ_a_1a_2a_3u^a_1bv^a_2_bv^a_3c_dv^d_c t^26[1,0] (v_2v_2v_2v_3): (R_26^(0,4))^i= v^i_a v^a_bv^d_cv^bc_dt^30[0,0] (v_2v_2v_2v_2v_2) : R_30^(0,5) =v^a_bv^b_cv^c_dv^d_ev^e_a t^30[3,0] (v_2v_2v_2v_2v_2) : (R_30^(0,5))^ijk= ϵ^a_1a_2(iv^j_a_1v^k)_a_2v^b_cv^c_dv^d_b .Here, the superscripts of R denote (n_f, n_ψ) of the terms with maximal n_f in the trace relations and the subscripts denote their j. Their SU(3) representations can be read off from the number of upper and lower indices. The listed trace relations vanish up to Q-exact operators whose explicit form will be discussed below. As explained before, this is the exhaustive set of the fundamental trace relations of gravitons which do not involve f's. 
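As an aside, the Gröbner-basis machinery used to extract the relations above can be tested on the toy ideal J of the previous appendix with any computer algebra system. The following SymPy sketch is our own illustration, independent of the actual computations performed for the SU(3) trace relations; it reproduces the lex-order basis G quoted there.

```python
# Minimal SymPy check of the lex-order Groebner basis of the toy ideal
# J = < x^2 - u, y^2 - v, x*y - w > from the Groebner-basis appendix.
# This is only an illustration, not the setup used for the actual
# SU(3) trace-relation computations in this work.
import sympy as sp

x, y, u, v, w = sp.symbols('x y u v w')
gens = [x**2 - u, y**2 - v, x*y - w]

# 'lex' with the generators ordered x > y > u > v > w is an elimination ordering.
G = sp.groebner(gens, x, y, u, v, w, order='lex')

# The reduced basis consists (up to ordering) of exactly the six polynomials
# quoted in the text: x**2 - u, x*y - w, x*v - y*w, x*w - y*u, y**2 - v, u*v - w**2.
for g in G.exprs:
    print(g)
```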
The fundamental trace relations involving f's until j ≤ 20 are given byt^14[0,2](v_2v_3,u_2w_3) : (R_14^(1,0))_ij= ϵ_a_1a_2(i(8v^a_1 b_j)v^a_2_b + 5ϵ_j)b_1b_2u^a_1b_1w^a_2b_2) t^14[2,1](v_2v_3,u_2w_3,u_3w_2): (R_14^(1,0))^ij_k =2 v^(i_av^j)a_k -5v^a_kv^ij_a +3ϵ_ka_1a_2u^a_1(iw^j)a_2 + 3ϵ_ka_1a_2u^ija_1w^a_2t^16[0,1] (v_3v_3, u_2v_2v_2, u_2u_2w_2) :(R_16^(1,0))_i =ϵ_ia_1a_2(48 v^a_1b_1_b_2v^a_2b_2_b_1+ 9 u^b_1b_2v^a_1_b_1v^a_2_b_2- 13 ϵ_b_1b_2b_3 u^a_1b_1 u^a_2b_2 w^b_3) t^16[1,2] (v_3v_3, u_2v_2v_2, u_3w_3, u_2u_2w_2) : (R_16^(1,0))^i_jk=ϵ_a_1a_2(j|(24v^i a_1_b v^ba_2_|k) + 2u^i a_1v^a_2_b v^b_|k)- 6u^a_1 bv^a_2_b v^i_|k)..+6ϵ_|k)b_1b_2u^ia_1b_1 w^a_2b_2 +ϵ_|k)b_1b_2u^a_1b_1 u^a_2b_2 w^i )t^16[3,1] (v_3v_3, u_2v_2v_2, u_3w_3, u_2u_2w_2) : (R_16^(1,0))^ijk_l = 24 v^(ij_a v^k)a_l + 7 u^(ijv^k)_a v^a_l -6u^a(iv^j_a v^k)_l + 18 ϵ_la_1a_2 u^a_1(ij w^k)a_2 +3ϵ_la_1a_2 u^(ij u^k)a_1 w^a_2t^16[1,2](v_2w_3, v_3w_2) : (R_16^(1,1))^i_jk = ϵ_a_1a_2(j(v^a_1_k)w^a_2i + v^a_1i_k)w^a_2) t^18[0,0](v_2v_2v_2,u_2v_2w_2) : R_18^(1,1) = v^a_1_a_2v^a_2_a_3v^a_3_a_1 -3ϵ_a_1a_2a_3 u^a_1 bv^a_2_b w^a_3t^18[1,1](v_2v_2v_2,v_3w_3,u_2v_2w_2) :(R_18^(1,1))^i_j = 9v^i_a_1v^a_1_a_2v^a_2_j-24ϵ_ja_1a_2v^ia_1_b w^b a_2 -13 ϵ_ja_1a_2 u^ia_1v^a_2_b w^b -16ϵ_ja_1a_2 u^ibv^a_1_b w^a_2 +5ϵ_ja_1a_2 u^a_1bv^i_b w^a_2t^18[0,3](v_2v_2v_2,v_3w_3,u_2v_2w_2) : (R_18^(1,1))_ijk = ϵ_a_1a_2(i|(3v^a_1_|j|v^a_2_bv^b_|k)- 3ϵ_b_1b_2|jv^a_1b_1_k) w^a_2b_2-ϵ_b_1b_2|j u^a_1b_1v^a_2_k) w^b_2) t^18[2,2](v_2v_2v_2,v_3w_3,u_2v_2w_2) :(R_18^(1,1))_ij^kl = 2v^(i_av^j)_(kv^a_l) -6 ϵ_a_1a_2(kv^a_1 (i_l)w^j)a_2-ϵ_a_1a_2(k u^ijv^a_1_l)w^a_2t^20[0,2](v_2v_2w_2,u_2w_2w_2,w_3w_3): (R_20^(2,0))_ij = 2ϵ_a_1a_2(i|v^a_1_bv^b_|j) w^a_2 - 3 ϵ_a_1a_2a_3v^a_1_iv^a_2_jw^a_3 + ϵ_ia_1a_2ϵ_jb_1b_2u^a_1b_1w^a_2 w^b_2 + 3 ϵ_ia_1a_2ϵ_jb_1b_2w^a_1b_1w^a_2b_2 .The relations involving one f appear from j=14 and those involving two f's appear from j=20. We do not find any relations involving three f's until j ≤ 20.As explained before, the trace relations (<ref>), (<ref>) vanish up to Q-exact operators, which we now construct explicitly. In principle, one should first construct the complete basis of the Q-exact operators, which have the same level j and SU(3) representation with the target trace relation. (The Q-action does not change j and SU(3) representation.) However, in practice, we can make some ansätze for the Q-exact form to reduce the dimension of Q-exact basis. One of our working assumptions is that the maximal number of f's appearing before the Q-action is the same as that of the trace relation. There is a priori no reason to assume that but it turns out to be true for our examples. After imposing this assumption (and a couple of extra practical assumptions), we find a particular linear combination of the Q-exact operators in our basis for the target trace relation. In general, when we write R_I∼ Qr_I for a trace relationR_I, there exist ambiguities of r_I since we can add arbitrary Q-closed operatorsto r_I. We partly fix them by requiring r_I to vanish when ϕ,ψ,f are restrictedto diagonal matrices. We do not know whether such a requirement can be satisfied in general, but it does work for our examples. The purpose of this requirement will be explained later. The otherambiguities are fixed by hand to get compact expressions.Below, we list the operators r_j^(n_f,n_ψ) related to the fundamental trace relations R_j^(n_f,n_ψ-1) by i Q r_j^(n_f,n_ψ) = R_j^(n_f,n_ψ-1). We will not list all r_j^(n_f,n_ψ)'s, but only those which are used in section 4.1. 
For the relations without f's in (<ref>), we obtain(r_10^(0,1))^i_jk =-2 ϵ_a_1a_2(j( ϕ^a_1ϕ^a_2ϕ^iψ_k)) ,r_12^(0,1) = ϵ_a_1a_2a_3[ 6 ( ψ_b ϕ^a_1) ( ϕ^b ϕ^a_2ϕ^a_3) - ( ψ_b ϕ^a_1ϕ^a_2) ( ϕ^bϕ^a_3)] -3ϵ_a_1a_2a_3[ (ψ_b ϕ^b ϕ^a_1ϕ^a_2ϕ^a_3) +(ψ_b ϕ^a_1ϕ^b ϕ^a_2ϕ^a_3) +(ψ_b ϕ^a_1ϕ^a_2ϕ^bϕ^a_3) +(ψ_bϕ^a_1ϕ^a_2ϕ^a_3ϕ^b )],(r_12^(0,1))^ij_kl =-2ϵ_a_1a_2(k[ (ψ_l)ϕ^(iϕ^j)ϕ^a_1ϕ^a_2) +7(ψ_l)ϕ^(i|ϕ^a_1ϕ^|j)ϕ^a_2) ] ,(r_12^(0,2))_ijk = 1/2ϵ_a_1 a_2 (i( ϕ^a_1ψ_jϕ^a_2ψ_k)), (r_12^(0,2))^i_j =6 ( ϕ^(iϕ^a)ψ_(aψ_j)) - 5 (ϕ^[iψ_aϕ^a]ψ_j) , (r_12^(0,2))^ij_kl = (ϕ^(iϕ^j)ψ_(kψ_l)) , (r_14^(0,2))^i=3 ( ϕ^i ψ_a_1ϕ^a_1ϕ^a_2ψ_a_2) + 2 ( ϕ^i ϕ^a_1) ( ϕ^a_2ψ_(a_1ψ_a_2))-6 ( ϕ^i ψ_a_1) ( ϕ^[a_1ϕ^a_2]ψ_a_2) - (ϕ^i ψ_a_1ψ_a_2) (ϕ^a_1ϕ^a_2) , (r_14^(0,2))_ij= 5/9ϵ_a_1a_2a_3[2 (ψ_(iψ_j)ϕ^a_1ϕ^a_2ϕ^a_3) + (ψ_(iϕ^a_1ψ_j)ϕ^a_2ϕ^a_3)]+ϵ_a_1a_2(i[(ψ_j)ψ_a_3ϕ^a_1ϕ^a_2ϕ^a_3)+(ψ_j)ψ_a_3ϕ^a_1ϕ^a_3ϕ^a_2)+(ψ_j)ψ_a_3ϕ^a_3ϕ^a_1ϕ^a_2)]+ϵ_a_1a_2(i[(ψ_j)ϕ^a_1ψ_a_3ϕ^a_2ϕ^a_3)+(ψ_j)ϕ^a_1ψ_a_3ϕ^a_3ϕ^a_2)+(ψ_j)ϕ^a_3ψ_a_3ϕ^a_1ϕ^a_2)]+ϵ_a_1a_2(i[(ψ_j)ϕ^a_1ϕ^a_2ψ_a_3ϕ^a_3)+(ψ_j)ϕ^a_1ϕ^a_3ψ_a_3ϕ^a_2)+(ψ_j)ϕ^a_3ϕ^a_1ψ_a_3ϕ^a_2)]+ϵ_a_1a_2(i[(ψ_j)ϕ^a_1ϕ^a_2ϕ^a_3ψ_a_3)+(ψ_j)ϕ^a_1ϕ^a_3ϕ^a_2ψ_a_3)+(ψ_j)ϕ^a_3ϕ^a_1ϕ^a_2ψ_a_3)]-1/3ϵ_a_1a_2(i[5(ψ_j)ϕ^a_1ϕ^a_2)(ψ_a_3ϕ^a_3)+2(ψ_j)ϕ^(a_1ϕ^a_3))(ψ_a_3ϕ^a_2)-2 (ψ_j)ϕ^a_2)(ψ_a_3ϕ^(a_1ϕ^a_3)) ] , (r_14^(0,2))^ij_k=12(ϕ^(iϕ^aϕ^j)ψ_(aψ_k))+12(ϕ^(i|ϕ^aϕ^|j)ψ_(aψ_k)) +54 ( ϕ^(iϕ^jψ_(aϕ^a)ψ_k))-36 ( ϕ^(iϕ^j)ψ_(aϕ^a ψ_k)),(r_14^(0,2))^i_jkl= 2ϵ_a_1a_2(j[(ϕ^i ϕ^a_1ϕ^a_2ψ_k ψ_l)) +3(ϕ^i ϕ^a_1ψ_k ϕ^a_2ψ_l))-2(ϕ^iψ_k ϕ^a_1ϕ^a_2ψ_l))] ,(r_14^(0,3))^i_jkl=-1/2(ϕ^i ψ_(jψ_kψ_l)) , (r_16^(0,3))_i =39/4( ψ_i {ψ_b_1ψ_b_2, ϕ^b_1ϕ^b_2}) +2( ψ_i ψ_b_1ϕ^b_1ψ_b_2ϕ^b_2) - 61/4( ψ_i ψ_b_1ϕ^b_2ψ_b_2ϕ^b_1) + 97/4( ψ_iϕ^b_1ψ_b_1ψ_b_2ϕ^b_2) -41/4( ψ_iϕ^b_2ψ_b_1ψ_b_2ϕ^b_1) -5 ( ψ_iψ_b_1ϕ^b_1ϕ^b_2ψ_b_2)-25/2( ψ_iψ_b_1ϕ^b_2ϕ^b_1ψ_b_2)+2( ψ_iϕ^b_1ψ_b_1ϕ^b_2ψ_b_2)- 61/4( ψ_iϕ^b_2ψ_b_1ϕ^b_1ψ_b_2) - 11/4(ϕ^b_1ϕ^b_2) (ψ_i ψ_b_1ψ_b_2) - 27/2( ψ_b_1ψ_b_2) (ψ_i ϕ^b_1ϕ^b_2)+ 29/4(ϕ^b_2ψ_b_2) (ψ_i [ψ_b_1,ϕ^b_1] ) , (r_16^(0,3))^i_jk = 2 ( ψ_(jψ_k)ψ_b ϕ^b ϕ^i ) -4( ψ_(jψ_k)ψ_b ϕ^i ϕ^b ) - ( ψ_(j|ψ_b ψ_|k){ϕ^b ,ϕ^i}) -4 ( ψ_(jψ_k)ϕ^(bψ_bϕ^i)) +7( ψ_(j|{ψ_b, ϕ^b}ψ_|k)ϕ^i) -11( ψ_(j|{ψ_b, ϕ^i}ψ_|k)ϕ^b) -4( ψ_(jψ_k)ϕ^b ϕ^i ψ_b)+2( ψ_(jψ_k)ϕ^i ϕ^b ψ_b)+3 ( ψ_(j|ψ_b ) ( ψ_|k) [ϕ^b, ϕ^i] ) +6 ( ψ_(jϕ^[b) ( {ψ_k),ψ_b}ϕ^i]).For the relations involving f's in (<ref>), we find(r_14^(1,1))_ij = 5 ϵ_a_1a_2(i(f ϕ^a_1ψ_j)ϕ^a_2) + ( ϕ^a {ψ_a , ψ_(iψ_j)}) -4(ϕ^a ψ_(i|ψ_aψ_|j)), (r_14^(1,1))^ij_k = 3(fϕ^(iϕ^j)ψ_k) - 3(fψ_kϕ^(iϕ^j)) + ϵ^a_1 a_2 (i( ϕ^j)ψ_k ψ_a_1ψ_a_2) - ϵ^a_1 a_2 (i( ϕ^j)ψ_a_1ψ_a_2ψ_k ), (r_16^(1,1))_i = 13 ϵ_a_1a_2a_3(f ψ_i) (ϕ^a_1ϕ^a_2ϕ^a_3) + 10/3ϵ_a_1a_2a_3(f ϕ^a_1) (ψ_iϕ^a_2ϕ^a_3) + 10/3ϵ_a_1a_2a_3(f ϕ^a_1ϕ^a_2) (ψ_iϕ^a_3) +46 ϵ_ia_1a_2(f ϕ^b) (ψ_bϕ^a_1ϕ^a_2) - 7ϵ_ia_1a_2(f ϕ^a_1) (ψ_bϕ^a_2ϕ^b) -7ϵ_ia_1a_2(f ϕ^bϕ^a_1) (ψ_bϕ^a_2) +6ϵ_ia_1a_2(f ϕ^a_1ϕ^a_2) (ψ_bϕ^b) - 115/3ϵ_a_1a_2a_3(fψ_i ϕ^a_1ϕ^a_2ϕ^a_3) - 95/3ϵ_a_1a_2a_3(fϕ^a_1ψ_iϕ^a_2ϕ^a_3) +5 ϵ_a_1a_2a_3(fϕ^a_1ϕ^a_2ψ_i ϕ^a_3) +36 ϵ_ia_1a_2(fψ_bϕ^a_1ϕ^a_2ϕ^b) -43ϵ_ia_1a_2(fψ_bϕ^a_1ϕ^bϕ^a_2) +39ϵ_ia_1a_2(f ϕ^a_1ψ_bϕ^a_2ϕ^b) -68ϵ_ia_1a_2(f ϕ^a_1ϕ^a_2ψ_bϕ^b) + 39ϵ_ia_1a_2(f ϕ^a_1ϕ^bψ_bϕ^a_2)+ 13( ψ_i{ψ_b_1ψ_b_2,ϕ^b_1ϕ^b_2}) -31( ψ_i{ψ_b_1ψ_b_2,ϕ^b_2ϕ^b_1}) + 14( ψ_iψ_b_1ϕ^b_1ψ_b_2ϕ^b_2) -22( ψ_iψ_b_1ϕ^b_2ϕ^b_1ψ_b_2)+ 14( ψ_iϕ^b_1ψ_b_1ϕ^b_2ψ_b_2), (r_16^(1,1))^i_jk = ϵ_a_1a_2(j[-4 ( f ϕ^i) ( ψ_k)ϕ^a_1ϕ^a_2)- ( ϕ^i ϕ^a_2) ( f [ψ_k), ϕ^a_1]) ] +ϵ_a_1a_2(j[3 ( fϕ^a_1{ψ_k) ,ϕ^i}ϕ^a_2) +5 (f {ψ_k), ϕ^a_1ϕ^i ϕ^a_2}) -4 ( f ψ_k)ϕ^i ϕ^a_1ϕ^a_2)-4 ( f ϕ^a_1ϕ^a_2ϕ^i ψ_k))] +2( ψ_(jψ_k)ψ_b [ϕ^b, ϕ^i] ) -3 ( 
ψ_(j|ψ_b ψ_|k){ϕ^b, ϕ^i}) +6 ( ψ_(j|{ψ_b, ϕ^b}ψ_|k)ϕ^i) -9 (ψ_(j|{ψ_b, ϕ^i}ψ_|k)ϕ^b) -2( ψ_(jψ_k)[ϕ^b, ϕ^i] ψ_b ) + ( ψ_(j|ψ_b) ( ψ_|k) [ϕ^b, ϕ^i] ) + (ψ_(j|ϕ^b)({ψ_|k),ψ_b}ϕ^i) , (r_16^(1,2))^i_jk = -1/2(fϕ^iψ_(jψ_k))-1/2(fψ_(jϕ^i ψ_k))-1/2(fψ_(jψ_k)ϕ^i ) -1/4ϵ^ia_1a_2(ψ_a_1ψ_a_2ψ_(jψ_k)) , (r_18^(1,2))^i_j = -4(f ϕ^i ϕ^a)(ψ_jψ_a) -5 (f ϕ^a ϕ^i)(ψ_jψ_a) -53/2(f ϕ^i ψ_j)(ϕ^aψ_a) + 7(f ϕ^i ψ_a)(ϕ^aψ_j) + 15/2(f ϕ^a ψ_j)(ϕ^iψ_a) +12 (f ϕ^a ψ_a)(ϕ^iψ_j)+2(f ψ_jϕ^i )(ϕ^aψ_a) -13(f ψ_aϕ^i )(ϕ^aψ_j) +4(f ψ_jϕ^a )(ϕ^iψ_a) +6 (fψ_jψ_a)(ϕ^iϕ^a) + 13/2(fψ_aψ_j)(ϕ^iϕ^a) -4 (f ϕ^i )(ϕ^aψ_jψ_a) + 14(f ϕ^i )(ϕ^aψ_aψ_j)-8(f ϕ^a )(ϕ^iψ_jψ_a) -8 (f ϕ^a )(ϕ^iψ_aψ_j)-4 (f ψ_j )(ψ_aϕ^iϕ^a) -9(f ψ_a )(ψ_jϕ^iϕ^a) +6 (f ψ_a )(ψ_jϕ^aϕ^i)+3 (f ϕ^i ϕ^aψ_jψ_a) - 31/2(f ϕ^i ϕ^aψ_aψ_j) +3(f ϕ^a ϕ^iψ_jψ_a)+5/2(f ϕ^a ϕ^iψ_aψ_j) +12(f ϕ^i ψ_jϕ^aψ_a) -13/2(f ϕ^i ψ_aϕ^aψ_j) -6(f ϕ^a ψ_jϕ^iψ_a) -13/2(f ϕ^a ψ_aϕ^iψ_j) +18 (f ϕ^i ψ_jψ_aϕ^a)-12 (f ψ_jϕ^i ϕ^a ψ_a) +17/2(f ψ_aϕ^i ϕ^a ψ_j) -43/2(f ψ_aϕ^a ϕ^i ψ_j)+1/3ϵ^a_1a_2a_3( ϕ^iψ_j) ( ψ_a_1ψ_a_2ψ_a_3)-2ϵ^a_1a_2i( ϕ^bψ_a_1) ( ψ_bψ_jψ_a_2) - 10ϵ^a_1a_2a_3( ϕ^i ψ_j ψ_a_1ψ_a_2ψ_a_3) + 8ϵ^a_1a_2a_3( ϕ^iψ_a_1ψ_j ψ_a_2ψ_a_3)-2ϵ^a_1a_2a_3( ϕ^iψ_a_1ψ_a_2ψ_jψ_a_3), (r_18^(1,2))_ijk=-ϵ_a_1a_2(i[ (f ϕ^a_1) ( ϕ^a_2ψ_j ψ_k)) -3/2(f ψ_j ) (ψ_k)ϕ^a_1ϕ^a_2) +3(f ϕ^a_1ψ_j ϕ^a_2ψ_k)) -3 (f ψ_j ϕ^a_1ψ_k)ϕ^a_2) ] -1/2(ϕ^a ψ_a) (ψ_(iψ_jψ_k))+3/2(ϕ^a ψ_(i|) (ψ_aψ_|jψ_k)) + 1/2(ϕ^a ψ_(iψ_j|) (ψ_a ψ_|k)) +3/2(ϕ^a ψ_(i|ψ_aψ_|jψ_k))- 3/2(ϕ^a ψ_(iψ_j|ψ_aψ_|k)) ,(r_20^(2,1))_ij= -ϵ_a_1a_2(i[( ff ) ( ϕ^a_1ϕ^a_2ψ_j)) +1/2( f ψ_j)) ( fϕ^a_1ϕ^a_2)+2( f ϕ^a_1) ( f[ϕ^a_2, ψ_j) ])] +ϵ_a_1a_2(i[ 4 ( ffϕ^a_1ϕ^a_2ψ_j)) -( fϕ^a_1ϕ^a_2f ψ_j)) ] +2( f ϕ^a ψ_(i) ( ψ_j)ψ_a) -4( fψ_(iϕ^a ) ( ψ_j)ψ_a)-1/2(f ψ_(i) ( ϕ^a ψ_j)ψ_a ) - 5/2(f ψ_(i|) ( ϕ^a ψ_aψ_|j)) + 2(f ψ_a) ( ϕ^a ψ_(iψ_j)) -4(f ψ_(i|ψ_a) ( ϕ^aψ_|j)) +2( f ϕ^a ψ_(i [ψ_j), ψ_a]) +4( f ϕ^a ψ_aψ_(iψ_j)) +4 ( f ψ_(iϕ^aψ_j)ψ_a)-3( f ψ_(i|ϕ^aψ_a ψ_|j)) -2 ( f ψ_aϕ^aψ_(iψ_j)) - ( f ψ_(i|ψ_a ϕ^a ψ_|j)) +4 ( fψ_a ψ_(iϕ^a ψ_j)) +2/5ϵ^a_1a_2a_3[2 (ψ_a_1ψ_a_2)(ψ_a_3ψ_(iψ_j)) -3(ψ_(i|ψ_a_1ψ_|j)ψ_a_2ψ_a_3)] . Finally, we construct relations of these trace relations. Consider a linear combination of the trace relations with coefficients being the graviton cohomologies. If it vanishes identically, we call it a relation of relations. While the trace relations are identities that can be seen at thelevel of `gluons' ϕ,ψ,f, the relations of relations are the identitiesof mesons u_2, u_3, v_2, v_3, w_2, w_3. We do not need to know how u_2, u_3, v_2, v_3, w_2, w_3 are made of ϕ,ψ,f to obtain the relations of relations.After constructing relations of relations, one can write them as the Q-action on certain operators using (<ref>), (<ref>). They are the Q-closed operators since their Q-actions vanish due to the relations of relations. This is the way we obtain the Q-closed operators in section 4.1. They can be either Q-exact or not and there is no trivial way to judge it easily.If they are not Q-exact, they are the non-graviton cohomologies since they are made of the linear combinations of r_I's, which vanish with diagonal ϕ,ψ,f. 
For the check of the (non-)Q-exactness, refer to section 4.2.Now we will construct relations of relations at the threshold level j=24 which are singlets under SU(3) ⊂ SU(4)_R, from the trace relations (<ref>), (<ref>).There are 5 choices of (R,J) in this sector in which relations of relations exist.i) (R,J) = (2,2) Let us first enumerate all SU(3) ⊂ SU(4)_R singlets in this sector made by the product of the trace relations in (<ref>), (<ref>) and the graviton cohomologies. There are following 6 singlets:s_1^(2,0) = u^ij(R_20^(2,0))_ij , s_2^(2,0) = w^ij(R_14^(1,0))_ij , s_3^(2,0)= w^i(R_16^(1,0))_i, s_1^(1,2) = v^jk_i(R_16^(1,1))^i_jk ,s_2^(1,2)= v^j_i (R_18^(1,1))^i_j ,s_3^(1,2)= w^i(R_16^(0,2))_i .The superscripts denote (n_f, n_ψ) of the terms with maximal n_f in the operator, as before. There is one relation of these relations given byi QO^(2,1)≡ 65s_1^(2,0) -39s_2^(2,0) +5s_3^(2,0) -312s_1^(1,2) -26s_2^(1,2) +6s_3^(1,2) = 0 .This is the Q-action on the Q-closed operator (<ref>).ii) (R,J) = (5/2,3/2) There exist 12 SU(3) singlets in this sector given bys_1^(1,1) = u^a(iv^j)_a(R_14^(1,0))_ij ,s_2^(1,1)= ϵ_a_1a_2(iu^a_1kv^a_2_j)(R_14^(1,0))^ij_k ,s_3^(1,1) = v^jk_i(R_16^(1,0))^i_jk , s_4^(1,1) = u^ijk(R_18^(1,1))_ijk ,s_5^(1,1) = v^(j_i w^k)(R_10^(0,0))^i_jk ,s_6^(1,1) = u^(ijw^k)(R_12^(0,1))_ijk , s_7^(1,1) = ϵ_a_1a_2i u^a_1j w^a_2(R_12^(0,1))^i_j ,s_8^(1,1) = w^ij(R_14^(0,1))_ij , s_1^(0,3) = ϵ^a_1a_2(iv^j_a_1v^k)_a_2 (R_12^(0,1))_ijk , s_2^(0,3) = v^j_a v^a_i(R_12^(0,1))^i_j , s_3^(0,3) = u^(jkv^k)_i(R_14^(0,2))^i_jkl ,s_4^(0,3) = v^jk_i(R_16^(0,2))^i_jk .There are 4 relations of these relations, given byi Q O_1^(1,2)≡ 3s_5^(1,1) -3s_6^(1,1) +s_7^(1,1)= 0 , i Q O_2^(1,2)≡ 9s_1^(1,1) -10s_2^(1,1) - 30 s_5^(1,1)-60s_3^(0,3)= 0 , i Q O_3^(1,2)≡ 3s_1^(1,1) -6s_2^(1,1)+4s_4^(1,1) -14s_5^(1,1)-6s_8^(1,1) -12s_1^(0,3) -4s_2^(0,3)= 0 , i Q O_4^(1,2)≡ 3s_1^(1,1) -14s_2^(1,1)-8s_3^(1,1) -42s_5^(1,1) +12 s_6^(1,1) -24 s_8^(1,1) -36s_1^(0,3) + 8 s_4^(0,3)= 0 .They are the Q-action on (<ref>).iii) (R,J) = (3,1) There exist 16 SU(3) singlets in this sector given bys_1^(1,0) = ϵ_a_1a_2iϵ_b_1b_2j u^a_1b_1u^a_2b_2k (R_14^(1,0))^ij_k ,s_2^(1,0) = ϵ_a_1a_2 iu^a_1 (j w^k) a_2 (R_10^(0,0))^i_jk , s_3^(1,0)= ϵ_a_1a_2 iu^a_1 jk w^a_2(R_10^(0,0))^i_jk , s_1^(0,2)= v^a_i v^jk_a(R_10^(0,0))^i_jk , s_2^(0,2)= v^(j_a v^k)a_i(R_10^(0,0))^i_jk , s_3^(0,2)= u^a(iv^jk)_a (R_12^(0,1))_ijk ,s_4^(0,2)= u^a(ijv^k)_a (R_12^(0,1))_ijk , s_5^(0,2)= ϵ_a_1a_2 i u^a_1 bv^a_2 j_b (R_12^(0,1))^i_j, s_6^(0,2)= ϵ_a_1a_2 i u^a_1 bjv^a_2_b(R_12^(0,1))^i_j,s_7^(0,2)= ϵ_a_1a_2(i u^a_1 (kv^l)a_2_j) (R_12^(0,1))^ij_kl , s_8^(0,2)= ϵ_a_1a_2(i u^a_1 klv^a_2_j)(R_12^(0,1))^ij_kl ,s_9^(0,2)= ϵ_a_1 a_2 i u^a_1 (ju^kl) a_2(R_14^(0,2))^i_jkl , s_10^(0,2)= ϵ_a_1 a_2 i u^a_1 bv^a_2_b(R_14^(0,1))^i , s_11^(0,2)= u^a(iv^j)_a (R_14^(0,1))_ij ,s_12^(0,2)= ϵ_a_1a_2(i u^a_1 kv^a_2_j) (R_14^(0,1))^ij_k,s_13^(0,2)= u^(jkv^l)_i(R_14^(0,1))^i_jkl .There are 13 relations of these relations, given byi Q O_1^(1,1)≡ s_2^(1,0) =0 , i Q O_2^(1,1)≡ s_3^(1,0) =0 , i Q O_3^(1,1)≡ s_1^(1,0) +5s_1^(0,2) -2s_2^(0,2) =0 , iQO_1^(0,3)≡4s_5^(0,2)+3s_6^(0,2) = (R_12^(0,1))^i_j (R_12^(0,1))^j_i = iQ[1/2 iQ ((r_12^(0,2))^i_j (r_12^(0,2))^j_i) ] = 0 , iQO_2^(0,3)≡s_7^(0,2)+s_8^(0,2)= (R_12^(0,1))^ij_kl(R_12^(0,1))^kl_ij= iQ[1/2 iQ ((r_12^(0,2))^ij_kl (r_12^(0,2))^kl_ij) ]= 0 , iQO_3^(0,3)≡ s_3^(0,2) = 0 , iQO_4^(0,3)≡ s_10^(0,2)= 0 , iQO_5^(0,3)≡ 6s_1^(0,2)-6s_4^(0,2)-s_6^(0,2)= 0 ,iQO_6^(0,3)≡ 24 s_2^(0,2) -6 s_11^(0,2) + s_12^(0,2)= 0 ,iQO_7^(0,3)≡ s_1^(0,2) -10s_2^(0,2) 
-6s_4^(0,2) -10 s_8^(0,2) = 0 ,iQO_8^(0,3)≡5s_1^(0,2) -2s_2^(0,2) -9s_4^(0,2) +6s_9^(0,2)= 0 ,iQO_9^(0,3)≡ 6s_1^(0,2) +12s_2^(0,2) -18s_4^(0,2) +s_12^(0,2)= 0 ,iQO_10^(0,3)≡ 38s_1^(0,2) +4s_2^(0,2) -24s_4^(0,2) -5s_13^(0,2)= 0 . They are the Q-action on (<ref>).Here, O^(0,3)_1 and O^(0,3)_2 are explicitly shown to be Q-exact.iv) (R,J) = (7/2,1/2) There exist 8 SU(3) singlets in this sector given bys^(0,1)_1=ϵ_a_1a_2i u^a_1 b u^jkv^a_2_b(R_10^(0,0))^i_jk ,s^(0,1)_2= ϵ_a_1a_2 iu^a_1 b u^a_2 (jv^k)_b(R_10^(0,0))^i_jk , s^(0,1)_3= ϵ_a_1a_2i u^a_1b(jv^k)a_2_b(R_10^(0,0))^i_jk ,s^(0,1)_4= ϵ_a_1a_2(i u^a_1(kv^l)a_2_j) (R_12^(0,0))^ij_kl , s^(0,1)_5= ϵ_a_1a_2(i u^a_1klv^a_2_j)(R_12^(0,0))^ij_kl ,s^(0,1)_6= ϵ_a_1a_2(iϵ_j)b_1b_2 u^a_1b_1 u^a_2b_2 u^kl(R_12^(0,1))^ij_kl , s^(0,1)_7= ϵ_a_1a_2(iϵ_j)b_1b_2 u^a_1b_1 u^a_2b_2k (R_14^(0,1))^ij_k,s^(0,1)_8= ϵ_a_1a_2i u^a_1(j u^kl)a_2(R_14^(0,1))^i_jkl .There are 6 relations of these relations, given byiQO^(0,2)_1 ≡ s_1^(0,1)-2s_2^(0,1) = 0 , iQO^(0,2)_2 ≡ 6s_3^(0,1) + s_4^(0,1) = 0 , iQO^(0,2)_3 ≡ s_1^(0,1) + s_5^(0,1) = 0 , iQO^(0,2)_4 ≡ s_1^(0,1) + s_6^(0,1) = 0 , iQO^(0,2)_5 ≡ 4s_1^(0,1) + 24s_3^(0,1) - s_7^(0,1) = 0 , iQO^(0,2)_6 ≡ s_1^(0,1) - 12s_3^(0,1) + 3s_8^(0,1) = 0 .They are the Q-action on (<ref>). v) (R,J) = (4,0) There exist 4 SU(3) singlets in this sector given bys^(0,0)_1= ϵ_a_1a_2a_3ϵ_b_1b_2 i u^a_1b_1 u^a_2b_2 u^a_3jk(R_10^(0,0))^i_jk , s^(0,0)_2= R_12^(0,0) R_12^(0,0) , s^(0,0)_3= ϵ_a_1a_2(iϵ_j)b_1b_2 u^a_1b_1 u^a_2b_2 u^kl(R_12^(0,0))^ij_kl ,s^(0,0)_4= ϵ_a_1 a_2 (iϵ_j) b_1 b_2u^a_1 b_1 (k u^l) a_2 b_2(R_12^(0,0))^ij_kl .There is 1 relation of these relations, given byi QO^(0,1)≡ 36s_1^(0,0) +5s_2^(0,0) -6s_3^(0,0) = 0 .This is the Q-action on (<ref>). It is straightforward to generate relations of relations in other charge sectors.We present relations of relations at j=30 and (R,J) = (3,2)which are singlets under SU(3) global symmetry and do not involve f's. They will yield the fermionic Q-closed operators with (R,J) = (5/2,5/2), whichmay be good ansätze for the non-graviton cohomology detected by the indexat j=30. The SU(3) singlets in this sector are given byp_1= ϵ^a_1a_2a_3v^(i_a_1v^j_a_2v^k)_a_3 (R_12^(0,1))_ijk , p_2 = v^j_av^a_bv^b_i (R_12^(0,1))^i_j, p_3 = v^(k_av^l)_(iv^a_j) (R_12^(0,1))^ij_kl ,p_4= u^(jkv^l)_av^a_i (R_14^(0,2))^i_jkl , p_5 = u^a(jv^k_av^l)_i (R_14^(0,2))^i_jkl , p_6 = v^(jk_av^l)a_i (R_14^(0,2))^i_jkl ,p_7= v^(k_(iv^lm)_j)(R_16^(0,2))^ij_klm , p_8 = v^(j_av^k)a_i (R_16^(0,2))^i_jk , p_9 = v^a_iv^jk_a (R_16^(0,2))^i_jk , p_10 = v^ia_bv^b_a (R_16^(0,2))_i, p_11 = ϵ_ia_1a_2u^a_1bv^a_2_b (R_20^(0,3))^i .The relations of relations of the above singlets are given as follows:5 p_1 - 10 p_2 - 30 p_4 + 8 p_11 = 0 ,15 p_1 + 6 p_2 - 40 p_3 - 30 p_4 + 90 p_5 = 0 , 105 p_1 - 336 p_2 + 140 p_3 - 1050 p_4 + 10080 p_6 - 900 p_7 = 0 , 15 p_1 - 138 p_2 - 160 p_3 - 570 p_4 + 864 p_6 - 48 p_8 = 0 , 375 p_1 + 6 p_2 + 1760 p_3 + 150 p_4 + 4320 p_6 - 120 p_9 = 0 , 55 p_1 - 266 p_2 - 160 p_3 - 1050 p_4 + 2880 p_6 + 40 p_10 = 0 . 12345 Strominger:1996sh A. Strominger and C. Vafa,Phys. Lett. B 379, 99-104 (1996) doi:10.1016/0370-2693(96)00345-0 [arXiv:hep-th/9601029 [hep-th]]. Witten:1998zw E. Witten,Adv. Theor. Math. Phys. 2, 505-532 (1998) doi:10.4310/ATMP.1998.v2.n3.a3 [arXiv:hep-th/9803131 [hep-th]].Sundborg:1999ue B. Sundborg,Nucl. Phys. B 573, 349-363 (2000) doi:10.1016/S0550-3213(00)00044-4 [arXiv:hep-th/9908001 [hep-th]].Aharony:2003sx O. Aharony, J. Marsano, S. Minwalla, K. Papadodimas and M. Van Raamsdonk,Adv. Theor. Math. 
Phys. 8, 603-696 (2004) doi:10.4310/ATMP.2004.v8.n4.a1 [arXiv:hep-th/0310285 [hep-th]]. Kinney:2005ej J. Kinney, J. M. Maldacena, S. Minwalla and S. Raju,Commun. Math. Phys. 275, 209-254 (2007) doi:10.1007/s00220-007-0258-7 [arXiv:hep-th/0510251 [hep-th]]. Cabo-Bizet:2018ehj A. Cabo-Bizet, D. Cassani, D. Martelli and S. Murthy,JHEP 10, 062 (2019) doi:10.1007/JHEP10(2019)062 [arXiv:1810.11442 [hep-th]].Choi:2018hmj S. Choi, J. Kim, S. Kim and J. Nahmgoong,[arXiv:1810.12067 [hep-th]].Benini:2018ywd F. Benini and E. Milan,Phys. Rev. X 10, no.2, 021037 (2020) doi:10.1103/PhysRevX.10.021037 [arXiv:1812.09613 [hep-th]]. Romelsberger:2005eg C. Romelsberger,Nucl. Phys. B 747, 329-353 (2006) doi:10.1016/j.nuclphysb.2006.03.037 [arXiv:hep-th/0510060 [hep-th]]. Berkooz:2006wc M. Berkooz, D. Reichmann and J. Simon,JHEP 01, 048 (2007) doi:10.1088/1126-6708/2007/01/048 [arXiv:hep-th/0604023 [hep-th]].minwalla S. Minwalla, Supersymmetric States in 𝒩=4 Yang Mills,talk given at Strings 2006, Beijing.Janik:2007pm R. A. Janik and M. Trzetrzelewski,Phys. Rev. D 77, 085024 (2008) doi:10.1103/PhysRevD.77.085024 [arXiv:0712.2714 [hep-th]]. Grant:2008sk L. Grant, P. A. Grassi, S. Kim and S. Minwalla,JHEP 05, 049 (2008) doi:10.1088/1126-6708/2008/05/049 [arXiv:0803.4183 [hep-th]]. Chang:2013fba C. M. Chang and X. Yin,Phys. Rev. D 88, no.10, 106005 (2013) doi:10.1103/PhysRevD.88.106005 [arXiv:1305.6314 [hep-th]]. Chang:2022mjp C. M. Chang and Y. H. Lin,JHEP 02, 109 (2023) doi:10.1007/JHEP02(2023)109 [arXiv:2209.06728 [hep-th]]. Choi:2022caq S. Choi, S. Kim, E. Lee and J. Park,[arXiv:2209.12696 [hep-th]].Choi:2023znd S. Choi, S. Kim, E. Lee, S. Lee and J. Park,[arXiv:2304.10155 [hep-th]].Chang:2023zqk C. M. Chang, L. Feng, Y. H. Lin and Y. X. Tao,[arXiv:2306.04673 [hep-th]].Budzik:2023vtr K. Budzik, H. Murali and P. Vieira,[arXiv:2306.04693 [hep-th]].Budzik:2023xbr K. Budzik, D. Gaiotto, J. Kulp, B. R. Williams, J. Wu and M. Yu,[arXiv:2306.01039 [hep-th]]. Berenstein:2002jq D. E. Berenstein, J. M. Maldacena and H. S. Nastase,JHEP 04, 013 (2002) doi:10.1088/1126-6708/2002/04/013 [arXiv:hep-th/0202021 [hep-th]].Kim:2003rza N. Kim, T. Klose and J. Plefka,Nucl. Phys. B 671, 359-382 (2003) doi:10.1016/j.nuclphysb.2003.08.019 [arXiv:hep-th/0306054 [hep-th]].Imamura:2021ytr Y. Imamura,PTEP 2021, no.12, 123B05 (2021) doi:10.1093/ptep/ptab141 [arXiv:2108.12090 [hep-th]].Gaiotto:2021xce D. Gaiotto and J. H. Lee,[arXiv:2109.02545 [hep-th]].Murthy:2022ien S. Murthy,Pure Appl. Math. Quart. 19, no.1, 299-340 (2023) doi:10.4310/PAMQ.2023.v19.n1.a12 [arXiv:2202.06897 [hep-th]].Lee:2022vig J. H. Lee,JHEP 11, 137 (2022) doi:10.1007/JHEP11(2022)137 [arXiv:2204.09286 [hep-th]].Bhattacharyya:2010yg S. Bhattacharyya, S. Minwalla and K. Papadodimas,JHEP 11, 035 (2011) doi:10.1007/JHEP11(2011)035 [arXiv:1005.1287 [hep-th]]. Choi:2021lbk S. Choi, S. Jeong and S. Kim,[arXiv:2103.01401 [hep-th]]. Honda:2019cio M. Honda,Phys. Rev. D 100, no.2, 026008 (2019) doi:10.1103/PhysRevD.100.026008 [arXiv:1901.08091 [hep-th]].ArabiArdehali:2019tdm A. Arabi Ardehali,JHEP 06, 134 (2019) doi:10.1007/JHEP06(2019)134 [arXiv:1902.06619 [hep-th]].Maldacena:1998bw J. M. Maldacena and A. Strominger,JHEP 12, 005 (1998) doi:10.1088/1126-6708/1998/12/005 [arXiv:hep-th/9804085 [hep-th]]. McGreevy:2000cw J. McGreevy, L. Susskind and N. Toumbas,JHEP 06, 008 (2000) doi:10.1088/1126-6708/2000/06/008 [arXiv:hep-th/0003075 [hep-th]].Grisaru:2000zn M. T. Grisaru, R. C. Myers and O. 
Tafjord,JHEP 08, 040 (2000) doi:10.1088/1126-6708/2000/08/040 [arXiv:hep-th/0008015 [hep-th]].Hashimoto:2000zp A. Hashimoto, S. Hirano and N. Itzhaki,JHEP 08, 051 (2000) doi:10.1088/1126-6708/2000/08/051 [arXiv:hep-th/0008016 [hep-th]].Cox_2015 D. A. Cox, J. Little and D. O'Shea, “Ideals, Varieties, and Algorithms,”Springer (2015) doi:10.1007/978-3-319-16721-3 . DGPS Decker, W.; Greuel, G.-M.; Pfister, G.; Schönemann, H.:Singular 4-3-0 — A computer algebra system for polynomial computations. https://www.singular.uni-kl.de (2022).Cordova:2016emh C. Cordova, T. T. Dumitrescu and K. Intriligator,JHEP 03, 163 (2019) doi:10.1007/JHEP03(2019)163 [arXiv:1612.00809 [hep-th]]. Markeviciute:2018yal J. Markeviciute and J. E. Santos,Class. Quant. Grav. 36, no.2, 02LT01 (2019) doi:10.1088/1361-6382/aaf680 [arXiv:1806.01849 [hep-th]].Markeviciute:2018cqs J. Markeviciute,JHEP 03, 110 (2019) doi:10.1007/JHEP03(2019)110 [arXiv:1809.04084 [hep-th]]. Mathur:2005zp S. D. Mathur,Fortsch. Phys. 53, 793-827 (2005) doi:10.1002/prop.200410203 [arXiv:hep-th/0502050 [hep-th]].Choi:2022ovw S. Choi, S. Kim, E. Lee and J. Lee,JHEP 11, 086 (2023) doi:10.1007/JHEP11(2023)086 [arXiv:2207.05172 [hep-th]].Beccaria:2023hip M. Beccaria and A. Cabo-Bizet,[arXiv:2308.05191 [hep-th]].Ebertshauser:2001nj T. Ebertshauser, H. W. Fearing and S. Scherer,Phys. Rev. D 65, 054033 (2002) doi:10.1103/PhysRevD.65.054033 [arXiv:hep-ph/0110261 [hep-ph]].Dempsey:2022uie R. Dempsey, I. R. Klebanov, L. L. Lin and S. S. Pufu,JHEP 04, 107 (2023) doi:10.1007/JHEP04(2023)107 [arXiv:2210.10895 [hep-th]]. Beisert:2004ry N. Beisert,Phys. Rept. 405, 1-202 (2004) doi:10.1016/j.physrep.2004.09.007 [arXiv:hep-th/0407277 [hep-th]].Boruch:2022tno J. Boruch, M. T. Heydeman, L. V. Iliesiu and G. J. Turiaci,[arXiv:2203.01331 [hep-th]].
Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation
================================================================

Rongyu Zhang, Yulin Luo, Jiaming Liu, Huanrui Yang, Zhen Dong, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Yuan Du, Shanghang Zhang

The Mixture-of-Experts (MoE) approach has demonstrated outstanding scalability in multi-task learning, including low-level upstream tasks such as the concurrent removal of multiple adverse weather effects. However, the conventional MoE architecture with parallel Feed-Forward Network (FFN) experts leads to significant parameter and computational overheads that hinder its efficient deployment. In addition, the naïve MoE linear router is suboptimal in assigning task-specific features to multiple experts, which limits its further scalability. In this work, we propose an efficient MoE architecture with weight sharing across the experts. Inspired by the idea of linear feature modulation (FM), our architecture implicitly instantiates multiple experts via learnable activation modulations on a single shared expert block. The proposed Feature Modulated Expert (FME) serves as a building block for the novel Mixture-of-Feature-Modulation-Experts (MoFME) architecture, which can scale up the number of experts with low overhead. We further propose an Uncertainty-aware Router (UaR) to assign task-specific features to different FM modules with well-calibrated weights. This enables MoFME to effectively learn diverse expert functions for multiple tasks. Experiments on the multi-deweather task show that our MoFME outperforms the baselines in image restoration quality by 0.1-0.2 dB and achieves SOTA-comparable performance while saving more than 72% of parameters and 39% of inference time over the conventional MoE counterpart. Experiments on the downstream segmentation and classification tasks further demonstrate the generalizability of MoFME to real open-world applications.

§ INTRODUCTION

There is a growing interest in low-level upstream tasks such as adverse weather removal (deweather) <cit.>, which aims to eliminate the impact of weather-induced noise on decision-critical downstream tasks such as detection and segmentation <cit.>. Previous methods <cit.> approach each type of weather effect independently, yet multiple effects can appear simultaneously in the real world. Moreover, such methods mainly focus on deweathering performance metrics rather than on efficient deployment. One promising way to address several weather effects concurrently is the conditional computation paradigm <cit.>, where a model can selectively activate certain parts of the architecture, i.e. the task-specific experts, depending on the input. In particular, the sparse Mixture-of-Experts (MoE) <cit.> with parallel Feed-Forward Network (FFN) experts relies on a router to activate a subset of FFNs for each weather-specific input image. Figure <ref> shows a pipeline with an upstream MoE model that overcomes a number of weather effects. For example, <cit.> propose the DAN-Net method that estimates gated attention maps for inputs and uses them to properly dispatch images to task-specific experts. Similarly, <cit.> develop a weather-aware router to assign an input image to a relevant expert without a weather-type label at test time.

Meanwhile, challenges exist in building a practical MoE-based model for deweather applications: (1) Efficient deployment. Conventional MoE-based models with multiple parallel FFN experts require a significant amount of memory and computing. For example, the MoWE <cit.> architecture contains up to hundreds of experts with billions of parameters.
Hence, it is infeasible to apply such architectures to edge devices with limited resources for practical upstream tasks, e.g. to increase the safety of autonomous driving <cit.>. Previous attempts to reduce memory and computation overheads inevitably sacrifice model performance <cit.>. (2) Diverse feature calibration. Existing MoE networks typically use naïve linear routers for expert selection, which leads to poor calibration of the router weights with diverse input features. Multi-gate MoE <cit.> overcomes this challenge by designing an additional gating network to distinguish task-specific features. However, this introduces additional computation costs.

Therefore, we are motivated by the following objective: is it possible to design a computationally efficient MoE model while improving its deweathering metrics for real-world applications?

To approach this objective, we start by analyzing redundancies in the conventional MoE architecture. The main one comes from the multiple parallel experts containing independently learned weights. Meanwhile, previous research shows the possibility of simultaneously learning multiple objectives with diverse features using a mostly shared architecture and weights. For example, feature modulation (FM) <cit.> performs an input-dependent affine transformation of intermediate features with only two additional feature-map parameters. Hence, the FM method allows decoupling multiple tasks simultaneously and implicitly represents ensemble models <cit.> with low parameter overhead. Inspired by the FM method, we develop an efficient MoE architecture with feature-wise linear modulation for open-world scenarios. In particular, we propose the Mixture-of-Feature-Modulation-Experts (MoFME) framework with two novel components: the Feature Modulated Expert (FME) and the Uncertainty-aware Router (UaR).

FME adopts FM into the MoE network via a single shared expert block. This block learns a diverse set of activation modulations with a minor overhead on the weight count. In particular, FME performs a feature-wise affine transformation on the model's intermediate features that is conditioned on the task-specific inputs. Next, it fuses the task-specific modulated features with a single shared FFN expert, which allows it to efficiently learn a set of input-conditioned models. Thus, FME increases generalization to a wider range of substantially different tasks during training. As shown in the t-SNE visualization in Figure <ref>, MoFME can better correlate the features, with clearer partitions and boundaries.

The conventional MoE router adopts the top-K mechanism, which introduces non-differentiable operations into the computational graph and complicates the router optimization process. Previous research has found that such a router is prone to mode collapse, where it tends to direct all inputs to a limited number of experts <cit.>. At the same time, <cit.> shows that uncertainty captures the relative confidence between tasks in the multi-task setting. Therefore, we propose UaR, a router that estimates uncertainty using MC dropout <cit.>. The estimated uncertainty is used to weight the modulated features and, therefore, route them to the relevant experts.

We verify the proposed MoFME method by conducting experiments on the deweather task. For instance, evaluation results on the All-weather <cit.> and RainCityscapes <cit.> datasets show that the proposed MoFME outperforms prior MoE-based models in image restoration quality with less than 30% of the network parameters.
In addition, quantitative results on the downstream segmentation and classification tasks after applying the proposed MoFME further demonstrate the benefits of our pipeline with upstream pre-processing. Our main contributions are summarized as:

* We introduce the Mixture-of-Feature-Modulation-Experts (MoFME) framework with two novel components to improve upstream deweathering performance while saving a significant number of parameters.
* We develop the Feature Modulated Expert (FME), a novel MoE layer to replace the standard FFN layers, which leads to improved performance and parameter efficiency.
* We devise an Uncertainty-aware Router (UaR) to enhance the assignment of task-specific inputs to the subset of experts in our multi-task deweathering setting.
* Experimental results demonstrate that the proposed MoFME can achieve consistent performance gains on both low-level upstream and high-level downstream tasks: our method achieves a 0.1-0.2 dB PSNR gain in image restoration compared to prior MoE-based models and outperforms SOTA baselines in segmentation and classification tasks while saving more than 72% of parameters and 39% of inference time.

§ RELATED WORK

Mixture-of-Experts (MoE). Sub-model assembling is a typical way to scale up model size and improve performance in deep learning. MoE is a special case of assembling with a series of sub-models called the experts. It performs conditional computation using an input-dependent scheme to improve sub-model efficiency <cit.>. Specifically, <cit.> assemble mixture-of-experts models into an architectural block known as the MoE layer. This enables more expressive modeling and decreases computation costs. Another solution is to sparsely activate only a few task-corresponding experts during training and inference. <cit.> propose M^3ViT, which sparsely chooses the experts by using the transformer's token embeddings for router guidance. This helps the router to assign features to a selected expert during training and inference and to reduce computational costs. Our proposed MoFME is orthogonal to these MoE designs. With the same goal of saving computational cost, our method instead substitutes the over-parameterized parallel FFN experts with a lightweight feature modulation module followed by a single shared FFN expert.

Efficient MoE. Though MoE shows advantages in many popular tasks, its conventional architectures cannot meet the requirements of practical real-world applications due to their large model sizes. With many repetitive structures, pruning is the most common way to increase parameter efficiency. <cit.> formulate channels and kernels as experts and introduce a task-specific gating network to filter out some parameters for each individual task. Several recent works <cit.> also consider applying knowledge distillation to obtain a lightweight student model for inference only. However, the above methods sacrifice model performance. Besides, <cit.> study how to efficiently adapt MoE networks to hardware devices while saving communication and computational costs. Instead, our MoFME aims to decrease computational costs without a drop in performance by targeting the redundancies in conventional over-parameterized FFN experts and learning lightweight feature-modulated layers.

Adverse Weather Removal. Adverse weather removal has been explored in many aspects. For example, MPRNet <cit.>, SwinIR <cit.>, and Restormer <cit.> are architectures for general image restoration. Some methods can remove multiple adverse weather effects at once.
All-in-One <cit.> uses neural architecture search (NAS) to discriminate between different tasks. TransWeather <cit.> uses learnable weather-type embeddings in the decoder. Transformers have also been applied to this task: UFormer <cit.> and Restormer <cit.> construct pyramidal network structures for image restoration based on locally-enhanced windows and channel-wise self-attention, respectively.

§ PROPOSED METHODS

§.§ Feature Modulated Expert

We consider a common Mixture-of-Experts setting with the Vision Transformer (ViT) architecture <cit.>, where the dense FFN in each transformer block is replaced by a Mixture-of-Experts layer. The MoE layer inputs are N tokens x∈ℝ^D from the Multi-head Attention layer. Each token x is assigned by an input-dependent router to a subset of E experts with router weight r(x). In a typical MoE design with a linear router, the functionality of the router can be formulated as

r(x) = TopK(softmax(W_r x)) , where TopK(v) = v if v is among the top K elements, and 0 otherwise,

and W_r∈ℝ^E×D is a trainable parameter which maps an input token to E router logits for expert selection. To reduce the computation cost, the experts in the model are sparsely activated, with TopK(·) setting all elements of the router weight to zero except the K largest ones. For clarity in the rest of the paper, we denote the router weight of the i^th expert as r_i(x). The output of the MoE layer is then formulated as the weighted combination of the experts' outputs on the input token x <cit.>:

MoE(x) = ∑_i r_i(x) e_i(x),

where e_i(·) denotes the functionality of the i^th expert, typically designed as an FFN in the context of vision transformers. This process is illustrated in Figure <ref>(a).

In this work, we incorporate the technique of Linear Feature Modulation <cit.> into the design of MoE to propose the efficient Feature Modulated Expert block, as illustrated in Figure <ref>(b). Specifically, the diverse task-specific features, i.e. tokens, are first modulated with a dynamic feature modulation unit, where the tokens are directed to different learned affine transformations based on an input-dependent router. The modulated features are then fused by a single shared FFN expert. In this way, we implicitly represent each expert in the MoE architecture as the cascade of a lightweight affine feature modulation transformation and a shared FFN, significantly reducing the parameter and computation overhead of adding additional experts.

First we formulate a single Feature Modulation (FM) block <cit.>. We obtain input-dependent feature modulation parameters γ∈ℝ^D and β∈ℝ^D with two functions g:ℝ^D→ℝ^D and b:ℝ^D→ℝ^D, respectively, according to an input token x as

γ = g(x) , β = b(x),

where g and b can be arbitrary learnable functions. In practice, these functions are implemented with lightweight 1×1 convolutions. The input token is then modulated as

FM(x) = γ∘x+β,

where ∘ is the Hadamard (element-wise) product taken w.r.t. the feature dimension.

To combine the FM module with MoE, we instantiate E independent FM modules to modulate diverse task-specific features, each parameterized with γ^(i) and β^(i), where i∈{1,...,E}. Adapting the traditional MoE formulation, we let the router select which FM module to apply to the input token, rather than which FFN to use; a PyTorch sketch of this construction is given below.
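Concretely, anticipating the FME formulation given just below, the layer can be sketched in PyTorch as follows. This is a minimal illustration with our own choice of module names, shapes, and hyperparameters (e.g., the FFN expansion factor of 4 mentioned later in the experimental setup); it is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class FMELayer(nn.Module):
    """Feature-Modulated-Expert layer: E lightweight affine modulations
    (gamma^(i), beta^(i)) share a single FFN expert."""
    def __init__(self, dim, num_experts, top_k=2, mult=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts, bias=False)  # W_r
        # g_i and b_i produce gamma^(i) = g_i(x), beta^(i) = b_i(x); a Linear
        # applied to tokens is equivalent to the 1x1 convolution in the text.
        self.g = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.b = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.ffn = nn.Sequential(nn.Linear(dim, mult * dim), nn.GELU(),
                                 nn.Linear(mult * dim, dim))   # shared expert

    def forward(self, x):                        # x: (num_tokens, dim)
        probs = self.router(x).softmax(dim=-1)   # softmax(W_r x)
        weight, index = probs.topk(self.top_k, dim=-1)  # sparse top-K routing
        mixed = torch.zeros_like(x)
        for e, (g, b) in enumerate(zip(self.g, self.b)):
            for k in range(self.top_k):
                sel = index[:, k] == e           # tokens routing slot k to FM e
                if sel.any():
                    xe = x[sel]
                    fm = g(xe) * xe + b(xe)      # gamma^(e) o x + beta^(e)
                    mixed[sel] = mixed[sel] + weight[sel, k].unsqueeze(-1) * fm
        return self.ffn(mixed)                   # fuse with the shared FFN
```

With this layout, each additional expert only costs the two D×D modulation projections (roughly 2D² parameters) instead of a full FFN expert (roughly 8D² parameters at expansion factor 4), which is consistent in spirit with the parameter savings reported in the experiments.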
Specifically, our FME module is formulated as

FME(x|γ,β) = FFN { ∑_i r_i(x) · [γ^(i)∘x+β^(i)] },

where a single shared FFN module can process the mixture of multi-task features produced by the diverse feature modulations.

§.§ Uncertainty-aware Router

To improve the FME performance, we propose the Uncertainty-aware Router (UaR), which performs implicit uncertainty estimation on the router weights via MC dropout <cit.>. Model uncertainty <cit.> measures whether the model knows what it knows. Although ensemble-based uncertainty estimation methods <cit.> often achieve the best calibration and predictive accuracy, their high computational complexity and storage cost motivate us to use the more efficient MC dropout <cit.>. Specifically, we can regard the output of a router r(x) as a Gaussian distribution in order to calibrate its uncertainty. The mean and covariance of this distribution can be estimated via a "router ensemble", where we pass the token representation x through the router M times with MC dropout. We denote the resulting ensemble as r^m(x)={r^1(x),r^2(x),...,r^M(x)}, and the mean and covariance of the router weights in the ensemble as μ and Σ, respectively. We calibrate and normalize the router's logits according to <cit.> as

r̃(x) = Σ^-1 [ r(x)-μ ] / || Σ^-1 [ r(x)-μ ] ||_2,

where r̃(x) is used in the forward and backward passes during training. The mean μ and inverse covariance Σ^-1 are both formulated as zero-padded diagonal matrices in the computation. The detailed structure is shown in Figure <ref>.

§.§ Optimization Objective

An MoE-based model would suffer from performance degradation if most inputs were assigned to only a small subset of experts <cit.>. A load balance loss ℒ_lb <cit.> is therefore used for MoE to penalize imbalance in the number of inputs dispatched to each expert:

ℒ_lb = E/N ∑_n=1^N ∑_i=1^E v_i(x_n) r_i(x_n),

where x_n is the n-th input token, and v_i(x_n) is 1 if the i-th expert is selected for x_n by the top-K function, and 0 otherwise. The combined MoE training loss therefore becomes

ℒ_MoE = ℒ_ts + λ_1 ℒ_lb,

where λ_1 is empirically set to 10^-2 and ℒ_ts denotes the task-specific loss computed from the model outputs and the corresponding labels, e.g., the MSE loss for the image restoration task.

Following <cit.>, we further leverage the covariance Σ of r^m(x) to penalize the updating of UaR and MoFME, and formulate the uncertainty loss ℒ_uc as

ℒ_uc = E/N ∑_n=1^N ∑_i=1^E Σ_i · v_i(x_n),

where v is defined as in Equation (<ref>). ℒ_uc further reduces the model uncertainty when optimized together with the other losses, and the final MoFME objective is

ℒ_MoFME = ℒ_ts + λ_1 ℒ_lb + λ_2 ℒ_uc,

where λ_2 is empirically set to 5×10^-3.

§ EXPERIMENTS

We evaluate our MoFME against several recent methods on the adverse weather removal task. We presume a test-time setup where a model must remove multiple types of weather effects with the same parameters. In addition, we further demonstrate the applicability of our upstream processing to downstream segmentation and classification tasks. An ablation study of the MoFME architecture shows the contribution of each component. In total, MoFME achieves up to 0.1-0.2 dB performance improvement in PSNR, while saving more than 72% of parameters and 39% of inference time.

§.§ Experimental Setup

Implementation details. We implement our method with the PyTorch framework using 4×NVIDIA A100 GPUs. We train the network for 200 epochs with a batch size of 64.
The AdamW optimizer is used with a cosine LR scheduler; the initial learning rate is set to 0.5×10^-4 and is gradually reduced to 10^-6. We use a warm-up stage of three epochs. Input images are randomly cropped to 256×256 size for training, and non-overlapping crops of the same size are used at test time. We randomly flip and rotate images for data augmentation. The scaling factor for the traditional MoE model is set to 4.

Metrics, datasets, and baselines. We select the widely used PSNR and SSIM metrics as performance measures for upstream image restoration. The All-weather <cit.> and Rain/HazeCityscapes <cit.> datasets are used to evaluate deweathering and downstream segmentation. The CIFAR-10 dataset is used for the downstream image classification task. The comparison baselines include three CNN-based models, RESCAN <cit.>, PRNet <cit.>, and FFA-Net <cit.>, that employ task-specific weather removal. Also, we experiment with recent transformer-based models: Restormer <cit.> with a general multi-task image restoration objective, TransWeather <cit.> with learnable weather embeddings in the decoder to remove multiple adverse effects simultaneously, the conventional MoE <cit.>, MMoE <cit.>, M^3ViT <cit.>, and MoWE <cit.> for multi-task learning, as well as efficient MoE methods such as OneS <cit.>, which fuses the experts' weights and adopts knowledge distillation for better performance, and PR-MoE <cit.>, which proposes a pyramid residual MoE architecture. These comparisons demonstrate the superiority of our proposed MoFME in handling multiple tasks in terms of both effectiveness and efficiency. We take the Vision Transformer as the backbone for the MoE-based methods.

§.§ Ablation study

We conduct ablation experiments to analyze how each proposed module contributes to the MoE performance in Table <ref>. Starting from a traditional MoE design (base model), we replace the parallel FFN experts with FME, and examine the effectiveness of UaR by introducing MC dropout into the router. The results suggest that FME alone can achieve significant parameter efficiency with a small performance drop, while UaR can enhance model performance by over 0.05 dB. We also apply our method to different base models, including the traditional MoE, M^3ViT, and MoWE; the results show that combining the two techniques leads to improvements in both efficiency and performance for all base models.

One key property of an MoE model is its scalability with an increasing number of experts. In Figure <ref> and Table <ref>, we show that the efficiency of our proposed MoFME is consistently maintained as the number of experts scales to hundreds, with only one quarter of the parameters and over 0.1 dB improvement on All-Weather when compared to the conventional MoE. The inference time is significantly reduced, by nearly 40% when utilizing 128 experts, as shown in Table <ref>.

§.§ Quantitative analysis

Upstream tasks. In Tables <ref> and <ref>, we report the PSNR and SSIM for each type of weather and the average scores for each baseline and MoFME on All-Weather <cit.> and RainCityscapes <cit.> after training for 200 epochs. We denote the best results in bold and the second-best results in italics. It should be noted that all models are trained on a mixture of weather data and evaluated on a specific type of weather. The results in Tables <ref> and <ref> reveal the advantage of MoE networks in dealing with multi-task inputs compared with previous naïve transformer-based and CNN-based methods.
However, as it is specifically designed for high-level tasks, M^3ViT fails to deliver good performance on deweather tasks on both datasets. Furthermore, current efficient MoE methods like OneS and PR-MoE cannot match the performance of SOTA MoE networks, while MoFME achieves a 29.09 dB average PSNR and 0.9272 average SSIM on All-Weather, and 32.11 dB PSNR and 0.9691 SSIM on RainCityscapes. While the MoWE model attains superior performance metrics, it is worth noting that both its model size and its computational complexity, as quantified by FLOPs, are substantially greater than those of our traditional MoE-based approach.

We also provide the FLOPs and the number of parameters for each baseline on RainCityscapes in Table <ref>. The MoE-based methods can achieve very satisfying PSNR and SSIM scores; however, their heavy network structures prevent them from practical applications. The two efficient MoE baselines show their advantages in computational cost, as PR-MoE saves about 50% of parameters and OneS merges its parameters to become a lightweight dense model. However, a certain amount of model performance is also sacrificed, as OneS loses almost 0.2 dB in PSNR. Our proposed MoFME takes a step forward by realizing a satisfying trade-off: it achieves results comparable to other SOTA baselines while saving up to 72% of parameters.

Downstream tasks. (1) Semantic segmentation: Although our proposed method performs well and efficiently on low-level image restoration tasks, it has been questioned by <cit.> whether images optimized for better human perception can be accurately recognized by machines. We provide a quantitative comparison on Cityscapes for the downstream segmentation task based on mIoU and mAcc in Table <ref>. We find that the other efficient MoE baselines fail to make satisfying predictions on the downstream task. On the other hand, our proposed MoFME performs well on both the upstream deweather task and the downstream task, outperforming the other efficient MoE baselines by 2% mIoU and 2.5% mAcc. We also provide visualization results in Figure <ref>. (2) Image classification: To further prove the generality of our method, we perform an image classification task on CIFAR-10 with ImageNet pre-training. The top-1 accuracy reported in Table <ref> shows that MoE models lead to performance gains at a parameter cost, while MoFME outperforms other similarly sized baselines by 0.2% on CIFAR-10.

§.§ Qualitative analysis

The visual results in Figure <ref> show a qualitative comparison of our method against the others. As shown in the top three rows, MoFME achieves better visual results than previous methods, recovering sharper details of the original image, especially in the defogging setting. The visual results also demonstrate that our method can recover downstream-task-friendly images with better semantic segmentation outcomes. Our proposed MoFME is able to segment out clearer boundaries while maintaining consistency in color and texture.

§ CONCLUSION

In this work, we proposed the Mixture-of-Feature-Modulation-Experts (MoFME) approach with the novel Feature Modulated Expert (FME) and Uncertainty-aware Router (UaR). Extensive experiments on the deweathering task demonstrated that MoFME can handle multiple tasks simultaneously, as it outperformed prior MoE-based baselines by 0.1-0.2 dB while saving more than 72% of parameters and 39% of inference time.
Downstream classification and segmentation results proved MoFME's generalizability to real-world applications.

§ ACKNOWLEDGMENTS

Shanghang Zhang is supported by the National Key Research and Development Project of China (No.2022ZD0117801). The authors would like to express their sincere gratitude to the Interdisciplinary Research Center for Future Intelligent Chips (Chip-X) and the Yachen Foundation for their invaluable support.
Tuning the electronic and optical properties of hg-C_3N_4 quantum dots with edge-functionalization: A computational perspective
================================================================

Khushboo Dange, Vaishali Roondhe, and Alok Shukla

Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India
[email protected], [email protected], [email protected]

In this work, we have systematically investigated the structural, electronic, vibrational, and optical properties of edge-functionalized hg-C_3N_4 quantum dots with the aim of exploring their possible applications in solar cells and other optoelectronic devices such as light-emitting diodes. The functional groups considered in this work are methyl (-CH_3), fluorine (-F), and oxygenated groups such as aldehyde (-CHO), carboxyl (-COOH), ketone (-COCH_3), and hydroxyl (-OH) groups. The edge-functionalization resulted in significant tuning of the electronic, vibrational, and optical properties. Thus, the structural fingerprints of the functional groups are present in both the vibrational and optical properties, thereby allowing their detection in Raman as well as optical spectroscopies. It is observed that edge-functionalization broadens the energy range of optical absorption, leading to coverage of most of the ultraviolet and visible regions. This implies that edge-functionalized hg-C_3N_4 quantum dots can be used in a variety of optoelectronic devices such as solar cells and light-emitting diodes.

Keywords: hg-C_3N_4 quantum dots; density functional theory; edge-functionalization; UV-vis absorption spectra

§ INTRODUCTION

Two-dimensional (2D) materials have attracted a lot of attention from the research community since the successful exfoliation of graphene <cit.>. Also, the synthesis of other 2D materials such as silicene <cit.>, phosphorene <cit.>, boron nitride <cit.>, carbon nitride <cit.>, silicon carbide <cit.>, etc., has opened a novel field of research in materials with useful and promising applications in nanotechnology <cit.>. Among these, π-conjugated 2D materials are of particular interest because of their unique properties, which make them better candidates for multiple applications such as sensors, energy storage, and electronic devices <cit.>. One such material is graphitic carbon nitride (g-C_3N_4), with a delocalized π-conjugated structure, weak interlayer van der Waals interactions, and strong covalent intralayer bonds <cit.>, similar to graphene. Among all carbon nitride structures, g-C_3N_4 is the most stable allotrope as compared to its other phases such as the cubic, semi-cubic, α, and β phases <cit.>. Furthermore, it possesses the smallest band gap as compared to its other allotropes <cit.>. The heptazine phase has an indirect band gap of 2.7 eV, while the triazine phase possesses a direct band gap of 2.9 eV <cit.>. Additionally, an atomically thin infinite 2D monolayer of g-C_3N_4 also exists <cit.>, and its band gap computed using first-principles density functional theory (DFT) is reported to be 2.1 eV <cit.>. This material has gained widespread attention for several reasons, such as its direct band gap in the visible region, low cost, earth abundance, easy synthesis, metal-free nature, physicochemical stability, and good thermal stability <cit.>.
2D g-C_3N_4 covers a wide range of possible applications, including photocatalysis, both for the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER), as well as bioimaging and photoelectronic devices <cit.>. In spite of all these fascinating properties, pure g-C_3N_4 has its own limitations as a photocatalytic material, such as low electrical conductivity and inefficient utilization of solar energy due to its wide band gap, thereby restricting its applicability <cit.>. Patnaik et al. <cit.> recently reviewed the advances in designing Ag-modified g-C_3N_4-based nanocomposites to enhance its photocatalytic activity. Tian et al. <cit.> synthesized a g-C_3N_4–BiVO_4 heterojunction which delivers high photocatalytic performance. Chemical functionalization such as carboxylation, sulfonation, amidation, and phosphorylation, as well as substitutional doping with B, C, N, S, and O, resulted in significant modifications of the electronic and optical properties of g-C_3N_4 <cit.>. g-C_3N_4 has been widely used in dye-sensitized solar cells (DSSCs) as a photoanode, acting as a blocking layer to prevent charge recombination and leading to improved efficiencies ranging from 2.01–8.07% <cit.>. It has also been added to various perovskite solar cells with the purpose of improving the coupling between the perovskite layer and the hole transport material, leading to increased efficiencies in the range 12.85–20.3% <cit.>. Nowadays, research in the field of quantum dots (QDs) has received extra attention, as confinement in all dimensions (0D materials) leads to effective tuning of electronic, optical, physical, and chemical properties <cit.>. Therefore, quantum confinement by constructing finite structures, i.e., quantum dots of 2D g-C_3N_4, may provide us with an effective way of tuning its material properties. Since the g-C_3N_4 sheet is a periodic arrangement of two different types of unit cells, i.e., s-triazine and tri-s-triazine (also known as heptazine) <cit.>, we can obtain two types of quantum dots from it <cit.>. Tri-s-triazine, as the name suggests, consists of three s-triazine rings <cit.>, and it is attractive from the point of view of doping, adsorption, and tunable optoelectronic properties, because the extra nitrogen atoms contribute more lone pairs as compared to s-triazine-based QDs <cit.>. Ghashghaee et al. <cit.> compared the geometric structure and electronic properties of heptazine-based g-C_3N_4 (hg-C_3N_4) QDs and 2D g-C_3N_4. Olademehin et al. <cit.> reported the electronic and optical properties of triangular-shaped g-C_3N_4 QDs of increasing sizes, designed using melamine (triazine) and heptazine units. Their study also demonstrated that the carbon and nitrogen sites would be more favorable for the HER and OER, respectively. Also, the computational approach of Ullah et al. <cit.> gave an in-depth explanation of the better spatial confinement of frontier orbitals and charge transfer in CNQDs than in GQDs, leading to their enhanced photocatalytic activity. Their investigation was also based on triangular QDs, and reported that the tuning of the optical absorption and emission depends mostly on the size, and not on the shape. Though pure g-C_3N_4 QDs are found to be promising for solar cell devices and photocatalytic activity, their wide band gap limits their energy harvesting efficiency. Both theoretical and experimental studies have shown that the HOMO-LUMO gap of g-C_3N_4 QDs can be tuned by doping with non-metal atoms <cit.>. Zhai et al.
<cit.> have performed a DFT-based computational study of pristine g-C_3N_4 QDs with the aim of understanding the evolution of their electronic structure and optical properties as functions of shape and size. Their studies revealed that triangular lamellar structures are more suitable candidates for superior photophysical/optical properties; however, they did not consider functionalization. Bandhopadhyay et al. <cit.> have examined the effect of functionalization on heterostructures composed of a g-C_3N_4 QD stacked with a graphene QD, with a single electron-acceptor (carboxyl) or electron-donor (amine and hydroxyl) group. Functionalization of g-C_3N_4 QDs with the carboxyl (-COOH) and hydroxyl (-OH) groups has already been achieved experimentally, and tunable emission properties were obtained <cit.>. To the best of our knowledge, no theoretical study has been reported that shows the effect of functionalization on g-C_3N_4 QDs with different electron-acceptor and electron-donor groups. Also, it is reported that the synthesis of g-C_3N_4 QDs leads to a high percentage of amine edges, as well as oxygenated groups which get introduced inevitably <cit.>. Thus, it is important to study the effect of such functional groups on the structural, electronic, and optical properties of g-C_3N_4 QDs. In the present work, the smallest unit of heptazine and triangular-shaped hg-C_3N_4 QDs comprising three to six heptazine units are considered to investigate the effect of chemical functionalization on them. The functional groups considered in the present work include methyl (-CH_3), fluorine (-F), and oxygenated groups such as aldehyde (-CHO), carboxyl (-COOH), ketone (-COCH_3), and hydroxyl (-OH) groups. The motivation for this study comes from the work of Yunhai and collaborators on edge-functionalized GQDs <cit.>. Their study revealed that the functional groups containing the C=O double bond are comparatively more effective in tuning the electronic and optical properties of GQDs. The remainder of this article is organized as follows. In the next section, we address our computational methodology in brief, followed by a detailed discussion of our results in section <ref>. Finally, we conclude our work in section <ref> by summarizing the key findings.§ COMPUTATIONAL DETAILS All the calculations presented in this work were performed within the framework of density functional theory (DFT) <cit.>, as implemented in the Gaussian16 package <cit.>. The B3LYP hybrid functional <cit.> was employed to account for the exchange and correlation effects, coupled with the Gaussian-type valence triple-zeta 6-311G <cit.> basis set containing two polarization functions (d, p). Some of the electronic- and optical-property calculations were also performed using the HSE06 functional for comparison. The convergence criterion for the self-consistent solution of the Kohn-Sham equations <cit.> was set to 10^-8 Hartree. The geometry optimizations of all the considered structures were carried out until the maximum gradient force on each constituent atom fell below 4.5×10^-4 Hartree/Bohr. In addition, the RMS force, maximum displacement, and RMS displacement thresholds were set at 3.0×10^-4 Hartree/Bohr, 1.8×10^-3 Bohr, and 1.2×10^-3 Bohr, respectively. The vibrational frequencies were also calculated to ensure the stability of the optimized structures, and no imaginary frequencies were found for any of the considered structures.
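For readers who wish to reproduce this type of optimize-then-check-frequencies workflow without access to Gaussian16, the following is a minimal sketch using the open-source PySCF package as a stand-in. The geometry below is a placeholder molecule, not one of the hg-C_3N_4 QDs studied here, and PySCF's B3LYP implementation and numerical defaults will not exactly match those of Gaussian16; the geometry-optimization step also assumes the optional geomeTRIC backend is installed.

```python
# Minimal sketch of a B3LYP/6-311G(d,p) geometry optimization followed by a
# harmonic-frequency check, in PySCF (a stand-in for the Gaussian16 setup
# used in the paper). Placeholder geometry, not an hg-C3N4 QD.
from pyscf import gto, dft
from pyscf.geomopt.geometric_solver import optimize  # needs geomeTRIC
from pyscf.hessian import thermo

mol = gto.M(atom="O 0 0 0; H 0 0.76 0.59; H 0 -0.76 0.59",
            basis="6-311g(d,p)", verbose=0)

mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.conv_tol = 1e-8           # SCF convergence threshold, as in the paper

mol_eq = optimize(mf)        # relax the geometry

mf_eq = dft.RKS(mol_eq)
mf_eq.xc = "b3lyp"
mf_eq.conv_tol = 1e-8
mf_eq.kernel()

hess = mf_eq.Hessian().kernel()
freq_info = thermo.harmonic_analysis(mf_eq.mol, hess)
# A true minimum has all-real, positive harmonic frequencies:
print(freq_info["freq_wavenumber"])
```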
GaussView6 software <cit.> was employed for the visualization of the optimized structures and their frontier molecular orbitals (MOs), such as the HOMO and LUMO. The partial and total densities of states were generated using the Multiwfn software <cit.>. After the study of the electronic properties, the optical properties were investigated using time-dependent density functional theory (TD-DFT), as developed by Runge and Gross <cit.>. The IEFPCM model <cit.> was used for the calculation of the UV-visible absorption spectra <cit.>. § RESULTS AND DISCUSSION§.§ Structural and Vibrational Properties §.§.§ Optimized geometries The optimized geometries of all the considered pristine hg-C_3N_4 QDs are shown in Fig. <ref>. The different functionalized single heptazine units remain planar (Fig. <ref>(a)). The calculated C–N bond lengths are in the range 1.32–1.35 Å, consistent with previously reported work <cit.>. We note that with the increase in size of the hg-C_3N_4 QDs, the optimized structures no longer remain planar and instead acquire buckled geometries. The reason for the buckling can be related to the deformation of C–N bonds due to repulsive interactions between the lone pairs present on the nitrogen atoms <cit.>. As the size of the hg-C_3N_4 QDs increases, the number of nitrogen atoms and the corresponding lone pairs present in the structure increases, which results in an increased degree of buckling with the size of the hg-C_3N_4 QDs. The lateral sizes of all the optimized QDs are indicated in Fig. <ref>, and their values are presented in Table <ref>. The calculated size of the single heptazine unit is 0.69 nm, consistent with the previously reported value <cit.>. Interestingly, our calculated sizes of the QDs comprising 4–6 units of heptazine are well within the reported experimental values <cit.>. The QDs are relaxed again after attaching each of the functional groups, by replacing a hydrogen atom at an edge of their optimized structures. The qualitative behavior of the optimized geometries of all the considered edge-functionalized hg-C_3N_4 QDs resembles their pristine counterparts; therefore they are shown in the same figure, with one hydrogen atom being replaced by X in Fig. <ref>. For convenience, the notation “n–X” is used to represent the studied edge-functionalized hg-C_3N_4 QDs throughout the paper, where n denotes the number of heptazine units present in the structure, and X represents the attached functional group, i.e., —CH_3, —CHO, —COCH_3, —COOH, —OH, —F, or Pris (pristine structure). The detailed analysis of the optimized structures suggests that all the carbon and nitrogen atoms present in all the pristine structures are largely sp^2 hybridized, in spite of the buckling. In the functionalized cases, the carbon atom of the oxygenated groups that contains a carbon–oxygen double bond (—CHO, —COCH_3, —COOH) and is attached to the nitrogen atom of the pristine QD moiety becomes sp^2 hybridized. In contrast, the carbon, oxygen, and fluorine atoms of the —CH_3, —OH, and —F groups, respectively, become sp^3 hybridized while forming a bond with a nitrogen atom of the pristine structures. Table S1 of the supporting information (SI) presents the calculated bond lengths and bond angles for the atoms that are near the attached functional groups. The changes in the C–N bond lengths for all the functionalized hg-C_3N_4 QDs are in the range 0.02–0.05 Å, with the maximum bond lengths for the oxygenated groups (1.38 Å) and the fluorine group (1.39 Å).
The reason for such small distortions in the bond lengths is that only a single functional group is attached at the edge of the hg-C_3N_4 QDs. Due to such small distortions, the contribution of structural effects to the electronic and optical properties of the edge-functionalized hg-C_3N_4 QDs should be minimal. §.§.§ Vibrational Properties In order to confirm the stability of the optimized structures, vibrational frequency calculations are performed. A total of 3n-6 vibrational modes are obtained for each structure, where n represents the total number of atoms present in that structure. The absence of imaginary frequencies confirms that the optimized structures correspond to minima on the potential energy surface. For all the structures considered, the minimum vibrational frequencies are reported in Table <ref>; in all cases they correspond to an out-of-plane vibrational mode. For further investigation of the vibrational properties, we have also calculated the corresponding Raman spectra, presented in Fig. S1 of the SI. A brief description of some of the unique vibrational modes is also included in the SI. §.§ Electronic Properties After confirming the stability of the edge-functionalized hg-C_3N_4 QDs, we next investigate their electronic properties using the B3LYP functional. In Table <ref> and Table <ref> we present the highest occupied molecular orbital (HOMO) energy E_HOMO, the lowest unoccupied molecular orbital (LUMO) energy E_LUMO, the HOMO-LUMO energy gap (E_g), and the charge transfer. We note that E_g for the pristine hg-C_3N_4 QDs decreases with increasing size from 4.99 eV to 2.83 eV, clearly due to quantum confinement. The modification of E_g for the different sizes is related to the variation of the E_HOMO and E_LUMO values with the increasing size of the QD. The HOMO and LUMO levels represent the electron-donor (nucleophilic) and electron-acceptor (electrophilic) properties of the system, respectively. As the sizes of the pristine QDs increase, the LUMO energy levels get lowered continuously, as is clear from the values presented in Table <ref> and Table <ref>, while for the E_HOMO values a non-monotonic decrease is observed. Next, we discuss the electronic properties of the functionalized QDs. From Table <ref> and Table <ref> it is evident that edge-functionalization causes significant changes in the values of the HOMO/LUMO energies, and consequently in the HOMO-LUMO gaps of the QDs. It is observed that for all the edge-functionalized hg-C_3N_4 QDs (except the 6-X QDs), the E_HOMO values of those functionalized with the -CH_3 group increase with respect to their pristine counterparts, which implies an increase in their electron-donor ability. However, functionalization with the other groups leads to reduced E_HOMO values, indicating reduced electron-donor abilities of the corresponding QDs. Similarly, the E_LUMO values of the QDs functionalized with the -CH_3 group are seen to increase with respect to the pristine QDs, suggesting a reduced electron-acceptor ability. However, in the case of the other functional groups, the E_LUMO values are seen to decrease with respect to the corresponding pristine QDs, implying an increased electron-acceptor ability. For the 6-X QDs, we observe a different behavior: the E_HOMO values get lowered and the E_LUMO values are increased for all the groups compared to their pristine counterparts. Thus, the uneven shifting of the HOMO and LUMO levels results in a tuning of E_g which depends on the following two factors: (a) the frontier orbital interaction (FOI), and (b) the charge transfer.
According to frontier molecular orbital (FMO) theory <cit.>, the interaction between the frontier orbitals (HOMO and LUMO) leads to hybridization and reduces the energy gap between them. However, charge transfer from the QD moiety to the attached functional group leads to a reduction in the screening of electrons. The enhancement of electronic screening with increasing electron density, and vice versa, has been reported previously in the literature <cit.>. The reduction in electronic screening increases the electron-electron interaction, which in turn increases the energy gap. Therefore, the tuned E_g depends on the competition between the FOI and the charge transfer <cit.>. We have performed a Mulliken charge analysis and then calculated the amount of charge transferred from the hg-C_3N_4 QD moiety to the attached functional group, or vice versa. The calculated charge transfers for all the pristine and functionalized structures are presented in Table <ref> and Table <ref>. We note that for the pristine QDs, the charge transfer is between the QD moiety and the H atom which is replaced by a functional group in the case of the functionalized QDs. If the charge is transferred from the QD moiety to the attached functional group or the H atom, i.e., there is an electron transfer from the H atom or the functional group to the QD moiety, the charge transfer is assigned a positive sign. However, if there is a net electron transfer in the opposite direction, i.e., from the QD moiety to the H atom or the functional group, the charge transfer is assigned a negative sign. We note that the charge transfer as defined above is positive in all the cases except for the functional groups -OH and -F, for which it is negative. The -OH and -F groups, because of their high electronegativities, gain electrons from the QD moiety, which justifies their electron-withdrawing nature, also reported for the GQDs edge-functionalized with these two groups <cit.>. Also, the amount of electron transfer to the -OH group is less than that to the -F group, because the -F group is more electronegative than the -OH group. Further, the positive value of the charge transfer in the case of the -CH_3 group is in accordance with its electron-donating nature, also reported for -CH_3-functionalized GQDs <cit.>. The amount of charge transfer reveals the extent to which E_g increases. Considering the case of 6-CHO and 6-COCH_3, the charge transfer is larger for the -CHO group, which leads to a larger E_g. As stated earlier, the effects of structural distortions are minimal in this work; the increase or decrease of E_g is induced by the FOI and the charge transfer. The resultant E_g for the 1-CH_3 QD increases as compared to its pristine counterpart, while it gets lowered in the case of the other 1-X structures. The opposite trend is noticed in the 5-X QDs, as E_g is reduced only for the 5-CH_3 QD. In the cases of the 3-X and 4-X QDs, E_g is reduced compared to their pristine counterparts, except for the -OH functionalized cases. Also, for the 3-CH_3 QD, no change is noticed, which implies that the effective contributions of the charge transfer and the FOI after functionalization are equal, and thus their effects cancel. In the case of the 6-X QDs, edge-functionalization results in an increased E_g for all the considered functional groups. A larger E_g than the pristine counterpart in some of the cases implies that the influence of the charge transfer is greater than the effect of the FOI.
However, a reduced E_g in the other cases reveals that the effective contributions of the FOI and the charge transfer are such that the FOI dominates. Furthermore, we have also calculated the E_HOMO, E_LUMO, and E_g values using the HSE06 functional (Tables <ref> and <ref>) for comparison. In this case also, an uneven shifting of the E_HOMO and E_LUMO values is observed, as depicted in Tables <ref> and <ref>. Compared to the B3LYP results, the HSE06-based E_g values are lower for all the considered structures. The E_g obtained for the 1-Pris (4.67 eV) and 3-Pris (3.55 eV) QDs using the HSE06 functional are relatively closer to the values reported in the literature <cit.>. The trends observed in the shifting of the HOMO and LUMO levels for the pristine and corresponding functionalized cases are similar to those of the B3LYP-based results. To further investigate the electronic properties, both the total density of states (TDOS) and the partial density of states (PDOS) are calculated for each of the edge-functionalized hg-C_3N_4 QDs using the B3LYP functional, and the results are plotted in Fig. <ref> for the 3–X QDs, and in Figs. S2–S5 of the SI for the 1–X, 4–X, 5–X, and 6–X QDs, respectively. As shown in Fig. <ref>, the TDOS plots for the 3-X QDs have two common features: (a) there are five peaks visible in the occupied-orbital region, and (b) there are three peaks (the highest one is hidden in the green region) in the unoccupied-orbital region. As is evident from Figs. S3–S5 of the SI, similar trends in the TDOS are obtained for the 4-X, 5-X, and 6-X QDs as well. However, for the 1-X QDs a shoulder peak corresponding to the third peak of the occupied region is clearly visible, as depicted in Fig. S2 of the SI. The PDOS provides the contribution of each constituent atom individually to the TDOS. It is clear from Fig. <ref> that in the case of the 3-X QDs, in the occupied region, the maximum contribution to the TDOS is from the hydrogen atoms, followed by the nitrogen and carbon atoms, whereas in the unoccupied region, the nitrogen atoms contribute the most for all the QDs, while the H atoms make the next most important contributions for 3–Pris and 3–CH_3. For the QDs functionalized with the O-based groups or the F atom, the O and F atoms also contribute significantly to the TDOS in the unoccupied region for the 3–X QDs (see Figs. <ref>(c)–(g)). Similar behavior of the PDOS is obtained for the 4-X, 5-X, and 6-X QDs, presented in Figs. S3–S5 of the SI, respectively. In the case of the 1-X QDs (Fig. S2 of the SI), in addition to the above behavior, a minor contribution of the carbon atoms (hidden in the magenta region in the case of the oxygenated and fluorinated groups) is also present in the unoccupied region, which is missing in the other structures. It is also noted that both the TDOS and PDOS plots are quite similar for the pristine and corresponding functionalized structures, which indicates that the edge-functionalization of hg-C_3N_4 QDs with a single functional group has a minimal influence on their overall electronic structure. The isosurfaces of the HOMO and LUMO corresponding to the 3-X structures are depicted in the insets of Fig. <ref>, and those corresponding to the 1–X, 4-X, 5-X, and 6-X structures are illustrated in the insets of Figs. S2–S5 of the SI, respectively. For the single heptazine unit (1-Pris) structure [Fig. S2(a) of the SI], it is clear that the HOMO is localized on the nitrogen atoms, whereas the LUMO is delocalized, mainly distributed over the C–N bonds and located on the nitrogen atoms present at the boundary, in agreement with the literature <cit.>.
After functionalization of the 1-Pris QD, the type of spatial distribution of both the HOMO (localized) and the LUMO (delocalized) remains unaffected. In addition, the LUMO also gets distributed over the atoms of the attached functional groups, except in the case of the -CH_3 group. When we examine the HOMO and LUMO plots of the larger QDs, we find that the spatial distributions of both the HOMO and LUMO show behavior similar to that of the 1-X structures. We have also plotted the isosurfaces of the HOMO and LUMO levels using the HSE06 functional for the 1-X structures, as shown in Fig. S6 of the SI. The spatial distribution of these orbitals is similar to that obtained using the B3LYP functional.§.§ Optical Absorption Spectra In this section we present and discuss the optical absorption spectra of the edge-functionalized hg-C_3N_4 QDs, computed using the TD-DFT approach <cit.> under different conditions, and compare them to those of the corresponding pristine QDs. The absorption spectra are computed for 20 excited states. First, we calculated the absorption spectra using B3LYP with water as the solvent (B3LYP+water), as these QDs are found to be soluble in water <cit.>. The calculated spectra of all the pristine and functionalized structures using B3LYP+water are plotted in Fig. <ref>, while their computed optical gaps are presented in Table <ref>. In addition, the calculated UV-vis absorption spectra of only the pristine hg-C_3N_4 QDs are separately presented in Fig. S7 of the SI. First, we note that our spectra of the pristine QDs are in good agreement with the previously reported calculations of Zhai et al. <cit.>. In our calculations, the strongest absorption peaks in the case of the 1-Pris and 3-Pris structures are at 205 nm (6.03 eV) and 284 nm (4.36 eV), respectively, in agreement with their results <cit.>. As far as the size dependence of the spectra of the pristine QDs is concerned, we note that the absorption energy range gets extended with the increase in the size of the hg-C_3N_4 QDs. With the increase in the size of the QD from 1-Pris to 6-Pris, the optical gap (E_g^op) gets reduced from 5.10 eV to 2.44 eV, leading to a significant red shift also in the corresponding absorption energy ranges. As a result, the most prominent or strongest absorption peak red-shifts from 205 nm (6.03 eV) (1-Pris) to 455 nm (2.72 eV) (6-Pris). Thus, taking into account the calculated absorption spectra of all the pristine QDs (1-Pris to 6-Pris), their combined absorption range (200–550 nm, or 2.25–6.20 eV) covers most of the UV-vis region of the spectrum, and also lies within the range measured experimentally <cit.>. From the UV-vis absorption spectra of the edge-functionalized hg-C_3N_4 QDs along with their pristine counterparts (see Fig. <ref>), it is evident that chemical functionalization at an edge of a given QD alters both the location as well as the intensity of the most intense peak. Consequently, some of the functionalized structures undergo a red shift, while some others experience a blue shift in the location of the most intense peak compared to their pristine counterpart. In addition to this, a slight variation in the total number of peaks is also observed, clearly due to the emergence of new energy levels upon functionalization. However, the qualitative behavior of all the edge-functionalized hg-C_3N_4 QDs resembles quite well that of their pristine counterparts. In the case of the -COOH and -OH groups, our calculated absorption ranges lie within the ranges reported experimentally <cit.>.
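As a brief aside on how such plots are produced: TD-DFT yields a discrete set of excitation energies and oscillator strengths (here, 20 states per structure), which are then broadened into a continuous UV-vis curve. The short sketch below illustrates this standard Gaussian-broadening step; it is our illustration only, the stick data are made-up placeholders rather than results from this work, and the 0.3 eV width is a common but arbitrary choice.

```python
# Sketch of broadening discrete TD-DFT excitations (energy, oscillator
# strength) into a UV-vis absorption curve. The stick data are made-up
# placeholders; the 0.3 eV Gaussian width is a common but arbitrary choice.
import numpy as np

excitations_eV = np.array([6.03, 5.40, 4.80])  # hypothetical excitation energies
osc_strengths = np.array([0.45, 0.10, 0.02])   # hypothetical f values
sigma = 0.3                                    # broadening width (eV)

energy_grid = np.linspace(2.0, 7.0, 1000)      # roughly the 620-180 nm window
absorbance = np.zeros_like(energy_grid)
for e0, f in zip(excitations_eV, osc_strengths):
    absorbance += f * np.exp(-((energy_grid - e0) ** 2) / (2 * sigma ** 2))

wavelength_nm = 1239.84 / energy_grid          # eV -> nm conversion
peak = wavelength_nm[np.argmax(absorbance)]
print(f"most intense peak near {peak:.0f} nm")
```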
As mentioned above, the functionalization of all the considered hg-C_3N_4 QDs extends the absorption range covered, as compared to the pristine structures. For example, the absorption range of the 1-Pris structure (200–250 nm) gets extended to 200–300 nm when functionalized with the -OH group (Fig. <ref>(a)). Thus, by functionalizing the hg-C_3N_4 QDs in a controlled manner we can tune their absorption ranges to make them suitable for effective utilization of solar energy. Thus, appropriately functionalized hg-C_3N_4 QDs can be useful in solar cells. The optical absorption spectra were also simulated using the B3LYP functional, but this time without including any solvent (B3LYP+vacuum), i.e., in the gas phase. The resultant plots are illustrated in Fig. <ref>, from which it is observed that the qualitative nature of the overall spectra, along with the peak positions, is quite similar to that obtained under the B3LYP+water condition (Fig. <ref>). However, compared to the B3LYP+water-based spectra, a slight reduction in the absorbance is observed for the 1-X QDs, while it is reduced significantly for the other structures. Interestingly, we found that the most intense peak position corresponding to the 3-Pris QD (281.64 nm) using B3LYP+vacuum is relatively close to that reported in the literature <cit.>. In addition, the maximum absorbance peak position for the 5-Pris QD (325.42 nm) also matches quite well with the literature <cit.>. We have also calculated E_g^op corresponding to each of the edge-functionalized structures, as listed in Table <ref>. In the case of B3LYP+vacuum, the E_g^op values range from 2.34 eV to 5.05 eV, and compared to the earlier results (B3LYP+water), some changes are observed. However, the total absorption range covered by all the QDs (1-X to 6-X) in vacuum is consistent with that obtained using water as the solvent (200–550 nm). In Tables S2–S6 of the SI, we provide information related to the excited states of the considered QDs which contribute to the most intense peaks in the absorption spectra obtained using B3LYP+water. This information includes the peak locations (excitation energies), oscillator strengths (f), and the excited-state wave functions. In addition, Table S7 of the SI contains the same information for the first excited state of each structure. In the TD-DFT approach, every excited-state wave function is a linear combination of several configurations, each of which corresponds to a single excitation from an occupied orbital to a virtual one. The TD-DFT wave function of the first excited state of each QD (pristine or functionalized) is dominated by the configuration corresponding to the excitation of an electron from the HOMO to the LUMO, denoted as H→ L. However, f corresponding to the first excited state for all the pristine QDs is negligible, and even after functionalization it exhibits no significant increase, as shown in Table S7 of the SI. Therefore, the optical gap (Table <ref>) is larger than the excitation energies of the first excited states for all the QDs considered in this work. Next, we consider the optical excitations of the 4-X QDs, presented in Table S4 of the SI. It is observed that the functional groups containing the C=O bond and the -F group result in a red shift of the absorption peaks, whereas the -CH_3 and -OH groups result in a blue shift compared to the pristine QD. In agreement with our calculations, a red shift of the peaks is observed in the experiment performed on the QDs functionalized with the -COOH group <cit.>.
The excited state leading to the most intense peak of the 4-Pris QD is written as 16^1A, which signifies the 16^th TD-DFT singlet excited state. As is obvious from the table, the wave function of this excited state derives important contributions from the configurations H-6→ L+1, H-6→ L, and H-7→ L. The excited state corresponding to the most intense peak in the case of the 4-CH_3 QD is 18^1A, with the wave function composed predominantly of the configurations H-7→ L, H-8→ L, and H-5→ L. In a similar manner, the optical transitions corresponding to the most intense peaks are presented for each of the considered pristine and edge-functionalized hg-C_3N_4 QDs in those tables. In the case of the 6-X QDs (Table S6), edge-functionalization with all the considered groups results not only in an abrupt blue shift of the most intense peak position, but also in a shift of the covered absorption range from the visible to the UV region. In addition to analyzing the orbitals involved in the transitions corresponding to the various absorption peaks, we have also studied the spatial distributions of the electrons and holes in the first excited state, and in the one giving rise to the most intense peak (the excited state with the maximum f value). For this purpose, we have used the approach of Liu et al. <cit.>, in which the hole and electron spatial distributions are defined in terms of the corresponding densities ρ^hole(r)=∑_i→ a(w_i^a)^2 ϕ_i(r)ϕ_i(r)+∑_i→ a∑_j≠ i→ a w_i^a w_j^a ϕ_i(r)ϕ_j(r) and ρ^electron(r)=∑_i→ a(w_i^a)^2 ϕ_a(r)ϕ_a(r)+∑_i→ a∑_i→ b≠ a w_i^a w_i^b ϕ_a(r)ϕ_b(r), where i,j and a,b are the indices denoting the occupied (hole) and virtual (electron) MOs, respectively. The numbers w_i^a are obtained from the TD-DFT calculations, and represent the coefficients of the singly-excited configurations in which an electron is promoted from the occupied MO ϕ_i(r) to the virtual MO ϕ_a(r). We calculated the densities ρ^hole(r)/ρ^electron(r) using the Multiwfn software <cit.>, and the results are presented in Figs. S8–S12 of the SI for all the considered edge-functionalized hg-C_3N_4 QDs. After the study of the optical properties using the B3LYP functional, the UV-vis optical absorption spectra were also computed using the range-separated hybrid functional HSE06. These calculations were performed to explore the influence of: (a) a modern range-separated hybrid functional, and (b) a crystalline environment. The crystalline form is considered because g-C_3N_4 QDs also exist in crystalline structures <cit.>. Therefore, during the simulation of the optical absorption spectra using the HSE06 functional, a dielectric medium relevant to crystalline hg-C_3N_4 QDs is taken into consideration <cit.>. For simulating the dielectric medium within the IEFPCM model, the dielectric constant is taken to be 7.5, which is the value corresponding to crystalline carbon nitrides <cit.>. The UV-vis optical absorption spectra computed in the crystalline environment using the HSE06 functional are presented in Fig. <ref>. It is observed that the qualitative behavior of these spectra is similar to that of the spectra simulated using B3LYP+water (Fig. <ref>). Except for the 5-X QDs, slight variations in the maximum absorbance values are observed. In the case of the 1-X structures, chemical functionalization results in a red shift of the peaks corresponding to the maximum absorbance, compared to their pristine counterpart. However, the opposite trend, i.e., a blue shift, is noticed after functionalization of the 6-Pris QD.
Chemical functionalization of the 5-Pris QD also leads to a blue shift of the peaks, except for the 5-OH structure, for which a red shift is observed. In the case of the 3-X and 4-X QDs, except for -CH_3, all the other functional groups lead to a red shift of the most intense peak. Furthermore, the complete absorption range covered (200–550 nm) is again the same as that obtained using the B3LYP functional, both with water and in vacuum. Next, the E_g^op values are also calculated for the HSE06+crystal condition, and the values are reported in Table <ref>. It is to be noted that for most of the structures (E_g^op)_HSE06+crystal>(E_g^op)_B3LYP+water, while the opposite result is obtained for some of the structures. After the calculation of the optical properties under three different conditions, we compared our obtained E_g^op values with some experimental results. Two different studies reported that the peaks in the photoluminescence spectra of hg-C_3N_4 QDs lie at 367 nm (3.39 eV) <cit.> and 467 nm (2.65 eV) <cit.>. The size distribution of their hg-C_3N_4 QDs is within the range of 2–4 nm. Compared to these experimental findings, the optical gaps obtained in our work for the largest (2.76 nm) pristine quantum dot (6-Pris) are 2.44 eV, 2.34 eV, and 2.47 eV under the B3LYP+water, B3LYP+vacuum, and HSE06+crystal conditions, respectively. In brief, the triangular-shaped hg-C_3N_4 QD structures are designed using a bottom-up approach. Starting from a single heptazine unit, we combined more such units at the edges (by replacing one of the edge hydrogen atoms) so as to form larger triangular structures. Our calculations of the optical properties of the considered triangular-shaped hg-C_3N_4 QDs suggest them to be size dependent. The UV-vis absorption spectra get red-shifted with the increase in the size of the QDs, due to the decrease in the optical gap. Chemical functionalization of the hg-C_3N_4 QDs results in shifts of the most intense peaks in the absorption spectra compared to their pristine counterparts. Some of the functionalized structures undergo a red shift, while some others experience a blue shift. In addition, the functionalization of all the considered hg-C_3N_4 QDs extends the absorption range covered by their corresponding pristine structures. Therefore, edge-functionalization is an effective approach to enhance the photophysical properties of hg-C_3N_4 QDs by tuning their absorption ranges to make them more suitable for the utilization of solar energy. The significant reduction in the band gap with increasing size and the better optical response also make them potential candidates for photocatalytic applications.§ CONCLUSION To summarize, in this work we have presented an exhaustive first-principles DFT-based study of the electronic, vibrational, and optical properties of pristine and functionalized quantum dots of the novel 2D material g-C_3N_4. Triangular structures of increasing sizes derived from heptazine were considered, and their geometries were optimized, followed by a check of their dynamic stability by performing a detailed vibrational frequency analysis. Additionally, the Raman spectrum of each considered structure was computed and analyzed. Electronic properties, such as the HOMO-LUMO energy gap and the charge transfer, were also studied, and it was observed that edge-functionalization is an effective way of tuning the electronic properties of hg-C_3N_4 QDs.
Further, the influence of functionalization was also studied by comparing both the partial and total densities of states of the functionalized and pristine structures. Using the TD-DFT methodology, the UV-vis absorption spectra of all the structures were computed and analyzed in detail using two different hybrid functionals under different conditions. We found that most of the ultraviolet region is covered by the pristine structures themselves, with the absorption shifting to the visible region in the case of the 6-Pris QD due to the increase in size. Edge-functionalization further extended the absorption range covered by the corresponding pristine structures, suggesting that the edge-functionalized hg-C_3N_4 QDs will operate in a wide energy range and will be effective in enhancing the efficiency of solar cells. Moreover, their excellent optical properties make them potential candidates for other optoelectronic devices, such as light-emitting diodes operating in both the visible and ultraviolet ranges. Our results for the UV-vis spectra obtained in the case of the carboxylic and hydroxyl groups are consistent with those obtained experimentally. Consequently, we hope that this theoretical study of the absorption spectra of edge-functionalized hg-C_3N_4 QDs using other functional groups will guide future experimental endeavors. Also, our idea of using two different functionals and three different environmental parameters (vacuum, water, and a crystalline environment) to explore their optical properties will be helpful for various experimental studies. § CONFLICTS OF INTEREST There are no conflicts of interest to declare. One of the authors, K.D., acknowledges financial assistance from the Prime Minister's Research Fellowship (PMRF award ID-1302054). V.R. acknowledges support through the Institute Post-Doctoral Fellowship (IPDF) of the Indian Institute of Technology Bombay.
{ "authors": [ "Khushboo Dange", "Vaishali Roondhe", "Alok Shukla" ], "categories": [ "cond-mat.mtrl-sci", "physics.chem-ph" ], "primary_category": "cond-mat.mtrl-sci", "published": "20231226102922", "title": "Tuning the electronic and optical properties of hg-C$_3$N$_4$, quantum dots with edge-functionalization: A computational perspective" }
SymmPI: Predictive Inference with Group Symmetries
==================================================
Quantifying the uncertainty of predictions is a core problem in modern statistics. Methods for predictive inference have been developed under a variety of assumptions, often—for instance, in standard conformal prediction—relying on the invariance of the distribution of the data under special groups of transformations such as permutation groups. Moreover, many existing methods for predictive inference aim to predict unobserved outcomes in sequences of feature-outcome observations. Meanwhile, there is interest in predictive inference under more general observation models (e.g., for partially observed features) and for data satisfying more general distributional symmetries (e.g., rotationally invariant or coordinate-independent observations in physics). Here we propose SymmPI, a methodology for predictive inference when data distributions have general group symmetries in arbitrary observation models. Our methods leverage the novel notion of distributionally equivariant transformations, which process the data while preserving their distributional invariances. We show that SymmPI has valid coverage under distributional invariance and characterize its performance under distribution shift, recovering recent results as special cases. We apply SymmPI to predict unobserved values associated to vertices in a network, where the distribution is unchanged under relabelings that keep the network structure unchanged. In several simulations in a two-layer hierarchical model, and in an empirical data analysis example, SymmPI performs favorably compared to existing methods. § INTRODUCTION Prediction is one of the most important problems in modern statistical learning. Since unobserved data cannot always be predicted with certainty, quantifying the uncertainty of predictions is a crucial statistical problem, studied in the areas of predictive inference and conformal prediction <cit.>. Numerous predictive inference methods have been developed under both parametric and nonparametric conditions <cit.>; see the related work section for more examples. Among these, conformal prediction (or inference) has been gaining increasing attention recently because it can lead to prediction sets with finite-sample coverage guarantees under reasonable conditions on the data, such as the exchangeability of the datapoints. Moreover, this exchangeability condition is preserved under natural permutation-equivariant maps <cit.>. This implies that residuals constructed from statistical learning methods that are invariant to permutations of the data—such as M-estimators—remain exchangeable, and can be used for conformal inference. Conformal prediction has been applied and extended to a wide range of statistical machine learning problems, including non-parametric density estimation and regression <cit.>, quantile regression <cit.>, survival analysis <cit.>, etc. At the same time, predictive inference methods have been developed under assumptions different from exchangeable, independent and identically distributed data, including for datapoints Z_n in sequential observation models called online compression models, leading to data Z_1,…, Z_n-1∈𝒵_0 for some space 𝒵_0, and for non-sequential models called one-off structures <cit.>. These are closely related to the classical statistical notion of conditional ancillarity <cit.>.
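As a concrete reference point for the exchangeability-based methods just discussed, the following is a minimal split-conformal sketch for regression. This is our generic illustration of the textbook method, not code from this paper; the linear fit is a placeholder for any predictor trained on the first split.

```python
# Minimal split-conformal sketch for regression, illustrating the
# exchangeability-based construction discussed above. Generic textbook
# method; the linear fit is a placeholder for any trained predictor.
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 200, 0.1
X = rng.uniform(-2, 2, size=n)
Y = 1.5 * X + rng.normal(0, 0.5, size=n)

# Split: fit on one half, calibrate on the other.
fit, cal = np.arange(n // 2), np.arange(n // 2, n)
beta = np.polyfit(X[fit], Y[fit], deg=1)
resid = np.abs(Y[cal] - np.polyval(beta, X[cal]))

# Conformal quantile: the ceil((n_cal + 1)(1 - alpha))-th smallest
# calibration residual (k > n_cal would give an infinite interval).
k = int(np.ceil((len(cal) + 1) * (1 - alpha)))
q = np.sort(resid)[min(k, len(cal)) - 1]

# Prediction interval for a new feature x_new; covers Y_new with
# probability >= 1 - alpha when the datapoints are exchangeable.
x_new = 1.0
pred = np.polyval(beta, x_new)
print(f"interval: [{pred - q:.2f}, {pred + q:.2f}]")
```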
Methods have also been developed under more concrete assumptions such as hierarchical exchangeability <cit.>, exchangeable network data <cit.>, and invariance under a finite subgroup of the permutations of the datapoints <cit.>. However, at the moment there are no predictive inference methods (1) for arbitrary unobserved functions of arbitrary data Z (e.g., a network, a function, etc.) whose distributional symmetries are characterized by an arbitrary—possibly infinite and continuous—group (e.g., a rotation group that arises for coordinate-independent data); and (2) that enable processing the data in flexible ways that keep the distributional symmetries, similar to what is possible in the special case of conformal prediction. In this paper, we develop such methods. More specifically:* We consider datasets whose distributions are invariant under general—compact Hausdorff topological—groups. We argue that invariance under such groups is of broad interest, and includes in particular invariance under all finite groups (e.g., exchangeability, hierarchical exchangeability, cyclic shifts). It also includes invariance under continuous groups such as rotations (and combinations such as rotations and translations), which are of broad interest in the physical sciences. For example, many quantities are coordinate-independent and thus rotation-invariant in physics <cit.>. Continuous (e.g., rotational) invariances are also common for image data. The datasets we consider are not restricted in any other way, and in particular we are not limited to sequential observations Z_1,…, Z_n. * We introduce the key notion of distributional equivariance of transformations of the data, and show that it is enough to preserve distributional invariance. We explain how this allows us to process the data to extract meaningful features that enable constructing accurate prediction sets, and—for instance—adapting to data heterogeneity in a two-layer hierarchical model example. We also allow arbitrary group actions on the input and output spaces, not limited, e.g., to permutation actions. We allow the observed component of the data to be determined by an arbitrary function that we call the observation function. For instance, this can include any part of the features in a supervised learning setting. In particular, we are not limited to predicting an outcome Y_n after observing feature-label pairs (X_1,Y_1), …, (X_n-1,Y_n-1), and a feature X_n. * We propose SymmPI, a method for predictive inference for data with distributional symmetries in the above setting. We show that SymmPI has coverage greater than or equal to the nominal level, and not much more than that, under distributional invariance. We bound the over-coverage in terms of a group-theoretic quantity (the number of orbits of the action of the group). We further bound the impact of distribution shift, i.e., of the lack of distributional invariance, on the coverage. Finally, we introduce a non-symmetric version of SymmPI for the distribution-shift case where the processing algorithm is not distributionally equivariant, and provide associated coverage guarantees. In the special case of conformal prediction, we recover recent results of <cit.>. * As an illustration of SymmPI, we study the example of prediction sets on networks, where random variables associated with a network are assumed to have a distribution invariant under any transformation that keeps the network structure unchanged (i.e., under the automorphism group of the graph).
We study in detail the example of hierarchical two-layer models with several sub-populations <cit.> (also known as meta-learning with several tasks <cit.>). We design a data processing architecture based on a fixed message-passing graph neural network. We show that SymmPI with this architecture adapts to heterogeneity over sub-populations or tasks, and performs favorably compared to prior methods, including standard conformal prediction and the algorithm from <cit.>. Our paper is structured as follows: In <Ref>, we introduce preliminaries from group theory used in our work, and the notions of distributional equivariance and invariance. In <Ref>, we provide a detailed review of previous research relevant to our study. In <Ref>, we introduce our novel approach, referred to as SymmPI, and discuss its underlying theoretical principles and guarantees. Additionally, in <Ref>, we illustrate the practical application of SymmPI through a two-layer hierarchical model and substantiate its effectiveness through numerical experiments. A software implementation of the methods used in this paper, along with the code necessary to reproduce our numerical results, is available at <https://github.com/MaxineYu/Codes_SymmPI>.Notation. For a positive integer m≥ 1, the m-dimensional all-ones vector is denoted as 1_m = (1,1,…,1)^⊤∈ℝ^m and the all-zeros vector as 0_m = (0,…,0)^⊤∈ℝ^m. We denote [m]:={1,2,…,m}, and for j∈ [m], the j-th standard basis vector by e_j = (0,…, 1, …, 0)^⊤, where only the j-th entry equals unity, and all other entries equal zero. For two random objects X,Y, we denote by X=_d Y that they have the same distribution. For a probability distribution Γ and a random variable X∼Γ, we may write the probability that X belongs to a measurable set A as P(X∈ A), P_X(A), P_Γ(X∈ A), P_X∼Γ(X∈ A), Γ(X∈ A), or Γ(A). For a cumulative distribution function (c.d.f.) F on ℝ, and α∈ [0,1], the (1-α)-th population quantile is q_1-α(F) = F^-1(1-α) = inf{x : F(x) ≥ 1-α}, with q_1-α(F) = ∞ if the set is empty. The (1-α)-quantile of the random variable X, for α∈[0,1], is Q_1-α(X). For c∈ℝ^k, let δ_c be the point mass at c. For a vector v∈ℝ^m, Q_1-α(v)=Q_1-α(v_1,…,v_m) denotes Q_1-α(∑_i=1^m δ_v_i/m). § PRELIMINARIES We introduce our predictive inference method based on the notions of distributional equivariance and invariance. §.§ Review of Group Theoretic Background First we provide a self-contained review of some basic material from group theory that is required in our work. We refer to e.g., <cit.> for additional details. Readers may skip ahead to <Ref> and refer back to this section as needed. A group 𝒢 is a set endowed with a binary operation “·” which[The sign “·” is dropped for brevity when no confusion arises, and we write, for g,g'∈𝒢, g g' = g· g'.] is associative, in the sense that for all g,g',g”∈𝒢, (g· g') · g” = g· (g' · g”). Further, a group has an identity element (or unit, or neutral element) denoted as 1_𝒢 or e_𝒢, such that for all g ∈𝒢, e_𝒢 g = g e_𝒢 = g. The subscript of the identity is dropped if no confusion can arise. Finally, each group element g has an inverse g^-1 such that g· g^-1 = 1_𝒢. A key example is the symmetric group S_n of permutations of n≥ 1 elements, S_n={π:[n]→ [n] | π a permutation}, where the multiplication π·π' corresponds to the composition π∘π' of the permutations. Moreover, the identity element 1_S_n is the identity map with 1_S_n(x) = x for all x∈[n], and the group inverse of any permutation π is its functional inverse π^-1.
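As a quick concrete illustration of these definitions (our toy example, not code from the paper): permutations in S_n can be represented as index arrays, with composition, inverse, and the coordinate-permuting action ρ(g)z = (z_g^-1(1),…,z_g^-1(n))^⊤ implemented in a few lines.

```python
# Toy illustration of S_n as index arrays: composition, inverse, and the
# coordinate-permuting action rho(g) z = (z_{g^-1(1)}, ..., z_{g^-1(n)}).
import numpy as np

def compose(g, h):            # (g . h)(x) = g(h(x))
    return g[h]

def inverse(g):
    inv = np.empty_like(g)
    inv[g] = np.arange(len(g))
    return inv

def act(g, z):                # rho(g) z, whose i-th entry is z_{g^-1(i)}
    return z[inverse(g)]

g = np.array([1, 2, 0])       # the cycle 0 -> 1 -> 2 -> 0
z = np.array([10.0, 20.0, 30.0])
assert np.all(compose(g, inverse(g)) == np.arange(3))          # g g^-1 = e
assert np.allclose(act(compose(g, g), z), act(g, act(g, z)))   # action property
print(act(g, z))              # [30. 10. 20.]
```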
Other important groups are the group O(n) of orthogonal rotations and reflections of ℝ^n and the special orthogonal group SO(n) of rotations of ℝ^n. For a group 𝒢, the map ρ: 𝒢×𝒵→𝒵 is an action of 𝒢 on 𝒵 if, for all g,g'∈𝒢 and z∈𝒵, ρ(gg',z)=ρ(g,ρ(g',z)); and if, for all z∈𝒵, ρ(e,z)=z. We denote ρ(g,z):=ρ(g)z for both non-linear and linear actions ρ. This notation takes special meaning when ρ acts linearly, in which case ρ is called a representation and we think of ρ(g):𝒵→𝒵 as a linear map.[Permutation action] For any space 𝒵_0, the symmetric group S_n acts on 𝒵_0^n by the permutation action ρ, which permutes the coordinates of the input, such that for all g∈ S_n and z∈𝒵_0^n, g · z:=ρ(g) z:= (z_g^-1(1),…,z_g^-1(n))^⊤. For a general group, the orbit of z under ρ is the set O_z= {ρ(g)z, g∈𝒢}, and includes the subset of 𝒵 that can be reached by the action of 𝒢 on z. For instance, the orbit of (1,2)^⊤ under S_2 is {(1,2)^⊤,(2,1)^⊤}, while that of (1,1)^⊤ is {(1,1)^⊤}. Certain groups are also topological spaces <cit.>, with associated open sets. In that case, one can construct the Borel sigma-algebra generated by the open sets. For certain groups—specifically, for compact Hausdorff topological groups—there is a “uniform” (Haar) probability measure U over the group endowed with the Borel sigma-algebra <cit.>. For a finite group such as S_n, this is the discrete uniform measure. In general, the Haar probability measure satisfies that for any g∈𝒢, and G∼ U, we have gG ∼ U. We will only consider groups that have a Haar probability measure. §.§ Distributional Equivariance and Invariance We consider a dataset Z, belonging to a measurable space[All spaces, sets, and functions will be measurable with respect to appropriate sigma-algebras, which will be kept implicit for simplicity.] 𝒵, such as a Euclidean space. This will also be referred to as the complete dataset, because we observe only part of Z, as explained later in <Ref>. This dataset is completely general: as special cases, it can represent labeled observations in a supervised learning setting, i.e., Z = ((X_1,Y_1), …, (X_n,Y_n))^⊤, or unlabeled observations in unsupervised learning, i.e., Z = (X_1, …, X_n)^⊤. The complete data Z has an unknown distribution P belonging to a set 𝒫 of probability distributions. Consider a measurable map f:𝒵→𝒲 for some measurable space 𝒲, which we can think of as a transformation of the data. This transformation can either be designed by hand, or learned in appropriate ways. For instance, in a supervised learning setting where Z = ((X_1,Y_1), …, (X_n,Y_n))^⊤, and for a predictor μ̂ learned based on Z, we may have f(Z) = (|Y_1-μ̂(X_1)|, …, |Y_n-μ̂(X_n)|)^⊤. Further examples and discussion will be provided in our illustrations in <Ref>. We consider a known group 𝒢 that acts on the complete data space 𝒵 by the action ρ of 𝒢, and on the transformed data space 𝒲 by the action ρ̃ of 𝒢. When Z = ((X_1,Y_1), …, (X_n,Y_n))^⊤, these can be the standard permutation action of the symmetric group S_n from Example <ref>, such that ρ(g) Z = ((X_g^-1(1),Y_g^-1(1)), …, (X_g^-1(n),Y_g^-1(n)))^⊤, and ρ̃(g) f(Z) = (|Y_g^-1(1)-μ̂(X_g^-1(1))|, …, |Y_g^-1(n)-μ̂(X_g^-1(n))|)^⊤. A key property of the map f that we will use to construct prediction regions is that f respects the symmetry of the group, in a distributional sense. This is formalized in our definition of distributional equivariance given below. For two random objects X,Y, recall that we denote by X=_d Y that they have the same distribution.
We say that the map f:𝒵→𝒲 is 𝒢-distributionally equivariant (with respect to the actions ρ, ρ̃ on 𝒵, 𝒲, respectively, and over the class 𝒫 of probabilities), when for all P∈𝒫, for Z∼ P and for an independently drawn group element G∼ U from the uniform probability distribution U over 𝒢, we have the equality in distribution f(ρ(G)Z)=_d ρ̃(G)f(Z). This means that the distribution of a randomly chosen action of 𝒢 on Z, transformed by f, is equal to the distribution found by first transforming Z by f, and then randomly acting on it by 𝒢. In a distributional sense, the random action of 𝒢 and the deterministic transform f “commute”. This definition generalizes the classical notion of deterministic 𝒢-equivariance, which requires that for all z∈𝒵 and all g∈𝒢, f(ρ(g)z)=ρ̃(g)f(z). Deterministic equivariance is widely studied in the mathematical area of representation theory <cit.>. Distributional equivariance only requires the equality of the distributions of f(ρ(G)Z) and ρ̃(G)f(Z) for random Z,G, whereas deterministic equivariance requires (<ref>) to hold for all z∈𝒵 and all g∈𝒢. Deterministic equivariance clearly implies the distributional version. We characterize these conditions in <Ref>, showing that distributional equivariance is a strictly more general condition than the deterministic version. Moreover, in Figure <ref>, we use a toy example to illustrate the difference between distributionally and deterministically equivariant maps. For some positive integer M, we consider the action of the group of cyclic shifts z↦ z+a modulo M on itself. We show in <Ref> that deterministic equivariance requires affine maps z↦ z+a modulo M, for any a, while distributional equivariance is satisfied by all maps. A key example of distributional equivariance is distributional invariance, namely f(ρ(G)Z) =_d f(Z). This condition states that after applying the map f, the distributions of the original data Z and of the data Z acted upon by the group are equal. This is a special case of Definition <ref> for the identity output representation, ρ̃(g) = 1 for all g∈𝒢. For the identity map f, taking any g' ∈𝒢, we can deduce from it that ρ(g')Z =_d ρ(g')ρ(G)Z = ρ(g'G)Z =_d ρ(G)Z =_d Z. Hence for the identity map f, distributional equivariance implies that ρ(g')Z=_d Z for all g' ∈𝒢. This latter condition has been widely studied; for instance in analyzing randomization tests <cit.> and data augmentation <cit.>. We analyze this condition further in <Ref>. Consider a fixed z∈𝒵 and let O_z = {ρ(g)z: g∈𝒢} be the orbit of z under 𝒢. The distribution of ρ(G)z when G∼ U can be viewed as a uniform distribution over the orbit O_z, with the sigma-algebra generated by the intersection of O_z with the sigma-algebra over 𝒵. Since this distribution is the same regardless of the distribution P of Z, the data Z is conditionally ancillary given the orbits, see also <cit.>. This shows that distributional invariance, as in ρ(G)Z =_d Z, is a form of conditional ancillarity. Conditional ancillarity is one of the most general conditions under which finite-sample valid predictive inference methods have been designed (see <Ref>). §.§.§ Examples We give a few examples of distributional equivariance and invariance. Since we will later consider only part of Z as observed, we will here let Z contain n+1 observations, where the (n+1)st will later not be fully observed. * Exchangeable data. Take 𝒵 = 𝒵_0^n+1, for some space 𝒵_0, and the group 𝒢 as the permutation group S_n+1, acting on 𝒵 by permuting the coordinates as ρ(g)Z :=gZ = (Z_g^-1(1), …,Z_g^-1(n+1))^⊤ for Z=(Z_1,…,Z_n+1)^⊤.
Then, for the identity map f:𝒵→𝒵 with f(z)=z for all z∈𝒵, the distributional invariance condition reduces to the vector Z having exchangeable components. * Network-structured data. We take the data Z = (Z_1,…, Z_n+1)^⊤∈𝒵_0^n+1 as before, but we only assume it has a limited set of symmetries, associated to a network or graph. Specifically, we consider an undirected—possibly weighted—graph with vertex set [n+1] and adjacency matrix A ∈ [0,∞)^(n+1)×(n+1), a symmetric (n+1)× (n+1) matrix with non-negative entries. For each i∈[n+1], we associate the random variable Z_i to the i-th vertex of the graph, and we assume that the distribution of the random vector Z is unchanged after relabeling the vertices subject to keeping its structure—as captured by A—unchanged. Then we have the following:* Distributional Invariance: Consider the graph's automorphism group 𝒢 = Aut(A) ⊂ S_n+1, whose elements are permutations—or, re-labelings—of the vertices [n+1] leaving the graph structure unchanged. Recall that the elements g∈𝒢 of the automorphism group are permutations g such that, when viewed as linear maps ℝ^n+1→ℝ^n+1, we have g A g^⊤ = A. Let ρ be the same action as for exchangeable data, permuting the coordinates of Z. Then, under the distributional invariance gZ =_d Z, for all g∈𝒢, the distribution of Z is unchanged for all re-labelings keeping the graph structure identical. For examples and discussion, see Section <ref>. * Distributional equivariance: For some space 𝒲, consider f:𝒵→𝒲, such that for some action ρ̃ of 𝒢, f is distributionally equivariant. Then, based on Proposition <ref> in the Appendix, for all w in the image of f, we must have the following equality of the sizes of sets for all g∈𝒢: |f^-1(w)|=|f^-1(ρ̃(g)w)|, where f^-1(c) denotes the preimage of the element c∈𝒲 under f. This condition states that the number of elements mapping to any specific w is the same as the number mapping to any other element ρ̃(g)w in the orbit of w. See <Ref> for an illustration where the graph is a cycle. In contrast, deterministic equivariance requires that for all z and g ∈ Aut(A), f(g z) = ρ̃(g) f(z). In machine learning, many graph neural net (GNN) architectures satisfying deterministic equivariance have been developed. A prominent example are message-passing graph neural networks (MPGNNs), see e.g., <cit.>. Here, for some depth L, we define layers z^0 := z, and z^1, …, z^L sequentially. For any ℓ≥0 and any i∈[n+1], the i-th coordinate of z^ℓ+1 is defined by summing the values of a function λ_1 over the neighborhood N(i) of node i in the adjacency matrix of the initial graph, and applying another function λ_0, as: z_i^ℓ+1=λ_0(z_i^ℓ,∑_j∈ N(i)λ_1(z_i^ℓ,z_j^ℓ)):=F_ℓ+1(z^ℓ)_i. The message-passing neural network is MPGNN_L(z):=F_L∘ F_L-1∘…∘ F_1(z) for all z∈𝒵. It is well known that any MPGNN is deterministically 𝒢-equivariant for ρ̃=ρ being the permutation action, namely MPGNN_L(g· z)= g·MPGNN_L(z), and hence also distributionally 𝒢-equivariant. * Coordinate-independent data. Consider a dataset Z_1, …, Z_n+1∈ℝ^p, for some positive integer p, of observations that are exchangeable and have a jointly rotation-invariant distribution. Specifically, (Z_1^⊤, …, Z_n+1^⊤)^⊤ =_d (Z_π^-1(1)^⊤, …, Z_π^-1(n+1)^⊤)^⊤ for any permutation π∈ S_n+1, and moreover (Z_1^⊤, …, Z_n+1^⊤)^⊤ =_d ((OZ_1)^⊤, …, (OZ_n+1)^⊤)^⊤ for all orthogonal matrices O ∈ O(p). This can occur when the observations refer to coordinate-independent quantities that are made in a particular coordinate system.
Many physical quantities are coordinate-independent. In fact, many of the fundamental laws of physics can be derived from the principle that those laws are independent of coordinate systems, see e.g., <cit.>. For instance, the Lorentz group contains transformations between frames of reference that respect the postulates of Einstein's special relativity. To give a simpler and concrete example, detailed in Example <ref> later, consider two-dimensional observations of celestial objects (e.g., coordinates of asteroids). The system of coordinates used to represent the data can be centered at the Earth, but the rotation of the system is arbitrary. If we are interested in predicting the position of the (n+1)st object based on the positions of the first n, leveraging the inherent rotational invariance may increase precision. We emphasize that these examples include continuous groups, which are qualitatively different from discrete groups; and in our view include examples that are far beyond the reach of current conformal prediction-type methodology.

§ RELATED WORKS
There is a great deal of related work, and we can only review the most closely related contributions. The idea of prediction sets dates back at least to the pioneering works of <cit.>, <cit.>, <cit.>, and <cit.>. More recently, conformal prediction has emerged as a prominent methodology for constructing prediction sets <cit.>. Predictive inference methods <cit.> have been developed under various assumptions <cit.>. There are many works on predictive inference going beyond exchangeability. Some of these involve invariance under specific permutation groups <cit.>, and some are designed to work under various forms of distribution shift <cit.>. Online compression models <cit.> are a weaker condition than exchangeability, and enable a generalization of conformal prediction. In online compression models, a sequence of summaries σ_1,…,σ_n,… is formed from datapoints Z_1,…,Z_n,…, where for some space 𝒵_0 and for all n, Z_n∈𝒵_0. It is assumed that for all n, the conditional distribution of (σ_n-1, Z_n) given σ_n is known. A one-off structure is the special case of this in a non-sequential setting, and is closely related to the statistical concept of conditional ancillarity. Compared to this, our work focuses on the special case of distributional invariance under a group, for which the summary statistics are the orbits of the group action. As we discuss below and in <Ref>, distributional invariance has the crucial advantage that there is a broad class of maps—distributionally equivariant ones, including equivariant neural nets—that preserve it; this enables processing the data in a flexible way. This does not generally hold under conditional ancillarity. Moreover, our work allows a more general observation model (described in <Ref>), not assuming that there are n datapoints from the same space, nor that the first n-1 are observed. Another contribution is that we give bounds on the coverage of our methods under distribution shift, and develop flexible non-symmetric versions of our method. <cit.> develop predictive inference methods assuming invariance under subgroups of permutation groups. Compared to this, our work handles the broader class of compact topological groups, which are both technically more challenging and of interest in a broader class of applications.
Moreover, we have a more general observation model, focus on the notion of distributional equivariance to enable flexible data processing, and provide methods and guarantees under distribution shift. Joint coverage regions <cit.> are a methodology aiming to unify prediction sets and confidence regions. They have been developed for general observation models under general conditional ancillarity. Our focus here differs, as we introduce the notion of distributional equivariance to enable flexible data processing, as well as methods and guarantees applicable to distribution shift.

In a different line of work, invariance and equivariance have been widely studied in other areas of statistics and machine learning. In statistics, this dates back at least to permutation tests <cit.>. Other key early work with general groups includes <cit.>. For more general discussions of invariance in statistics see <cit.>. In machine learning, work with invariances dates back at least to <cit.> with the development of convolutional neural nets (CNNs), which build translation-equivariant layers via convolutions. These have been extended to discrete and continuous rotation invariance <cit.> and to more general Lie groups <cit.>. Alternative approaches include those based on invariant theory <cit.> and data augmentation <cit.>.

§ SYMMPI: PREDICTIVE INFERENCE WITH GROUP SYMMETRIES
§.§ Constructing Prediction Regions
Here we introduce our SymmPI method for predictive inference when the data has distributional symmetry or invariance. Our key principle in constructing prediction regions is to leverage the interactions between distributional invariance and equivariance. Specifically, if the full data satisfies the distributional invariance property Z =_d ρ(G)Z when G∼U, and if f is distributionally equivariant with respect to ρ, ρ̃, as per Definition <ref>, we have f(Z) =_d f(ρ(G)Z) =_d ρ̃(G)f(Z). Thus, f(Z) is also distributionally invariant, and so distributional invariance is preserved by distributionally equivariant maps. In the special case of permutation symmetry, and for the special case of deterministic equivariance, this simple and key observation has often been used in conformal prediction[For the special case of the symmetric group where 𝒢 = S_n, and for the permutation actions ρ, ρ̃, <cit.> have provided a sufficient condition for a transform f to preserve exchangeability; distributional equivariance is equivalent to their condition in this special case, see Section <ref>.] <cit.>. Here, we aim to vastly extend its reach, in order to be able to construct prediction sets for data with invariance under arbitrary compact Hausdorff topological groups, motivated by the examples described above.

A bit more generally, distributional equivariance is preserved by composition. Suppose that for some space 𝒵̄ and action ρ̄ on 𝒵̄, a map h:𝒵̃→𝒵̄ is distributionally equivariant with respect to input and output actions ρ̃, ρ̄. This implies that when G∼U, and for the random variable Z̃ = f(Z) over 𝒵̃, we have h(ρ̃(G)Z̃) =_d ρ̄(G)h(Z̃). Hence, we find h(f(ρ(G)Z)) =_d h(ρ̃(G)f(Z)) =_d ρ̄(G)h(f(Z)), and thus h∘f is 𝒢-distributionally equivariant. It follows that we can compose arbitrary 𝒢-distributionally equivariant maps and preserve distributional invariance.[This is the key reason for which we focus on distributional invariance, as opposed to other forms of conditional ancillarity, to construct prediction sets. For more general conditional ancillarity, this property need not hold, and this limits the types of data-processing maps f we can use; see <Ref>.]
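To make the composition principle concrete, here is a minimal Python sketch (all function and variable names are ours, chosen for illustration): it builds a simple deterministically permutation-equivariant map and checks empirically that it carries exchangeable data to exchangeable scores. This is an illustrative check under our stated assumptions, not a prescribed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # A permutation-equivariant map: each coordinate is transformed using
    # its own value and an order-invariant summary (the mean) of all of them.
    return np.abs(z - z.mean())

# (i) Deterministic equivariance: f(g.z) == g.f(z) for any permutation g.
z = rng.normal(size=6)
g = rng.permutation(6)
assert np.allclose(f(z[g]), f(z)[g])

# (ii) Consequently, if Z has exchangeable coordinates, so does f(Z):
# the distribution of a fixed coordinate of f(GZ), with G uniform over
# permutations, matches that of the same coordinate of f(Z).
samples_fZ = np.array([f(rng.standard_normal(6))[0] for _ in range(50_000)])
samples_fGZ = np.array([f(rng.permutation(rng.standard_normal(6)))[0]
                        for _ in range(50_000)])
print(np.quantile(samples_fZ, [0.25, 0.5, 0.75]))
print(np.quantile(samples_fGZ, [0.25, 0.5, 0.75]))   # approximately equal
```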
This property enables us to construct prediction sets based on processing the data in several equivariant steps, for instance via equivariant neural nets. We will argue that compositionality helps with expressivity, and will leverage this to design predictive inference methods that can adapt to heterogeneity, see Section <ref>. Thus, we let Z satisfy distributional invariance, and let f be distributionally equivariant. We do not observe z, but instead observe some function ø(z) of z, where ø:𝒵→𝒳 is an observation function for a space 𝒳. For instance, when z = (z_1, …, z_n, z_n+1)^⊤ consists of n+1 datapoints, in an unsupervised case, for any j∈[n] we can take the observation function ø(z) = (z_1, …, z_j)^⊤ to be the first j observations. In a supervised case where z = ((x_1,y_1), …, (x_n+1,y_n+1))^⊤, we can take for any j∈[n] the observation function ø(z) = ((x_1,y_1), …, (x_j,y_j), x_j+1, …, x_n+1)^⊤ to be the first j labeled observations and the remaining features.

We are interested in predicting the unobserved part of Z. Since the observed part does not necessarily uniquely determine the unobserved part, we aim to predict a set of possible values. We consider a map ψ:𝒵̃→ℝ, such that we want to include small values of ψ(f(Z)) in our prediction set. This map generalizes the standard idea of a non-conformity score from conformal prediction <cit.>. For instance, in a supervised learning setting where f(Z) = (|Y_1-μ̂(X_1)|, …, |Y_n+1-μ̂(X_n+1)|)^⊤ and μ̂ is a predictor, we can take ψ(f(Z)) = |Y_n+1-μ̂(X_n+1)|, aiming to predict unobserved outcomes Y_n+1 that are close to the values μ̂(X_n+1) predicted via μ̂. If μ̂ is an accurate predictor and Y_n+1 is tightly centered around μ̂(X_n+1), this may lead to informative prediction sets.

Given some coverage target 1-α∈[0,1], intuitively, we may want to choose a fixed threshold—or, critical value—t such that we have the coverage bound P(ψ(f(Z)) ≤ t) ≥ 1-α, and then set {z: ψ(f(z)) ≤ t} as our prediction set. However, it is not generally clear how to find a fixed threshold t. Instead, we use the distributional equivariance of f(Z), which implies that for any function ψ:𝒵̃→ℝ, and any deterministically 𝒢-invariant map z̃↦t_z̃, for which t_ρ̃(g)z̃ = t_z̃ for all g∈𝒢 and z̃∈𝒵̃, P_Z(ψ(f(Z)) ≤ t_f(Z)) = P_G,Z(ψ(f(ρ(G)Z)) ≤ t_f(ρ(G)Z)) = P_G,Z(ψ(ρ̃(G)f(Z)) ≤ t_f(Z)). Motivated by this observation, for all z̃∈𝒵̃, we set t_z̃ as the 1-α-quantile of the random variable ψ(ρ̃(G)z̃), where G∼U: t_z̃ = Q_1-α(ψ(ρ̃(G)z̃), G∼U). Again, this generalizes the standard approach from conformal prediction, where the quantile is computed for the uniform distribution over the permutation group <cit.>. By definition, P_G(ψ(ρ̃(G)z̃) ≤ t_z̃) ≥ 1-α holds for any z̃. To take into account the observation function ø:𝒵→𝒳, we can simply intersect the prediction region with the set of valid observations {z: ø(z) = x}, defining the prediction set T^(x) = {z∈𝒵: ψ(f(z)) ≤ t_f(z), ø(z) = x}. This predictive inference method is applicable when the data has distributional invariance or symmetry, thus we call it SymmPI. See <Ref> for an illustration. This method predicts a set of plausible values for the full data z. However, we are of course interested in a prediction set for the unobserved component of z. Usually, we can write this unobserved component as some function m(z) of the data z, where m:𝒵→ℳ for some space ℳ; and moreover such that z is in a one-to-one correspondence with (ø(z), m(z)). In that case, T^ is equivalent to a prediction set for the unobserved component m(z) of z.
For instance, when z = (z_1, …, z_n, z_n+1)^⊤ consists of n+1 datapoints, and the observation function takes values ø(z) = (z_1, …, z_n)^⊤, then we can take m(z) = z_n+1.

§.§ Theoretical Properties
In this section we study the theoretical properties of our method.
§.§.§ Coverage Guarantee
We aim to control the coverage probability P(Z∈T^(ø(Z))), ensuring it is at least 1-α. In order to achieve exact coverage 1-α, it is well known that one may in general need to add a bit of randomization for discrete-valued data <cit.>. We now show how this idea can be generalized to our setting. For z̃∈𝒵̃, let F_z̃ be the cumulative distribution function (c.d.f.) of the random variable ψ(ρ̃(G)z̃), G∼U, and let F'_z̃ be the probability it places on individual points, i.e., for x∈ℝ, F'_z̃(x) = F_z̃(x) - F^-_z̃(x), where F^-_z̃(x) = lim_y→x, y<x F_z̃(y) ≥ 0. Let Δ_z̃ = 0 if F'_z̃(t_z̃) = 0, and otherwise let Δ_z̃∈(0,1] be Δ_z̃ = (1-α-F^-_z̃(t_z̃))/F'_z̃(t_z̃). Consider a random variable V∼Unif[0,1] independent of Z, and the randomized SymmPI prediction set T_r(x) = ({z: ψ(f(z)) < t_f(z)} ∪ {z: ψ(f(z)) = t_f(z), V < Δ_f(z)}) ∩ {z: ø(z) = x}. Clearly, T_r(x) ⊂ T^(x). Our first result, proved in <Ref>, shows that the randomized prediction set T_r has coverage exactly 1-α, and the deterministic prediction set T^ has coverage at least 1-α and at most a bit higher, depending on the "jumps" F'_z̃ in the distribution of ψ(ρ̃(G)z̃), G∼U; generalizing results from conformal prediction <cit.>.

For some group 𝒢 with a uniform probability measure U, let the full data Z∈𝒵 satisfy the distributional invariance property Z =_d ρ(G)Z when G∼U, for some action ρ of the group 𝒢 on 𝒵. Consider α∈[0,1], a space 𝒵̃, a 𝒢-distributionally equivariant function f:𝒵→𝒵̃ as per Definition <ref>, and a map ψ:𝒵̃→ℝ. Let the observed data be ø(Z), for an observation function ø:𝒵→𝒳 and some space 𝒳. Then the prediction region T^(ø(Z)) from (<ref>), and the randomized prediction region T_r(ø(Z)) from (<ref>), have valid coverage, lower bounded by 1-α, and also—with F' from Definition <ref>—upper bounded as 1-α = P(Z∈T_r(ø(Z))) ≤ P(Z∈T^(ø(Z))) ≤ 1-α + E[F'_f(Z)(t_f(Z))].

There are various conditions under which we can upper bound the slack E[F'_f(Z)(t_f(Z))] in the coverage error. For instance, if F'_z̃(x) ≤ τ for all x∈ℝ and z̃∈𝒵̃, then the coverage is at most 1-α+τ. To be more concrete, consider the set 𝒢_z̃ = {g∈𝒢: ψ(ρ̃(g)z̃) = ψ(z̃)} consisting of the group elements that fix ψ(z̃) under the action ρ̃. As we show, the size of this set controls the jumps in F_z̃, under the algebraic condition that 𝒢_z̃ is a subgroup of 𝒢. Recall that a set ℋ⊂𝒢 is a subgroup of 𝒢 if ℋ is also a group; this is denoted by ℋ≤𝒢. In particular, we will see in examples that often there is a set Ω⊂𝒵̃ such that P(f(Z)∈Ω) = 1 and such that for every z̃', z̃''∈Ω, 𝒢_z̃' = 𝒢_z̃''. Then for ℋ = {g: ψ(ρ̃(g)z̃) = ψ(z̃) for all z̃∈Ω}, we have 𝒢_z̃' = ℋ. It readily follows that 𝒢_z̃' = ℋ is a subgroup of 𝒢. Recalling that for a finite set A, we let |A| be the number of elements—cardinality—of A, we have the following result:

If 𝒢 is finite, and if for all z̃∈𝒵̃ the set 𝒢_z̃ = {g: ψ(ρ̃(g)z̃) = ψ(z̃)} is a subgroup of 𝒢, then P(Z∈T^(ø(Z))) ≤ 1-α + E|𝒢_f(Z)|/|𝒢|. In particular, if there is a set Ω⊂𝒵̃ such that P(f(Z)∈Ω) = 1 and such that for every z̃', z̃''∈Ω, 𝒢_z̃' = 𝒢_z̃'', then for ℋ = {g: ψ(ρ̃(g)z̃) = ψ(z̃) for all z̃∈Ω}, we have P(Z∈T^(ø(Z))) ≤ 1-α + |ℋ|/|𝒢|. See <Ref> for the proof. As we will see below, in many applications of interest, the sets 𝒢_z̃ are indeed subgroups of 𝒢 for all z̃, and often 𝒢_z̃ does not depend on z̃. In particular, in this case |𝒢|/|ℋ| is the number of cosets of the subgroup ℋ in 𝒢. Thus the above general result gives a group-theoretic characterization of the slack in the coverage error.
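Before turning to examples, here is a minimal Python sketch of the construction above (the function names, the Monte Carlo quantile rule, and the grid search are our own illustrative choices, not a reference implementation): candidate values for the unobserved component are kept whenever ψ(f(z)) does not exceed an approximation of t_f(z) obtained by sampling group elements.

```python
import numpy as np

rng = np.random.default_rng(1)

def symmpi_set(x_obs, candidates, f, psi, sample_group, alpha=0.1, n_mc=2000):
    """Monte Carlo sketch of the prediction set: a candidate c is kept if
    psi(f(z)) <= t_{f(z)}, with t_{f(z)} the (1-alpha)-quantile of psi applied
    to randomly transformed copies of f(z), G sampled uniformly from the group."""
    kept = []
    level = min(1.0, np.ceil((1 - alpha) * (n_mc + 1)) / n_mc)
    for c in candidates:
        z_tilde = f(np.append(x_obs, c))   # full data consistent with x_obs
        scores = [psi(g(z_tilde)) for g in sample_group(n_mc)]
        if psi(z_tilde) <= np.quantile(scores, level):
            kept.append(c)
    return kept

# Toy run: exchangeable data, G = S_{n+1}, f the identity, psi = |last entry|;
# this reduces to a standard two-sided, zero-centered conformal-type set.
n = 30
x_obs = rng.normal(size=n)
perms = lambda m: [(lambda zt, p=rng.permutation(n + 1): zt[p]) for _ in range(m)]
pred = symmpi_set(x_obs, np.linspace(-4, 4, 161), lambda z: z,
                  lambda zt: abs(zt[-1]), perms)
print(min(pred), max(pred))
```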
We now give examples of our framework.

[Conformal prediction] Continuing the example of exchangeable data from <Ref>, we can recover results from conformal prediction <cit.>, by taking some space 𝒵_0, the (n+1)-fold product 𝒵_0^n+1, 𝒵 = 𝒵_0^n+1, 𝒢 = S_n+1, and ρ as the permutation action. Further, we can let 𝒵̃ = ℝ^n+1, and let S_n+1 also act on 𝒵̃ by the permutation action. In an unsupervised case, we can take the observation function ø(z) = (z_1, …, z_n)^⊤, for all z. In a supervised case where z = ((x_1,y_1), …, (x_n+1,y_n+1))^⊤, we can take the observation function ø(z) = ((x_1,y_1), …, (x_n,y_n), x_n+1)^⊤, for all z. We set f:𝒵→𝒵̃ as any permutation-equivariant map with respect to the permutation actions. Here s(z_j) := s(z_j; z) := [f(z)]_j are the non-conformity scores. Considering the supervised case for concreteness, we can take ψ(z̃) = z̃_n+1 to be the last coordinate. Since z decomposes as z = (ø(z), y_n+1), a prediction set for z is equivalent to a prediction set for y_n+1. Clearly, T^ from (<ref>) reduces to s(Z_n+1) ≤ Q_β'(s(Z_1),…,s(Z_n)), where β' = ⌈(n+1)(1-α)⌉/n. This is identical to a standard conformal prediction set with non-conformity score s. If Z has exchangeable coordinates, Theorem <ref> recovers the classical conformal coverage lower bound P(s(Z_n+1) ≤ Q_1-α(s(Z_1),…,s(Z_n+1))) ≥ 1-α from <cit.>. Then, note that for g∈S_n+1, (ρ̃(g)z̃)_n+1 = z̃_g^-1(n+1). Hence, 𝒢_z̃ = {g: z̃_g^-1(n+1) = z̃_n+1}. If all coordinates of z̃ are distinct—which holds with probability one if f is injective and (Z_1,…,Z_n+1) has a continuous distribution—then 𝒢_z̃ = ℋ = {g: g(n+1) = n+1} is the stabilizer of the (n+1)st element, the subgroup of permutations fixing the last coordinate. In this case, |ℋ|/|𝒢| = n!/(n+1)! = 1/(n+1), and we recover the classical result that the over-coverage of conformal prediction is at most 1/(n+1) <cit.>.

[Coordinate-independent data] Continuing the example of coordinate-independent data from <Ref>, we can work in the same setting as above, except letting 𝒵_0 = ℝ^p, while the group is the direct product 𝒢 = S_n+1×Ø(p), acting on z = (z_1^⊤,…,z_n+1^⊤)^⊤∈(ℝ^p)^n+1 via the action ρ(π,O)z = ((Oz_π^-1(1))^⊤,…,(Oz_π^-1(n+1))^⊤)^⊤. For the non-conformity score s:𝒵_0→ℝ, we may aim to predict how close a new object could come to the trajectory of another celestial body of interest; say, the path of a rocket. For instance, consider the locus z_n+1,1 = 0 and suppose we aim to predict a region |z_n+1,1| ≥ C that contains the next observation at least 1-α of the time. Then we can take s(z_n+1) = -|z_n+1,1|, and the prediction set for the (n+1)st observation becomes T^(z_1:n) = {z_n+1: -|z_n+1,1| ≤ Q_1-α(-|(Oz_π^-1(n+1))_1|, O∼U(Ø(p)), π∼U(S_n+1))}. This can be further simplified, since for any v, (Ov)_1 =_d W^⊤v/‖W‖_2, where W∼N(0,I_p) and ‖W‖_2 is the Euclidean norm of W. For illustration, we present a two-dimensional toy example in Figure <ref> (right). In this case, the prediction set can be interpreted as "we are 95% sure that a new observation Z_n+1 will have |Z_n+1,1| at least this large". Such a region may be useful to determine the allowable range of motion of a rocket moving along the vertical axis—there is a 95% chance that the next celestial body lies outside of the vertical strip.

We also mention that a split, or split-data, version of SymmPI—inspired by inductive or split conformal prediction <cit.>—is a special case of our method. Let the full data be given by Z = (Z_tr, Z')^⊤, where Z_tr consists of training data, and Z' consists of calibration and test data. Suppose that for some group 𝒢_0, the distribution of Z' conditional on Z_tr is 𝒢_0-invariant.
Then, we can use the methodology described above, applied to Z' and 𝒢_0 instead of Z and 𝒢. This procedure has valid coverage even when the 𝒢_0-distributionally equivariant map f is not fixed as above, but is learned using Z_tr. The key advantage compared to full SymmPI is that we can fit f once on Z_tr and then use it as a fixed predictor on Z', which can improve computational efficiency. For f to be a useful predictor, it is beneficial if Z_tr and Z' have a similar structure. For instance, this could be the case if Z_tr also satisfies 𝒢_0-invariance. Finally, we mention that, while our results only require f to be distributionally equivariant, in practice there are often more known examples of deterministically equivariant functions, and so we will typically still take f to be deterministically equivariant. However, we believe that our theoretical contribution of introducing distributional equivariance is fundamental, because it reflects a broad condition under which distributional invariance is preserved under composition. Thus it is a crucial notion for predictive inference methods based on symmetry.

§.§.§ Extension: Distribution Shift
Next, we present an extension of our coverage result to the case of distribution shift. To present this result, we need to recall some additional notions. For a subgroup ℋ≤𝒢, the set gℋ = {gh: h∈ℋ} is called a (left) coset of ℋ in 𝒢. The set of cosets is denoted as 𝒢/ℋ := {gℋ: g∈𝒢}. Then 𝒢 is partitioned into cosets gℋ, and we obtain a set S of representatives of the cosets in 𝒢/ℋ by choosing an element of each coset. First, we allow for a distribution shift away from distributional invariance, i.e., Z ≠_d ρ(G)Z. We provide a general coverage bound for this scenario. For g∈𝒢, define the map ℓ_g:𝒵̃→ℝ such that for all z̃∈𝒵̃, ℓ_g(z̃) = ψ(ρ̃(g)z̃). Let ℱ = {ℓ_g, g∈𝒢} be the set of all maps ℓ_g, g∈𝒢. Now, ℋ = {g: ℓ_g = ℓ_e}, with e the identity element, is clearly a subgroup of 𝒢. Hence, 𝒢 is partitioned into cosets gℋ, corresponding to distinct values of ℓ_g. Let U(𝒢/ℋ) be the invariant probability measure over the cosets <cit.>; and identify 𝒢/ℋ with a measurably chosen set S of representatives. Let G'∼U(𝒢/ℋ), identified with a random variable over S. For instance, for a finite 𝒢, we can identify U(𝒢/ℋ) with the uniform distribution over a set of representatives {g_1, …, g_|𝒢/ℋ|} of the cosets 𝒢/ℋ. For any z̃∈𝒵̃, define ν(z̃) = ψ(z̃) - t_z̃ and, with TV denoting total variation distance, Δ = E_G'∼U(𝒢/ℋ) TV_Z(ν(f(Z)), ν(ρ̃(G')f(Z))). See <Ref> for the proof of the following result.

Under the conditions of Theorem <ref>, even if Z does not necessarily satisfy distributional invariance, we have, with Δ from (<ref>), that -Δ ≤ P(Z∈T^(ø(Z))) - (1-α) ≤ Δ + E[F'_f(Z)(t_f(Z))].

Theorem <ref> establishes that, even if Z ≠_d ρ(G)Z, we can derive coverage bounds similar to those found in Theorem <ref>, up to a margin of Δ as given in equation (<ref>). This gap reduces to zero when the distributional invariance property holds (i.e., Z =_d ρ(G)Z), in which case Δ = 0. The aforementioned result relies on symmetry in two ways. First, the quantile t_z̃ in equation (<ref>) is chosen for the uniform probability distribution over the group, or equivalently over representatives of the cosets. Second, the function f is required to be distributionally equivariant with respect to 𝒢. In <Ref>, we introduce a novel algorithm for scenarios where these two symmetry properties do not hold. We also provide theoretical coverage guarantees for this algorithm.
Notably, our framework recovers the results presented in Theorems 2 and 3 of <cit.> as a special case, in the context of studying conformal prediction with the group 𝒢 = S_n+1 and the function ψ with ψ(z) = z_n+1 for all z. For more details, we refer to <Ref>.

§.§ Computational Considerations
In this section, we discuss computational considerations for our prediction regions. Given f and ψ, we need to compute the set in (<ref>), i.e., ø^-1(x) ∩ {z: ψ(f(z)) ≤ t_f(z)}, where ø^-1(x) denotes the preimage of x under the map ø, for a given x. Often, the preimage of the observation map can be characterized in a convenient way; in many cases the observation map selects a subset of the coordinates of z, and so its preimage consists of the observations completed with all possible values of the missing coordinates. Moreover, we can write the second set of the above intersection as a preimage under f in the form f^-1(B), where B = {z̃∈𝒵̃: ψ(z̃) ≤ t_z̃}. A key step to compute B is to compute the quantiles t_z̃ over the randomness of G∼U, or—following the notation from Section <ref>—G'∼U(𝒢/ℋ). When the number of equivalence classes 𝒢/ℋ is not large, we can calculate the quantile by enumeration. However, if the number of equivalence classes is large, a practical approach is to sample from the equivalence classes to approximate the quantile. Specifically, we can define t̃_z̃ := Q_1-α(ψ(ρ̃(G_1)z̃), ψ(ρ̃(G_2)z̃), …, ψ(ρ̃(G_M)z̃)), where G_1,…,G_M are sampled independently from U(𝒢/ℋ). We then define the prediction set T̃^(x) := {z∈𝒵: ψ(f(z)) ≤ t̃_f(z), ø(z) = x}. This has the following coverage guarantee, with a proof deferred to <Ref>. Under the conditions of Theorem <ref>, we have P(Z∈T̃^(ø(Z))) ≥ 1-α.

For the remaining problems, i.e., computing B and finding its preimage under f, at the most general level, various approaches can be employed to generate suitable approximate solutions. At the highest level of generality, similar computational problems arise in standard conformal prediction, and no general computational approaches are known. Our setting is similar, and thus computational approaches must be designed on a case-by-case basis. In many cases of interest, including all those presented in this paper, the computational problem simplifies and can be solved conveniently. One approximate method involves systematically examining a grid of candidate values of z̃, and retaining those for which ψ(z̃) ≤ t_z̃. Further, if we can choose f to be an invertible function whose inverse is convenient to compute, then the preimage under f can be convenient to find. Otherwise, in general, one may need to search over a grid on 𝒵 (instead of 𝒵̃) to approximate f^-1(B).

§.§ Illustration Revisited: Graphs
In this section, we discuss an illustration of our methods for random variables whose symmetries are captured by a graph. We discuss properties of prediction sets, and finally use a simple tree example to present these concepts. We focus on a dataset denoted as Z = (Z_1,…,Z_n+1)^⊤∈𝒵_0^n+1, for some set 𝒵_0, typically ℝ^p for some positive integer p. The symmetries of Z are described by the automorphism group 𝒢 = Aut(A) of a graph, as described in <Ref>. If it is feasible to enumerate the automorphism group, then we can use the prediction set from (<ref>). However, in general, determining the automorphisms of a graph is a hard problem; see e.g., <cit.> for the related graph automorphism problem.
We discuss a coarsening approach to partially overcome this in Section <ref>.

§.§.§ Properties of Prediction Sets for Graphs, and a Tree-structured Graphical Model
In this section, we briefly discuss the properties of prediction sets for graphs. In the prediction set from (<ref>), we consider some space 𝒵_0, the (n+1)-fold product 𝒵_0^n+1, take 𝒵 = 𝒵̃ = 𝒵_0^n+1, f:𝒵→𝒵̃ as the identity, and ρ, ρ̃ to be permutation actions. We can take ψ such that ψ(z) = z_n+1 for all z, in which case we are predicting z_n+1 in a graph. Let 𝒢_n+1 be the stabilizer subgroup of n+1 in 𝒢, i.e., 𝒢_n+1 = {g∈𝒢: g(n+1) = n+1}. Let b = |𝒢/𝒢_n+1| be the number of equivalence classes of the quotient 𝒢/𝒢_n+1 and take a collection g_1, …, g_b∈𝒢 of representatives of the equivalence classes. By the orbit-stabilizer theorem <cit.>, B = {g_1(n+1), …, g_b(n+1)}⊂[n+1] is the orbit of n+1 under 𝒢 acting on [n+1]. Then, for z̃∈𝒵_0^n+1, the quantile t_z̃ equals Q_1-α({z̃_j, j∈B}). As a special case, when 𝒢 = S_n+1, the orbit of n+1 is all of [n+1]. Then the quantile reduces to the one used in standard conformal prediction. There are many other choices of ψ. Suppose for simplicity that 𝒵_0 = ℝ. Then, we can take for example ψ(z) = (e_n+1 - 1_n+1/(n+1))^⊤z for all z∈𝒵, which measures the difference between the unknown element z_n+1 and the grand mean. Alternatively, ψ(z) = (e_n+1 - 1_B/b)^⊤z for all z measures the difference between the unknown value and the average of its orbit.

Next, we use a simple tree-structured graphical model example to showcase the above properties; see <Ref>. The rooted tree Γ has a root with an associated random variable R. The root has K≥1 children, which are associated with random variables C_1,…,C_K at its first layer; these define K branches. Each of the nodes C_k, k∈[K] in the first layer has M≥1 children with associated random variables Z_1^(k),…,Z_M^(k) in the second layer. We assume that in the associated graph describing the symmetry, each node is connected precisely to its children. Then, we assume that the joint distribution of the random vector T = (R, C_1,…,C_K, Z_1^(1),…,Z_M^(K))^⊤∈𝒵 associated with this depth-two tree satisfies g·T =_d T, where g is any element of the automorphism group 𝒢, represented as a subgroup of S_n+1. We can consider the setting where Z_M^(K), the last node of the last branch, is unobserved, and suppose for simplicity that 𝒵_0 = ℝ. We can let ψ(z) = |e_1+K+K·M^⊤z|, with z∈𝒵. The orbit of e_1+K+K·M^⊤ is {e_1+K+i^⊤, i∈[K·M]}. Therefore, the quantiles from (<ref>) are t^(1)_z := Q_1-α(|z_1^(1)|,…,|z_M^(K)|), and the prediction set with 1-α probability coverage reduces to T^(x) = {z: |z_M^(K)| ≤ t^(1)_z, ø(z) = x}. We can also aim to predict at a cluster after coarsening the graph, and we describe this in Section <ref>.

§ TWO-LAYER HIERARCHICAL MODEL
This section is devoted to studying data with a two-layer hierarchical structure. Different from the tree-structured graphical model example from <Ref>, here the zeroth- and first-layer nodes are not observed. Such a model can be useful in many applications, such as meta-learning <cit.>, sketching <cit.>, and clustered data <cit.>.

§.§ Problem Setting
In a two-layer hierarchical model, for the first-layer nodes, we draw distributions P_k∼𝒫, k∈[K] independently from a distribution 𝒫. These can be viewed as specifying distinct sub-populations from which data is collected. From the perspective of meta-learning, they can be viewed as distinct but related tasks (e.g., prediction in various environments).
The second-layer nodes (or leaves, for simplicity) in the k-th branch are random variables Z_i^(k), i∈[M] drawn exchangeably from the distribution P_k, k∈[K] <cit.>. An illustration is presented in the right panel of <Ref>. Our goal is to construct prediction sets for both unsupervised and supervised settings, given as follows:

[Unsupervised Learning] We let Z_i^(k)∈𝒵_0, i∈[M], k∈[K].

[Supervised Learning] For some space 𝒵_0, we let Z_i^(k) = (X_i^(k), Y_i^(k))∈𝒵_0, i∈[M], k∈[K], and suppose that Y_i^(k) = μ_P_k(X_i^(k)) + ϵ_i^(k), i∈[M], where ϵ_i^(k), i∈[M] are i.i.d. zero-mean random variables whose distributions may depend on P_k, k∈[K].

Let us consider the set [KM] = {1,…,KM} and, for each a∈[KM], where a = (k-1)·M + i for a unique i∈[M], k∈[K], associate with a the random variable Z_i^(k). Thus the random variables Z_i^(k), i∈[M] in the k-th branch are associated with the block b_k = {(k-1)·M + i, i∈[M]}. We let Λ_K,M⊂S_[KM] be the group of KM-permutations that map each block b_k into some other block b_k' in a bijective way. Then for both the unsupervised and supervised cases, the distribution of the data is Λ_K,M-invariant. We aim to build prediction sets for some unknown components of the last branch, hoping to improve prediction by pooling information both within and across branches. We explain our steps next.

§.§ Methodological Considerations for Unsupervised Learning
In this section, we develop our methodology for building prediction sets in the unsupervised case. For all k∈[K], define Z̄_k = ∑_j=1^M Z_j^(k)/M; let σ̂_k^2 = 1 if M=1, and σ̂_k^2 = ∑_j=1^M (Z_j^(k)-Z̄_k)^2/(M-1) otherwise; and let Z̄ = ∑_k=1^K Z̄_k/K. For some constant c≥0, and for all k∈[K], define the events A_k,c = {|Z̄_k-Z̄| ≤ cσ̂_k/√M}. These capture the events that the mean of the elements within the k-th branch is close to the grand mean. Let A_k,c^∁ be the complement of A_k,c, and I(A) be the indicator function of an event A, which equals I(A)=1 if A happens, and I(A)=0 otherwise. We also define, for all k∈[K], i∈[M], Z̃_i^(k) := |Z_i^(k) - [Z̄·I(A_k,c) + Z̄_k·I(A_k,c^∁)]|/σ̂_k, which are the standardized absolute deviations of Z_i^(k) from the grand mean if the branch mean is close to it, and from the branch mean itself otherwise. Here c is an absolute constant that can be set as c=2, an approximate 97.5% quantile of the standard Gaussian distribution. As explained later, it can potentially also be optimized by minimizing a prediction loss. In Section <ref> of the Appendix, we explain how (z̃_i^(k))_k∈[K],i∈[M] =: f(z) can be obtained as a function of z via a fixed message-passing graph neural network f on a graph obtained by constructing proxy statistics for the nodes in the zeroth and first layers (also detailed below for the supervised case). We construct the prediction region (<ref>) for Z_M^(K) by using that Z̃ is Λ_K,M-distributionally invariant.

To understand our procedure, suppose for a moment that P_k, k∈[K] have finite expectations μ_k, k∈[K]; but we emphasize that our method does not require this condition. When some branch means Z̄_k ≈ μ_k, k∈[K] and Z̄ ≈ ∑_j=1^K μ_j/K are very different—i.e., on the event A_k,c^∁—our procedure centers observations within those branches by estimating the within-branch means μ_k, k∈[K]. On the other hand, when Z̄_k ≈ Z̄ holds for all branches k∈[K]—i.e., on A_k,c—we pool all observations and mimic standard conformal inference. Therefore, our procedure interpolates between prediction sets built using each individual branch and prediction sets built using full standard conformal inference.
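As a concrete illustration, here is a minimal NumPy sketch of the score construction above (the function name and the toy data generation are ours; for simplicity, the scores are computed on fully observed branches, whereas in the actual method the unobserved entry is replaced by a candidate value and membership is tested as in (<ref>)):

```python
import numpy as np

def hierarchical_scores(Z, c=2.0):
    """Unsupervised scores: Z is a (K, M) array of branches. Returns the
    standardized absolute deviations, centered at the grand mean when the
    branch mean is close to it, and at the branch mean otherwise."""
    K, M = Z.shape
    branch_means = Z.mean(axis=1)                               # \bar Z_k
    sigma = Z.std(axis=1, ddof=1) if M > 1 else np.ones(K)      # \hat sigma_k
    grand_mean = branch_means.mean()                            # \bar Z
    close = np.abs(branch_means - grand_mean) <= c * sigma / np.sqrt(M)
    centers = np.where(close, grand_mean, branch_means)
    return np.abs(Z - centers[:, None]) / sigma[:, None]

# Toy usage with heterogeneous branch locations.
rng = np.random.default_rng(2)
K, M, alpha = 20, 15, 0.1
Z = rng.normal(rng.normal(0, 2, size=(K, 1)), 0.5, size=(K, M))
scores = hierarchical_scores(Z)
t = np.quantile(scores, 1 - alpha)   # calibration quantile over all K*M scores
print(t)
```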
We let J_K,M = [M]×[K] ∖ {(M,K)} denote the indices of the fully observed datapoints. The following result formalizes the coverage guarantee of our method: If the data Z follows the two-layer hierarchical model introduced at the beginning of <Ref>, and the observation function has values ø(z) = (z_i^(k))_(i,k)∈J_K,M for all z, then the prediction set for Z_M^(K) from (<ref>) with ψ(z̃) = z̃_M^(K) and t_z̃ = Q_1-α((z̃_i^(k))_k=1,i=1^K,M) has coverage at least 1-α. Moreover, if Z̃ has a continuous distribution, then the coverage is at most 1-α + 1/(KM). The proof of Proposition <ref> is deferred to <Ref>. We compare our method with alternatives in Section <ref>.

§.§ Methodological Considerations for Supervised Learning
In this section, we consider the two-layer hierarchical model in a supervised learning setting. To ease the computational burden, we adopt split predictive inference. For every branch k∈[K], we set the first M'—approximately half—datapoints to be the training sample, and let Z_tr be the training data gathered from all branches. We fit μ̂(·;Z_tr) based on Z_tr, such that x↦μ̂(x;Z_tr) is an estimator of the regression function in the pooled data. Further, for all branches, we fit[We first train μ̂(·;Z_tr) using all training data, and train μ̃_k(·;Z_tr), k∈[K] to approximate the regression function of the residuals (X_k,i, Y_k,i - μ̂(X_k,i;Z_tr)), i∈[M'] from the k-th tree, k∈[K]. We finally let μ̂_k(·;Z_tr) = μ̂(·;Z_tr) + μ̃_k(·;Z_tr).] μ̂_k(·;Z_tr), k∈[K] based on Z_tr, as estimators of the within-branch regression functions μ_P_k from Example <ref>. Using Z_tr, we also fit pointwise confidence bands x↦σ̂_k(x;Z_tr), k∈[K], by estimating the pointwise standard error of x↦μ̂_k(x;Z_tr), k∈[K] over the randomness of the fitting process. Many classical estimators μ̂_k(·;Z_tr) have explicit expressions for standard error curves σ̂_k(·;Z_tr), k∈[K], including parametric models, non-parametric kernel regression, splines, etc. We emphasize that our method does not require any coverage properties for σ̂_k.

Suppose that there are M remaining datapoints in each branch, and without loss of generality, call them (X_i^(k), Y_i^(k)), k∈[K], i∈[M]. For k∈[K], i∈[M], let Z̄_i^(k) = (Y_i^(k) - μ̂_k(X_i^(k);Z_tr), Y_i^(k) - μ̂(X_i^(k);Z_tr), σ̂_k(X_i^(k);Z_tr)). We require that Z̄ = (Z̄_i^(k))_k∈[K],i∈[M] is Λ_K,M-distributionally invariant. This follows if Z = (Z_tr, (Z_i^(k))_k∈[K],i∈[M]) is Λ_K,M+M'-distributionally invariant and the map Z↦Z̄ is Λ_K,M+M'-distributionally equivariant. To ensure this, we use the same algorithm for training μ̂_k(·;Z_tr), σ̂_k(·;Z_tr) in all branches k∈[K], and ensure they are all invariant to the order of the data within branch k. In what follows, we suppress the dependence of μ̂, μ̂_k, σ̂_k on the training data. For all k∈[K], i∈[M], define Z̄_i^'(k) = Y_i^(k) - μ̂(X_i^(k))·I(|μ̂_k(X_i^(k)) - μ̂(X_i^(k))|/σ̂_k(X_i^(k)) ≤ c) - μ̂_k(X_i^(k))·I(|μ̂_k(X_i^(k)) - μ̂(X_i^(k))|/σ̂_k(X_i^(k)) > c). We set ϵ̂_k = 1 for all k∈[K] when M=1, and ϵ̂_k = √(∑_i=1^M (Z̄_i^'(k))^2/(M-1)) otherwise. Let Z̃ = (Z̃_1,…,Z̃_K)^⊤, where for all k∈[K], Z̃_k = (Z̃_1^(k),…,Z̃_M^(k))^⊤, and for all k∈[K], i∈[M], we define the map Z̄↦f(Z̄) = (f_i^(k)(Z̄))_i=1,k=1^M,K via Z̃_i^(k) := f_i^(k)(Z̄) := Z̄_i^'(k)/ϵ̂_k. We now argue that if the data Z̄_i^(k), i∈[M], k∈[K] are Λ_K,M-distributionally invariant, then Z̃ also satisfies this property. To see this, we define a special fixed message-passing graph neural net computing f on the two-layer tree from <Ref> (right), which captures the invariances of Z̄, mapping Z̄↦Z̃ = f(Z̄). Since message-passing GNNs are equivariant, it will follow that Z̃ is also Λ_K,M-distributionally invariant.
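For concreteness, the following NumPy sketch (array names are ours, and we assume all inputs are evaluated at the corresponding X_i^(k)) computes the map Z̄↦Z̃ directly; the message-passing construction described next reproduces this computation using only equivariant operations. It is a sketch under our reading of the displays above, not a reference implementation.

```python
import numpy as np

def supervised_scores(Y, mu_pool, mu_branch, sigma_branch, c=2.0):
    """Y, mu_pool = mu_hat(X), mu_branch = mu_hat_k(X), and the (positive)
    band sigma_branch = sigma_hat_k(X) are all (K, M) arrays. Returns the
    (K, M) array of |Z_bar'| / eps_hat_k, matching the statistic above."""
    K, M = Y.shape
    close = np.abs(mu_branch - mu_pool) / sigma_branch <= c   # per-point events
    centers = np.where(close, mu_pool, mu_branch)             # adaptive centering
    resid = Y - centers                                       # \bar Z'_i^{(k)}
    eps = (np.sqrt((resid ** 2).sum(axis=1) / (M - 1))        # \hat eps_k
           if M > 1 else np.ones(K))
    return np.abs(resid) / eps[:, None]
```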
In fact, our MPGNN operates only on the subgraph of the two-layer tree excluding the zeroth-layer node. The message-passing GNN is defined in five steps:
* Step 1: We initialize the leaves z_i^(k,0), i∈[M], k∈[K], to have three channels: z_i^(k,0) = (z_i,1^(k,0), z_i,2^(k,0), z_i,3^(k,0))^⊤, where z_i,1^(k,0) = y_i^(k) - μ̂_k(x_i^(k)), z_i,2^(k,0) = y_i^(k) - μ̂(x_i^(k)), and z_i,3^(k,0) = σ̂_k(x_i^(k)). We initialize the first-layer nodes as all-ones vectors with three channels: p^(k,0) = (p_1^(k,0), p_2^(k,0), p_3^(k,0))^⊤ = (1,1,1)^⊤, k∈[K].
* Step 2: Let the kernel 𝕂 be defined by 𝕂(x) = I(|x| ≤ c) for all x. We update each leaf individually as z_i^(k,1) = f_1(z_i^(k,0)) = (z_i,1^(k,0), z_i,2^(k,0), (z_i,1^(k,0) - z_i,2^(k,0))·𝕂((z_i,1^(k,0) - z_i,2^(k,0))/z_i,3^(k,0)))^⊤, i∈[M], k∈[K]. This corresponds to taking the map λ_0 in (<ref>) to depend only on its first input. The updated third coordinate becomes (z_i,1^(k,0) - z_i,2^(k,0))·𝕂((z_i,1^(k,0) - z_i,2^(k,0))/z_i,3^(k,0)) = [μ̂(x_i^(k)) - μ̂_k(x_i^(k))]·I(|μ̂(x_i^(k)) - μ̂_k(x_i^(k))|/σ̂_k(x_i^(k)) ≤ c). We keep the values of the first-layer nodes unchanged: p^(k,1) = p^(k,0), k∈[K].
* Step 3: We update the leaves as z_i^(k,2) = f_2(z_i^(k,1)) = (|y_i^(k) - μ̂_k(x_i^(k)) - z_i,3^(k,1)|, z_i,2^(k,1), z_i,3^(k,1))^⊤, i∈[M], k∈[K]. This also corresponds to a map λ_0 that depends only on its first input.
* Step 4: We update the first-layer nodes as p^(k,2) = (p_1^(k,2), 1, 1)^⊤, k∈[K], where p_1^(k,2) = f_32(p^(k,1), ∑_i=1^M f_31(z_i^(k,2), p^(k,1))), and we have f_31(x,y) = x_1^2/(M-1) and f_32(x,y) = √y. This corresponds to a standard message-passing update in (<ref>). Thus, after the update, we have p^(k,2) = (ϵ̂_k, 1, 1)^⊤, where ϵ̂_k = √(∑_i=1^M (z_i,1^(k,2))^2/(M-1)) for all k∈[K]. In this step, we keep the values of the leaves fixed.
* Step 5: We update the leaves as z_i^(k,3) = f_41(z_i^(k,2), f_42(p^(k,2), z_i^(k,2))), where f_42(p^(k,2), z_i^(k,2)) = p^(k,2) and f_41(z_i^(k,2), p^(k,2)) = (z_i,1^(k,2)/p_1^(k,2), z_i,2^(k,2), z_i,3^(k,2))^⊤. Thus, z_i^(k,3) = (|y_i^(k) - μ̂_k(x_i^(k)) - z_i,3^(k,1)|/p_1^(k,2), z_i,2^(k,2), z_i,3^(k,2))^⊤, i∈[M], k∈[K]. The first entry, z_i,1^(k,3), is exactly our statistic (<ref>).

We have the following coverage guarantee. If the data Z̄_i^(k), i∈[M], k∈[K] are Λ_K,M-distributionally invariant, and the observation function has values ø(z) = ((z_i^(k))_(i,k)∈J_K,M, x_M^(K)) for all z, then the prediction set for y_M^(K) from (<ref>) with ψ(z̃) = z̃_M,1^(K) and t_z̃ = Q_1-α((z̃_i,1^(k))_k=1,i=1^K,M) has coverage at least 1-α. Moreover, if Z̃ has a continuous distribution, then the coverage is at most 1-α + 1/(KM). The proof of Theorem <ref> is deferred to <Ref>.

Remark: In practice, similarly to the unsupervised case, one could choose c as a quantile of the standard normal distribution, or by minimizing a loss. For example, we could compute the residual standard errors σ̃_k,ϵ^2 of μ̂_k on the training data, and then minimize over c the following empirical loss: ∑_k,i=1^K,M [Y_i^(k) - μ̂(X_i^(k))·I(|μ̂_k(X_i^(k)) - μ̂(X_i^(k))|/σ̂_k(X_i^(k)) ≤ c) - μ̂_k(X_i^(k))·I(|μ̂_k(X_i^(k)) - μ̂(X_i^(k))|/σ̂_k(X_i^(k)) > c)]^2/σ̃_k,ϵ^2.
Since the objective function is Λ_K,M-invariant, it is not hard to choose an approximate minimizer in a Λ_K,M-invariant way, by simple one-dimensional optimization. By our general theory, Proposition <ref> will still hold for this choice of c.

§.§ Comparison with Other Methods
In this section, we discuss the performance of several alternative benchmark methods and compare them with our proposed method. For simplicity, we assume that in the unsupervised case, Z_i^(k) have equal variances for all i∈[M], k∈[K], and that in the supervised case, the same holds for ϵ_i^(k), i∈[M], k∈[K]. We provide a brief discussion of the scenario where the variances differ at the end of this section.

Benchmark 1: Single tree. Since the random variables on the leaves are exchangeable within the same branch, one possible way to construct a prediction set is to use the M-1 observed calibration datapoints together with the M-th unobserved test datapoint in the last branch to construct a classical conformal prediction set T_1 = {z_M^(K): s(z_M^(K)) ≤ Q_1-α(s(z_1^(K)), s(z_2^(K)), …, s(z_M^(K)))}. Usually, for unsupervised learning, one would set s(z) = |z| for all z, and for supervised learning, one would set s(z) = |y - μ̂_K(x)| for z = (x,y), where μ̂_K(·) is a function fit to the training data within the last branch. Although this method yields a prediction set with valid coverage, it may also exhibit a degree of conservatism due to not using information from other branches. This can result in the prediction set being large (or even including the entire space) when the sample size within the branch is small.

Benchmark 2: Split conformal prediction. We compare our method with classical split conformal prediction. Denote by aver(S) the average of a finite set S⊂ℝ. In unsupervised learning, one version of the standard conformal prediction set contains z_M^(K) such that |z_M^(K) - aver(z_i^(k), i∈[M], k∈[K])| ≤ Q_1-α(|z_i^(k) - aver(z_i^(k), i∈[M], k∈[K])|, i∈[M], k∈[K]). In a supervised learning setting, a classical conformal prediction set is given by y_M^(K) such that |y_M^(K) - μ̂(x_M^(K))| ≤ Q_1-α(|y_i^(k) - μ̂(x_i^(k))|, i∈[M], k∈[K]), where μ̂(·) is a regression function based on the training sample, as described in <Ref>. Observe that in the unsupervised case, one can view the problem as predicting the next observation from the distribution P_K. Defining m = aver(z_i^(k), i∈[M], k∈[K]), the length of the prediction set in unsupervised learning will be 2·Q_1-α(|z_i^(k) - m|, i∈[M], k∈[K]), which—for large M—is close to the difference between the upper and lower α/2-quantiles of the mixture distribution (1/K)∑_k=1^K P_k (assuming, without loss of generality, that this distribution is symmetric); rather than of P_K. For supervised learning, the length of the prediction set will be 2·Q_1-α(|Y_i^(k) - μ̂(X_i^(k))|, i∈[M], k∈[K]) = 2·Q_1-α(|ϵ_i^(k) + μ_P_k(X_i^(k)) - μ̂(X_i^(k))|, i∈[M], k∈[K]). When the μ_P_k(·), k∈[K] differ a great deal for different P_k, k∈[K], training μ̂(·) by mixing their training datapoints will likely lead to a wider prediction set.

Benchmark 3: Subsampling. We also compare to a subsampling method proposed in <cit.>, which consists of uniformly sampling one observation from each of the first K-1 branches. The unobserved Z_M^(K) is exchangeable with the sampled random variables, and thus a standard conformal prediction set can be constructed using the sub-sample. If K is sufficiently large, we expect the length of the prediction set to be close to that of a set obtained via standard conformal prediction using the full data.
Indeed, the latter method fits the quantiles of the mixture of the distributions of all branches. In addition, <cit.> introduced a repeated subsampling approach aimed at improving the stability of the prediction set. Recalling that we aim to predict the next observation from P_K, this method provides a valid prediction set, but it again effectively estimates the quantiles of the mixture distribution of all P_k, k∈[K].

We further discuss the advantages of our method compared with these benchmarks. Taking the supervised learning setting as an example, when the distribution P_K deviates from the other distributions, our approach provides a prediction set tailored to the last distribution P_K that we are interested in—in contrast to benchmarks 2 and 3. In addition, we also leverage data from other branches, furnishing smaller prediction sets than benchmark 1 when there are only few observations in the final branch. When all μ_P_k, k∈[K], are close to each other, including confidence bands enables us to achieve similarly sized regions to standard conformal prediction. As a result, our methodology effectively offers the best of conformal prediction within the branch and on the full dataset. Thus, it furnishes an approach to predictive inference that performs well under heterogeneity of the distributions P_k, k∈[K]. Finally, we mention that our approach can also handle the scenario where the residual variances differ across branches. In such cases, even if standard conformal prediction (Benchmark 2) achieves satisfactory coverage probability, this coverage may be uneven across the branches due to their different variances. For instance, a branch with high variance requires wider prediction sets to prevent under-coverage; this is possible with our approach but not straightforward with standard conformal prediction.

§.§ Extension to Random Sample Sizes
In this section, we study an extension of our methodology for settings with imbalanced observations across branches. We begin by introducing the probability models studied in this section. We let a random vector Z = (Z_1^⊤, …, Z_K^⊤)^⊤∈𝒵_0^* := ∪_j≥0 𝒵_0^j be generated from a joint distribution 𝒫. We assume exchangeability across the K components of (Z_1^⊤, …, Z_K^⊤)^⊤. In addition, the dimensions of Z_i, i∈[K], denoted as N_i, i∈[K], are also random variables with N_i∈{0,1,…}, for any i∈[K]. Letting N⃗ := (N_1,…,N_K), n⃗ := (n_1,…,n_K), for any n⃗∈ℕ^K, we assume that conditional on N⃗ = n⃗, and for all k∈[K], the coordinates of Z_k = (Z_1^(k), …, Z_n_k^(k))^⊤ are exchangeable across the n_k observations. The model from <Ref> is a special case where n_k = M for all k∈[K]. For any given n⃗∈ℕ^K, conditional on N⃗ = n⃗, we define 𝒢_n⃗ = S_n_1⊗S_n_2⊗…⊗S_n_K as the direct product of the permutation groups S_n_k of the sets [n_k]. For any g_n⃗∈𝒢_n⃗, it holds that g_n⃗ = g_n_1⊗g_n_2⊗…⊗g_n_K, where g_n_i∈S_n_i, i∈[K]. Choose for all k∈[K] the permutation actions g_n_k·Z_k. Then we have g_n⃗·Z = (g_n_1·Z_1,…,g_n_K·Z_K). We let U_n⃗ be the uniform measure over 𝒢_n⃗ and 𝒵 = 𝒵_n⃗ := ∏_k=1^K 𝒵_0^n_k, 𝒵̃ = 𝒵̃_n⃗ = ∏_k=1^K 𝒵_0^n_k, and we also consider a 𝒢_n⃗-distributionally equivariant map f:𝒵→𝒵̃, with respect to the actions for which ρ(g) = g and ρ̃(g) = g for all g∈𝒢_n⃗, i.e., f(G_n⃗·Z) =_d G_n⃗·f(Z), if G_n⃗∼U_n⃗. We let J_n⃗ = {(i,k): i∈[n_k], k∈[K]} ∖ {(n_K,K)} denote the indices of the fully observed datapoints. In the unsupervised case, the observation function has values ø(z) = (z_i^(k))_(i,k)∈J_n⃗ for all z, while in the supervised case, it has values ø(z) = ((z_i^(k))_(i,k)∈J_n⃗, x_n_K^(K)) for all z.
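To illustrate the block-wise group action in this setting, here is a small Python sketch (function names are ours): it draws a uniform element of 𝒢_n⃗, i.e., one independent uniform permutation per branch, and applies it to data with random branch sizes.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_group_element(n_vec):
    """Draw G ~ U_n from the direct product S_{n_1} x ... x S_{n_K}:
    one independent uniform permutation per branch."""
    return [rng.permutation(n_k) for n_k in n_vec]

def act(g, z):
    """Apply the block-wise action g . Z = (g_1 Z_1, ..., g_K Z_K)."""
    return [z_k[g_k] for g_k, z_k in zip(g, z)]

# Toy data: K = 3 branches with random sizes N_k.
n_vec = rng.integers(low=1, high=6, size=3)
z = [rng.normal(size=n_k) for n_k in n_vec]
g = sample_group_element(n_vec)
print(act(g, z))   # same branch contents, shuffled within each branch
```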
In the unsupervised case, we aim to predict ψ_n⃗(z̃) = z̃_n_K^(K) and define, for all z̃∈𝒵̃, t_z̃ = Q_1-α(1/K∑_k=1^K 1/n_k∑_m=1^n_k δ_z̃_m^(k)). We also define the prediction set T^(x,n⃗) = {z: [f(z)]_n_K^(K) ≤ t_f(z), ø(z) = x}. In the supervised case, we define f similarly to the definition from <Ref>, and t and T^ as above. Then we have the following coverage guarantees. In the setting described above in this section, we consider α∈[0,1], t_z̃ defined in (<ref>), and the observation function with ø(z) = ((z_i^(k))_(i,k)∈J_n⃗, x_n_K^(K)) for all z∈𝒵. For the prediction region defined in (<ref>), we have P(Z∈T^(ø(Z),N⃗)) ≥ 1-α. When all Z_j^(k), k∈[K], j∈[N_k], are continuous random variables, we also have P(Z∈T^(ø(Z),N⃗)) ≤ 1-α + max_j∈[K] 1/(K·N_j). The proof of this proposition is deferred to <Ref> of the Appendix.

<cit.> consider the closely related problem of constructing a prediction set for the first observation in a new branch in the supervised learning regime, in an identical two-layer hierarchical model with random sample sizes. This problem is distinct from the question of predicting the last unobserved outcome considered in <Ref>. However, our general framework includes their problem as a special case. For simplicity, we will explain this in the unsupervised case. We let the observation function be ø(z) = ((z_i^(k))_k∈[K-1],i∈[n_k]). We then set our prediction set in (<ref>) by taking z̃_m^(k) in (<ref>) as in (<ref>) with c = ∞ and σ̂_k := 1. Then, our prediction region from (<ref>) is equivalent to one for (z_1^(K), …, z_n_K^(K)), where the number of observations n_K is not known. Thus, our prediction region includes a union over the unknown values of n_K ≥ 0. The induced prediction region for z_1^(K), given by the union of the projections of these prediction regions onto their first coordinates, is clearly a valid 1-α-coverage region. Further, it is immediate that the union is included in the one with n_K = 1, which becomes {z_1^(K): z̃_1^(K) ≤ Q_1-α(1/K∑_k=1^K-1 1/n_k∑_m=1^n_k δ_z̃_m^(k) + 1/K δ_z̃_1^(K))}. Up to changes of notation (such as our K being their K+1), this recovers the HCP method of <cit.>. However, HCP does not aim to form predictions in the setting where there are multiple observations in the last branch. In particular, HCP can lead to wider prediction sets when the μ_P_k differ a great deal across different P_k, k∈[K], since the algorithm does not take the heterogeneity across branches into account. We present a detailed simulation comparing the performance of these methods in <Ref>.

§.§ Simulation Studies
In this section, we provide simulations to corroborate the efficacy of our proposed approach. Specifically, we conduct simulations in the scenarios of both unsupervised and supervised learning, with both non-random and random instances of N⃗. We present the simulation results with a fixed N⃗ here, and defer the results with random N⃗ to the appendix (<Ref>).

Unsupervised learning: Fixed sample size. We now present the simulation results for our proposed method in the context of unsupervised learning. We set the number of branches to K=20, where each branch has M=15 observations; but Z_M^(K) is unobserved and needs to be predicted. We let Z_i^(k), i∈[M] be sampled i.i.d.
from N(μ_k, 0.5), with k∈[K], and where μ_k, k∈[K] follow a normal distribution N(0, σ^2). We consider σ^2∈{10, 2, 0.5, 0}. When σ^2 = 0, all location parameters are equal, reducing to the special case of full exchangeability. To construct prediction sets, we apply the method described in Section <ref>. In all numerical examples in this paper, we set c=2, based on the quantiles of the standard Gaussian distribution. The simulation results are presented in Table <ref>. We provide a summary and conclusions in the next subsection.

Supervised learning: Fixed sample size. We next study the simulation performance of our proposed method in a simple supervised learning example. Specifically, we let θ_k, k∈[20] be sampled from N(0, σ^2), where we consider σ^2∈{10, 2, 0.5, 0}. We let Y_j^(k), k∈[20], j∈[30] be sampled from Y_j^(k) = θ_k X_j^(k) + ϵ_j^(k), where X_j^(k)∼Unif(-0.5, 0.5) and ϵ_j^(k)∼N(0, 0.5^2) for all k∈[20], j∈[30]. For ease of computation, we conduct split conformal prediction, where we split off half of the data (15 datapoints) in each branch to fit μ̂_k(·), k∈[20] via linear regression, and thus also obtain a confidence band σ̂_k(·) induced by the linear regression estimator. In addition, we also train μ̂(·) using all training data via linear regression. We present the performance of our constructed prediction sets and of baseline methods in Table <ref>.

Summary of results. We now provide insights into the observed results. In both unsupervised and supervised learning contexts, when σ^2 is large and the distributions corresponding to different branches are more dispersed, our prediction sets remain close to having optimal length—e.g., for α=0.05, 2·(1/2)·1.96 = 1.96, as determined by the normal quantiles. However, under these circumstances, the conformal prediction and subsampling baselines tend to yield significantly wider and less informative prediction sets. Moreover, when α=0.05, the single-tree baseline does not yield an informative prediction set—specifically, it returns the entire real line as the prediction set—due to the limited sample size within the branch. Even when σ=0, our method results in prediction sets of comparable length to those generated by standard conformal inference, since our method essentially interpolates between within-branch and global distributions. Furthermore, since we use data from other branches for calibration, the standard error of the length of our prediction set is smaller.

§.§ Empirical Data Analysis
In this section, we analyze a sleep deprivation dataset <cit.>, where 18 drivers' reaction times after 0,…,9 nights of three hours of sleep restriction are recorded. This dataset has also been investigated in the related work by <cit.> studying two-layer hierarchical models, and we follow their approach to define the covariates and responses. Specifically, the response variable Y is the sleep-deprived reaction time, the covariate X_1 represents the number of days of sleep deprivation, and the covariate X_2 denotes the baseline reaction time on day zero, with normal sleep. For each of the 18 individuals, we observe nine triplets (X_1,j^(k), X_2,j^(k), Y_j^(k)), j∈[9], k∈[18]. In our analysis, we model these nine triplets as drawn independently from a distribution P_k, k∈[18]. We discuss the modelling assumptions in Section <ref>. Next, we discuss the experimental setting. We repeat our experiments over 100 independent trials.
For each trial, we randomly split the data into training, calibration, and test datasets independently 500 times, as follows: For each train-calibration-test split, we first randomly select two-thirds of the datapoints from every branch (i.e., six observations) as training data, to fit models μ̂_k(·) using linear regression, and obtain associated confidence bands σ̂_k(·), k∈[18]. We then pool all the training data from the branches to fit a linear model μ̂(·). Next, we randomly select one datapoint from the remaining 3×18 = 54 as a test datapoint, and we use the other 53 datapoints as a calibration set. Following the same procedure as in the simulation studies, we record the averaged coverage indicators and lengths of prediction sets over the 500 test points. Box plots of the averaged prediction set lengths and coverage probabilities are presented for α=0.10 over the 100 independent trials. The results are shown in Figure <ref>. We obtain slightly more variable results compared with the simulation studies, because the variances within branches are large (therefore the data is more noisy), and we also have less calibration data. Additionally, analogous results obtained at a significance level of α=0.20 are included in the appendix. We also compare the performance of our method on this dataset with the benchmarks from <Ref>, including standard conformal prediction—training only one model μ̂ that applies to every branch—and subsampling <cit.>. The prediction coverage probabilities obtained by our method and standard conformal prediction are less conservative than those obtained by subsampling, even though all methods have valid coverage. We expect that the repeated subsampling method from <cit.> could improve stability, but would still be conservative. Moreover, our method leads to tighter intervals than the other methods. These results reinforce the advantages of our method compared to alternative approaches.

§ CONCLUSION AND DISCUSSION
We have presented a general methodology for predictive inference in arbitrary observation models satisfying distributional invariance. We have illustrated that our methods have competitive performance in a two-layer hierarchical model. There are a number of intriguing directions for further research. In some examples, the data itself might not satisfy distributional invariance, but some transformation of the data—possibly dependent on unknown parameters—might do so. Can one extend our methods to this setting, possibly leveraging ideas such as joint coverage regions <cit.>? Moreover, is it possible to learn equivariant maps to enable improved predictive inference, as opposed to designing them as we did in this paper? Studying these questions is expected to benefit the broad applicability of rigorous predictive inference methods.

§ ACKNOWLEDGEMENTS
This work was supported in part by ARO W911NF-20-1-0080, ARO W911NF-23-1-0296, NSF 2031895, NSF 2046874, ONR N00014-21-1-2843, and the Sloan Foundation. We thank Yonghoon Lee, Xiao Ma, Matteo Sesia, Yao Xie, Sheng Xu, and Yuling Yan for helpful discussion and feedback on earlier versions of the manuscript.

§ SUPPLEMENTARY MATERIAL
Additional notation and definitions. In the supplementary material we will use the following additional notation and definitions. If two sets A, B are in a bijection, we write A ≅ B. For a group 𝒢 acting on a set 𝒵 by an action ρ, the stabilizer of an element z∈𝒵 is the set {g∈𝒢: ρ(g)z = z}. The stabilizer is a subgroup of 𝒢.
For two sets S, V, a map f:S→V, and a subset V'⊆V, we denote by f^-1(V') the preimage of V' under f.

§.§ Discussion of Equivariance Properties
In this section, we discuss the key properties of deterministic equivariance, distributional equivariance, and distributional invariance, which form the building blocks of our theory. We shed light on the connections between these properties, and on their connections with various classical topics in statistics and the mathematical sciences. These discussions are not strictly needed for understanding the description of our methods.

§.§.§ Deterministic Equivariance
Recall that the deterministic equivariance condition (<ref>) requires that for all z∈𝒵 and all g∈𝒢, f(ρ(g)z) = ρ̃(g)f(z). Equivalently, one can require that for all 𝒵-valued random variables Z, f(ρ(g)Z) = ρ̃(g)f(Z). Given z∈𝒵 and z̃∈𝒵̃, let 𝒢_z, 𝒢_z̃ be their stabilizers with respect to ρ, ρ̃, respectively. We have the following result:

The functions f satisfying the deterministic equivariance condition (<ref>) can be described as follows: partition 𝒵 into orbits, collecting representatives in a set R, so that we have the disjoint union 𝒵 = ∪_z∈R O_z. For each orbit representative z∈R, choose f(z) such that 𝒢_z is a subgroup of 𝒢_f(z). For all z'∈O_z, choose any g∈𝒢 such that z' = ρ(g)z, and define f(z') = ρ̃(g)f(z).

First, we show that the procedure from the statement leads to well-defined equivariant functions. Indeed, if for some g, g'∈𝒢 we have ρ(g)z = ρ(g')z, then the definition of f requires f(ρ(g)z) = f(ρ(g')z), that is, ρ̃(g)f(z) = ρ̃(g')f(z); equivalently, (g')^-1g∈𝒢_f(z). Now, ρ(g)z = ρ(g')z shows that we have (g')^-1g∈𝒢_z; and since by assumption 𝒢_z is a subgroup of 𝒢_f(z), the required condition (g')^-1g∈𝒢_f(z) follows, showing that the above procedure always leads to equivariant functions. Second, we show that all equivariant functions satisfy these conditions. For any h∈𝒢_z, we have f(z) = f(ρ(h)z) = ρ̃(h)f(z). Thus 𝒢_z⊆𝒢_f(z), and since 𝒢_z, 𝒢_f(z) are both subgroups of 𝒢, it follows that 𝒢_z is a subgroup of 𝒢_f(z). Moreover, if z' = ρ(g)z, by equivariance, we have f(z') = ρ̃(g)f(z); showing that equivariant functions satisfy these conditions.

§.§.§ Distributional Equivariance
We next characterize distributional equivariance. Consider a fixed z∈𝒵. Recall that for v∈𝒵, O_v = {ρ(g)v: g∈𝒢} is the orbit of v under 𝒢. Denote by μ_v the distribution of ρ(G)v over the orbit O_v—with the sigma-algebra generated by the intersection of the sigma-algebra over 𝒵 with O_v—when G∼U. Similarly, for ṽ∈𝒵̃, define Õ_ṽ = {ρ̃(g)ṽ: g∈𝒢}, and let μ̃_ṽ denote the corresponding distribution of ρ̃(G)ṽ. We have the following result:

The functions f satisfying the distributional equivariance condition (<ref>) can be described as follows: for P-almost every z, we have that for any measurable set S̃⊂Õ_f(z) and any g∈𝒢, μ_z(f^-1(S̃)) = μ_z(f^-1(ρ̃(g)S̃)). Informally, this means that the preimages of the elements of the orbit of f(z) under 𝒢 have the same density in the original orbit of z. Or, "orbits map to orbits, in the natural measure-preserving way". In comparison, as shown in <Ref>, deterministic 𝒢-equivariance requires a more restrictive specific one-to-one correspondence within each orbit.

If (<ref>) holds, we can condition on Z = z to deduce that f(ρ(G)z) =_d ρ̃(G)f(z) for P-almost every z. Now, f(ρ(G)z) is distributed as the pushforward f#μ_z of μ_z under f, while ρ̃(G)f(z)∼μ̃_f(z), so (<ref>) holds iff f#μ_z = μ̃_f(z). By the definition of a pushforward measure, for any measurable set S̃⊂Õ_f(z), f#μ_z(S̃) = μ_z(f^-1(S̃)). Hence, the above means μ̃_f(z)(S̃) = μ_z(f^-1(S̃)).
Since for G∼U and any g∈𝒢, ρ̃(g)ρ̃(G)f(z) =_d ρ̃(G)f(z), it follows that μ̃_{f(z)}(ρ̃(g)S̃) = μ̃_{f(z)}(S̃), and thus the above implies that for any g∈𝒢, (<ref>) holds.

For a finite group 𝒢, we obtain the following simpler result. Given z∈𝒵 and z̃∈𝒵̃, let 𝒢_z, 𝒢_z̃ be their stabilizers with respect to ρ, ρ̃, respectively. If 𝒢 is finite, then given z∈𝒵 and z̃∈𝒵̃, and the actions ρ, ρ̃ of 𝒢 on 𝒵, 𝒵̃, the existence of an equivariant map f: O_z→Õ_z̃ such that f(z) = z̃ is characterized as follows:
* Deterministic equivariance is equivalent to 𝒢_z being a subgroup of 𝒢_z̃, i.e., 𝒢_z ≤ 𝒢_z̃.
* Distributional equivariance is equivalent to the size of the group 𝒢_z dividing the size of the group 𝒢_z̃, i.e., |𝒢_z| | |𝒢_z̃|.
Clearly, the condition for deterministic equivariance is stricter.

The first part follows from Proposition <ref>. For the second part, since {z̃} is a measurable subset of Õ_z̃, and since for S⊂O_v and G distributed uniformly on the finite group 𝒢 we have μ_v(S) = P(ρ(G)v∈S) = |S|/|O_v|, the condition (<ref>) characterizing distributional equivariance becomes that r(g) := |f^{-1}(ρ̃(g)z̃)| ∈ ℕ_{>0} does not depend on g. Now, from the orbit-stabilizer theorem <cit.>, we have that 𝒢/𝒢_z ≅ O_z and 𝒢/𝒢_z̃ ≅ Õ_z̃. Thus, |𝒢|/|𝒢_z| = r(g)·|𝒢|/|𝒢_z̃|, and hence |𝒢_z̃| = r(g)·|𝒢_z|, and so the size of the group 𝒢_z divides the size of the group 𝒢_z̃.

As an example for a finite group 𝒢, consider any sets of representatives ℛ, ℛ̃ of the orbits of the action of ρ on 𝒵 and of ρ̃ on 𝒵̃, an arbitrary injective map r: ℛ→ℛ̃, and any collection of one-to-one maps f_z: O_z→Õ_{r(z)}. Then f defined as f(z') = f_z(z') when z'∈O_z is distributionally equivariant. For concreteness, let 𝒵 = 𝒵̃ = {0,1,…,j} and 𝒢 = ℤ_{j+1} = ({0,1,…,j},+), acting via addition g·z = g+z modulo j+1. Then deterministic equivariance requires f(g+z) = g+f(z) modulo j+1, for all g, z. This means f(z+g) = z+f(g), so g+f(z) = z+f(g); thus with a = f(0), we have f(z) = z+a for all z. In contrast, distributional equivariance requires that f(G+z) =_d G+f(z) modulo j+1, when G∼U, and for all z. It is clear that any function f: 𝒵→𝒵̃ satisfies this.

For the special case of the symmetric group, where 𝒢 = S_n, and for the permutation actions ρ, ρ̃ acting on 𝒵 = 𝒵_0^n, <cit.> have provided a sufficient condition for a transform f to preserve equivariance. Their condition states that for any z∈𝒵_0^n and any g'∈S_n, there is g∈S_n such that g'f(z) = f(gz). Our result recovers theirs in this special case. Indeed, our condition from Corollary <ref> in this case states that the preimage of the orbit {gf(z) : g∈S_n} under f equals the orbit {gz : g∈S_n}, which matches their condition.

§.§.§ Distributional Invariance

The distributional invariance ρ(G)Z =_d Z, when G∼U, is equivalent to the distribution of Z|{Z∈O_z} being μ_z, for P-almost every z∈𝒵. Thus, in this case, the orbits (or measurably chosen representatives) are a sufficient statistic for the distribution of Z. Thus distributional invariance is an example of conditional ancillarity.

The key advantage of distributional invariance is that it is preserved under compositions. In contrast, this is not always the case for conditional ancillarity. Suppose Z∼P, P∈𝒫, is conditionally ancillary given A, where A: 𝒵→𝒜 is a map, i.e., for almost every a we have that Z|A=a has the same distribution for all P∈𝒫. Now, for a map f: 𝒵→𝒵̃, we ask when f(Z) is conditionally ancillary given A.
In general, this is not ensured, because f may map different preimages A^{-1}(a), a∈𝒜, to the same value, and hence the resulting conditional distribution may mix the distributions of Z|A=a with the (possibly P-dependent) distributions of Z and A. In contrast, a key advantage of group invariance is that it does not have this restriction. Different orbits may map into the same orbit, and the resulting distribution is still uniform. In the end, this enables the development of broader classes of architectures that preserve invariance and ultimately power our methods.

§.§ Distribution Shift with Non-symmetric Algorithm

This section is devoted to studying predictive inference in cases where a non-symmetric algorithm is employed, even if we have a distribution shift, so that Z ≠_d ρ(G)Z. We present a novel algorithm along with theoretical coverage guarantees, wherein possibly distinct weights are assigned to the various members of a representative set S, and the function f need not be distributionally equivariant.

For a given group 𝒢 and a function ψ(·), we consider the induced functions ℱ = {ℓ_g | g∈𝒢}, where each ℓ_g is defined as ℓ_g(z̃) = ψ(ρ̃(g)z̃) for all z̃. Here we focus on a simplified scenario where the set ℱ is finite and can be represented as ℱ = {ψ_1,…,ψ_{|ℱ|}}. In this case, in our non-symmetric algorithm, we sample the cosets (or, equivalently, a set of representatives denoted as S = {g_1,…,g_{|ℱ|}}) from a distribution Γ_S with probabilities given by (w_1,…,w_{|ℱ|}). The procedure can also be extended to continuous groups using an approach similar to that discussed earlier. The weights in Γ_S can be arbitrary, but are typically chosen aiming to minimize the coverage gap due to the distribution shift. For example, when considering the group 𝒢 = S_{n+1} and the function ψ(z) = z_{n+1} for all z, the selection of weights is guided by the characteristics of the data structures, such as in time series analysis and change point detection <cit.>. In these cases, a common strategy is to allocate greater weights to elements that are closer in time to the unobserved data point.

Next, we formally define our prediction set when we do not assign equal weights to the representatives of the cosets, or when we do not have a distributionally equivariant map f. We first sample a representative g according to the probability measure Γ_S, and we construct our prediction set as follows:

T^non-sym(z̃) = {z : ψ(ρ̃(g)f(ρ^{-1}(g)z)) ≤ Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(ρ^{-1}(g)z))}), ø(z) = z̃}.

We next provide a coverage property for this prediction set. We let Δ^w := ∑_{i=1}^{|ℱ|} w_i TV(ν_i(f(ρ^{-1}(g_i)Z)), ν_i(f(Z))), where ν_i(x) = ψ_i(x) - Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(x)}) for all x. In addition, we let F^w_z̃ be the c.d.f. of the random variable ψ(ρ̃(g)z̃), g∼Γ_S. Furthermore, we let F^w'_z̃ be the probability it places on individual points, i.e., for x∈ℝ, F^w'_z̃(x) = F^w_z̃(x) - F^{w-}_z̃(x), where F^{w-}_z̃(x) = lim_{y→x, y<x} F^w_z̃(y) ≥ 0. We then define t^w_z̃ := Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(z̃)}) for all z̃. See <Ref> for the proof of the result below.

Regardless of the weights (w_1,…,w_{|ℱ|}), and regardless of whether Z is distributionally invariant and f is distributionally equivariant, the prediction set from (<ref>) satisfies the coverage bound

-Δ^w ≤ P(Z∈T^non-sym(z̃)) - (1-α) ≤ 𝔼[F^w'_{f(Z)}(t^w_{f(Z)})] + Δ^w.

When Z is distributionally invariant over 𝒢, the coverage lower bound reduces to 1-α. If this distributional invariance property does not hold, we see that the prediction set now relies on the set of representatives S that we choose.
Therefore, in order to minimize the coverage gap, we suggest choosing each g_i from each coset so as to minimize the difference between Z and ρ^{-1}(g_i)Z. For the case in <cit.>, the suggested g_i∈S_{n+1} permute the ith and the (n+1)st entries of Z. While it would be desirable to compare the coverage of the symmetric and non-symmetric algorithms, in general this does not seem straightforward. Therefore, we leave this to future work.

Our coverage conclusion in <Ref> reduces to the conclusions of non-exchangeable conformal prediction in Theorems 2 and 3 of <cit.> when we let 𝒢 = S_{n+1} and ψ(z̃) = e_{n+1}^⊤z̃, and train a non-symmetric non-conformity score in terms of the input order of the data. We present the relevant details as an example, illustrated below:

[Non-exchangeable conformal prediction] To recover the results of <cit.>, we consider Z = (Z_1,…,Z_{n+1})^⊤, where Z_i = (X_i, Y_i), with X_i being the covariate and Y_i being the response. Let 0 ≤ w_1 ≤ w_2 ≤ … ≤ w_n ≤ w_{n+1} = 1, and w̃_i = w_i/(∑_{j=1}^{n+1} w_j), i∈[n+1]. Denote by g_j the transposition exchanging j and n+1, keeping other indices fixed, and note that g_j, j∈[n+1], is a set of representatives of 𝒢/ℋ. Moreover, g_j z := z^{(j)} = (z_1,…,z_{j-1}, z_{n+1}, z_{j+1},…,z_n, z_j)^⊤, where we exchange z_{n+1}↔z_j. We construct the prediction set in the same way as (<ref>) by letting ψ(x) = e_{n+1}^⊤x and 𝒢 = S_{n+1}, taking Z_i = (X_i, Y_i) for all i∈[n+1], and f(z) = (R(z_1),…,R(z_{n+1}))^⊤, where R(z_i) = |y_i - μ̂^z(x_i)| for all i∈[n+1], with μ̂^z: 𝒳→𝒴 being non-symmetric in the sense that μ̂^{gz} ≠ μ̂^z for some g∈S_{n+1}. Then, since TV(ν_j(f(Z)), ν_j(f(g_j^{-1}Z))) ≤ TV(f(Z), f(g_j^{-1}Z)) for all j∈[n+1], Theorem <ref> implies that the coverage probability is lower bounded as

P_Z(Z_{n+1}∈T^non-sym(Z_{1:n})) ≥ 1-α - ∑_{j=1}^n w̃_j TV(f(g_j^{-1}Z), f(Z)).

When f is injective and (Z_1,…,Z_{n+1}) has a continuous distribution, Theorem <ref> implies that

P_Z(Z_{n+1}∈T^non-sym(Z_{1:n})) ≤ 1-α + w̃_{n+1} + ∑_{j=1}^n w̃_j TV(f(g_j^{-1}Z), f(Z)).

These results match the conclusions in Theorems 2b & 3b of <cit.>.

Next we discuss the example of a two-layer tree.

[Non-exchangeable conformal prediction for two-layer trees] Continuing the example from <Ref>, we consider predicting Z_M^{(K)} in the final branch of the two-layer tree depicted in Figure <ref>, left panel. In the following, we adopt the notations used there. However, we allow that g·Γ ≠_d Γ: for instance, branch growth can be time-ordered, and nodes within each branch Γ_k, k∈[K], can have stronger dependence compared to nodes across different branches. In such a scenario, it is appropriate to assign greater weights, denoted as w_i^{(k)}, i∈[M], k∈[K], to the leaves of the last branch relative to those in earlier branches. For example, we can pick 0 ≤ w_1^{(1)} = ⋯ = w_1^{(K)} ≤ ⋯ ≤ w_M^{(1)} = ⋯ = w_M^{(K)} and let w̃_i^{(k)} = w_i^{(k)}/(∑_{j=1,k=1}^{M,K} w_j^{(k)}). We let g_s^{(q)}, s∈[M], q∈[K], be the set of representatives of 𝒢/ℋ, and g_s^{(q)}z := z^{(s,q)} = (R, Γ_1,…,Γ_{q-1}, Γ_K^s, Γ_{q+1},…,Γ_{K-1}, Γ_q), where Γ_K^s = (C_K, Z_1^{(K)},…,Z_{s-1}^{(K)}, Z_M^{(K)}, Z_{s+1}^{(K)},…,Z_{M-1}^{(K)}, Z_s^{(K)}). Here we first exchange the q-th branch with the K-th branch, and then permute the last entry of the current q-th branch (the original K-th branch) with its s-th entry. We keep f(z) = |z| for all z∈𝒵, as we did in <Ref>.
Therefore, Theorem <ref> implies that the coverage probability is lower bounded as

P_Z(Z_M^{(K)} ∈ T^non-sym(Z_i^{(k)}, i∈[M], k∈[K], (i,k)≠(M,K))) ≥ 1-α - ∑_{(i,k)≠(M,K)} w̃_i^{(k)} TV(|(g_i^{(k)})^{-1}Z|, |Z|).

When Z = (R, Γ_1,…,Γ_K) has a continuous distribution, Theorem <ref> implies that

P_Z(Z_M^{(K)} ∈ T^non-sym(Z_i^{(k)}, i∈[M], k∈[K], (i,k)≠(M,K))) ≤ 1-α + w̃_M^{(K)} + ∑_{(i,k)≠(M,K)} w̃_i^{(k)} TV(|(g_i^{(k)})^{-1}Z|, |Z|).

§.§ Coarsening Approach for Predictive Inference on Graphs

From a practical perspective, if the graph is large, we can coarsen it. For instance, we can use a hierarchical graph clustering method to cluster nodes. We keep edges between the clustered nodes with associated multiplicities; thus we obtain a graph in which multiple edges between nodes are allowed. The automorphism group of the new graph can then be viewed as a subgroup of that of the original graph which fixes the vertices within each clustered node. We can apply our method to the clustered graph by considering the values of clusters for which node observations are missing as also missing. This construction leads to prediction sets at the cluster level, which we can view as prediction sets for any symmetric function of the node values, such as their sum. If there is only one missing observation per cluster, then we can back out prediction sets for the individual nodes; e.g., if using the sum to aggregate, by subtracting the sum of the labeled nodes in the cluster. If there are several nodes with missing values in a cluster, then we obtain a prediction set for their sum.

For the tree-structured graphical model from <Ref>, we can predict at a cluster after coarsening the graph as follows. We can define the clusters as the branches C_k, Z_1^{(k)},…,Z_M^{(k)}, for k∈[K]. We can re-write the data as Γ = (R, Γ_1^⊤,…,Γ_K^⊤)^⊤, where we treat every branch Γ_k := (C_k, Z_1^{(k)},…,Z_M^{(k)})^⊤, k∈[K], as one component. Our goal is to predict the sum of the values of the last branch. We let ψ(z) = |(0, 0_{(K-1)·(M+1)}^⊤, 1_{M+1}^⊤)^⊤ z|, so the quantile is t^{(2)}_z̃ := Q_{1-α}(|1^⊤γ_1|,…,|1^⊤γ_K|), where γ_k are the realized values of Γ_k, k∈[K], and the prediction set is given by

T(z̃) = {z : |1^⊤γ_K| ≤ t^{(2)}_z̃, ø(z) = z̃}.

§.§ Graph Neural Network Constructions for Unsupervised Learning

Here we give two graph neural network constructions for unsupervised learning: a simpler one that is easier to understand, and a more sophisticated one that has better performance by interpolating with standard conformal prediction.

§.§.§ GNN for Unsupervised Learning

We design a fixed GNN architecture where the neighborhood structure of the graph is as in <Ref>:
* We let the sample means over the second-layer nodes be proxy statistics, setting P̃^{(k,0)} = M^{-1}∑_{i=1}^M Z_i^{(k)}, k=1,…,K. More generally, any permutation-invariant functions of Z_i^{(k)}, i∈[M], are a valid choice. We also set the proxy P̃^0 for the root node to zero, and let Z_i^{(k,0)} = Z_i^{(k)}.
* We apply a modified two-layer MPGNN. In the first layer, we have the following.
* We update the leaves as Z_i^{(k,1)} = |Z_i^{(k,0)} - P̃^{(k,0)}|.
* We update the proxy variables as P̃^{(k,1)} = ∑_{i∈𝒩(k)} (P̃^{(k,0)} - Z_i^{(k,0)})²/(M-1), which corresponds to (<ref>) with f_0(x,y) = y, f_{11}(x,y) = (x-y)²/(M-1), and f_{12}(P̃^{(k,0)}, P̃^0) = 0.
In the second layer of the MPGNN, we update the leaves as follows:

Z_i^{(k,2)} = Z_i^{(k,1)}/√(P̃^{(k,1)}) = |Z_i^{(k)} - ∑_{i=1}^M Z_i^{(k)}/M| / √((1/(M-1)) ∑_{j∈𝒩(k)} (Z_j^{(k)} - ∑_{j=1}^M Z_j^{(k)}/M)²).

This corresponds to (<ref>) with f_1(x,y) = x/√y and f_0(x,y) = y.
* We construct prediction sets for Z_M^{(K)} by following the method in <Ref>.

§.§.§ Interpolation with Conformal Inference

Here, for every variable we have two channels.
* Step 0: (Initialization)
* We let Z̅ = (1/(KM)) ∑_{k=1}^K ∑_{i=1}^M Z_i^{(k,0)} and σ̂ be the sample standard deviation of the full sample. We set the zeroth-layer node P̃^0 to be (Z̅, σ̂)^⊤. (Alternatively, we can achieve the same effect by initializing the zeroth- and first-layer nodes as two-dimensional zero vectors first, and updating them through message passing from the leaves step by step, passing the sample means and standard errors.)
* For the first-layer nodes P̃^{(k,0)}, we initialize them as ((1/M) ∑_{i=1}^M Z_i^{(k,0)}, σ̂_k)^⊤, where σ̂_k is the sample standard deviation of {Z_i^{(k,0)}}_{i=1}^M.
* For the leaves, we augment all features Z_i^{(k,0)} with a constant 1, since the second channel is not used and is only kept so that all nodes have two features. We denote X_i^{(k,0)} = (Z_i^{(k,0)}, 1)^⊤.
* Step 1: (Update) We update P̃^{(k,0)} to P̃^{(k,1)} for all k∈[K] via P̃^{(k,0)} and P̃^0 as

P̃^{(k,1)} = f^{(1)}(P̃^{(k,0)}, P̃^0) = (|P̃^{(k,0)}(1) - P̃^0(1)| / (P̃^{(k,0)}(2)/√M), P̃^{(k,0)}(2))^⊤ = (|Z̅_k - Z̅| / (σ̂_k/√M), σ̂_k)^⊤.

We keep the other variables, the zeroth-layer node P̃^{(1)} and the leaves X_i^{(k,1)}, ∀k∈[K], i∈[M], unchanged: P̃^{(1)} = P̃^{(0)} and X_i^{(k,1)} = X_i^{(k,0)}, ∀k∈[K], i∈[M].
* Step 2: (Update) Again, we only update P̃^{(k,1)}, via

P̃^{(k,2)} = f^{(2)}_0(P̃^{(k,1)}, ∑_{i=1}^M f^{(2)}_{11}(P̃^{(k,1)}, X_i^{(k,1)}) + f^{(2)}_{12}(P̃^{(k,1)}, P̃^{(1)})).

Here we take f^{(2)}_{11}(P̃^{(k,1)}, X_i^{(k,1)}) = Z_i^{(k,1)} 𝕂(P̃^{(k,1)}(1))/M, for some kernel 𝕂. Therefore, ∑_{i=1}^M f^{(2)}_{11}(P̃^{(k,1)}, X_i^{(k,1)}) = Z̅_k 𝕂(P̃^{(k,1)}(1)). We set 𝕂(x) = I(|x|≤c) with some constant c, so that

∑_{i=1}^M f_{11}^{(2)}(P̃^{(k,1)}, X_i^{(k,1)}) = Z̅_k I(|Z̅_k - Z̅|/(σ̂_k/√M) ≤ c).

For f_{12}^{(2)} we let

f_{12}^{(2)}(P̃^{(k,1)}, P̃^{(1)}) = Z̅(1 - 𝕂(P̃^{(k,1)}(1))) = Z̅ I(|Z̅_k - Z̅|/(σ̂_k/√M) > c).

Therefore, P̃^{(k,2)} = (Z̅_k I(|Z̅_k - Z̅|/(σ̂_k/√M) ≤ c) + Z̅ I(|Z̅_k - Z̅|/(σ̂_k/√M) > c), σ̂_k). We keep X_i^{(k,2)} = X_i^{(k,1)} and P̃^{(2)} = P̃^{(1)}.
* Step 3: (Update) We finally update all X_i^{(k,3)}, for all i∈[M], k∈[K]. We let

X_i^{(k,3)} = f_0^{(3)}(X_i^{(k,2)}, f_1^{(3)}(X_i^{(k,2)}, P̃^{(k,2)})) = (|Z_i^{(k)} - [Z̅ I(|Z̅_k - Z̅|/(σ̂_k/√M) ≤ c) + Z̅_k I(|Z̅_k - Z̅|/(σ̂_k/√M) > c)]|/σ̂_k, 1)^⊤.

We take v = (v_1;…;v_{n+1})^⊤ ∈ ℝ^{2·(n+1)} and let v_{n+1} = (1,0)^⊤.

§.§ Simulation with Random Sample Sizes

In this simulation, we let the number of branches be K=20. For every branch, we let N_k ∼ Unif({10,20}), k∈[K], independently, in the setting of unsupervised learning. The other settings are the same as for the fixed-sample-size regime presented in the main text. The final statistics are:

|Z_i^{(k)} - [Z̅ I(|Z̅_k - Z̅|/(σ̂_k/√(N_k)) ≤ c) + Z̅_k I(|Z̅_k - Z̅|/(σ̂_k/√(N_k)) > c)]|/σ̂_k, for i∈[N_k], k∈[K],

where Z̅ = (1/K) ∑_{k=1}^K (1/N_k) ∑_{i=1}^{N_k} Z_i^{(k)}, for unsupervised learning. For supervised learning, we also consider a setting very similar to the fixed-sample-size regime presented in the main text. The only difference is that we sample N_k ∼ Unif({20,40}), k∈[K], independently for each branch, and use the first half of all branches as the training data. The output statistics are the same as (<ref>) for supervised learning, except that the datasets used for training μ̂_k(·), k∈[K], and μ̂(·) are of random sizes. In addition to the methods used in the main text, we also compare with hierarchical conformal prediction (HCP) <cit.>. The results are presented in Tables <ref> and <ref>, respectively. The conclusions are identical to those from the experiments in the main text. Here HCP performs well, similarly to our method. However, our method is more general and applicable to any symmetry group, not just to the hierarchical setting, as described in the main text.
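For concreteness, the final statistic above can be computed directly from the branch data. The following is a minimal sketch (the function name and toy data are illustrative, and we assume the branch observations are stored as a list of one-dimensional arrays):

```python
import numpy as np

def interpolated_scores(branches, c=1.0):
    """Scores |Z_i^(k) - center_k| / sigma_k, where the center is the grand
    mean when the branch mean is close to it (|Z_k - Z| / (s_k/sqrt(N_k)) <= c)
    and the branch mean otherwise, as in the statistic above."""
    grand_mean = np.mean([z.mean() for z in branches])  # mean of branch means
    scores = []
    for z in branches:
        m, s, n = z.mean(), z.std(ddof=1), len(z)
        t = abs(m - grand_mean) / (s / np.sqrt(n))  # standardized distance
        center = grand_mean if t <= c else m
        scores.append(np.abs(z - center) / s)
    return scores

# Hypothetical usage with K=3 branches of unequal sizes:
rng = np.random.default_rng(0)
branches = [rng.normal(mu, 1.0, size=n) for mu, n in [(0, 10), (0.2, 20), (3, 10)]]
print([np.round(s[:3], 2) for s in interpolated_scores(branches)])
```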
§.§ Additional Information about Empirical Data Example

§.§.§ Discussion of Modelling Assumptions

Finally, we discuss modelling assumptions, and in particular the applicability of exchangeability assumptions to the empirical dataset. The covariate X_1 is the number of days lacking sleep. Within a branch, this covariate has a time trend, taking values 0,1,…,9. Thus, its values for one datapoint are not exchangeable with the values for other datapoints within the same branch. However, even if the covariates are not exchangeable, the values ϵ_k = (ϵ_1^{(k)},…,ϵ_9^{(k)})^⊤ of the random noise can be assumed exchangeable. Therefore, when we fit μ̂_k, k∈[18], accurately, the residuals (or non-conformity scores) are "nearly" exchangeable. This is expected to be a sufficient assumption for analyzing this dataset. In fact, when ϵ_i^{(k)} is independent of X_i^{(k)}, some distributions of ϵ_i^{(k)} (such as the Gaussian distribution) admit an upper bound

TV(Y_i^{(k)} - μ̂_k(X_i^{(k)}), Y_i^{(k)} - μ_k^*(X_i^{(k)})) ≤ C 𝔼_{X_i^{(k)}}[|μ̂_k(X_i^{(k)}) - μ_k^*(X_i^{(k)})|].

Hence, if we estimate μ_k^* accurately, then the empirical residuals Y_i^{(k)} - μ̂_k(X_i^{(k)}) will be approximately exchangeable. We remark that <cit.> also make a similar exchangeability assumption. However, they train a common μ̂(·) for all branches. For a given k∈[18], the residuals |Y_i^{(k)} - μ̂(X_i^{(k)})|, i∈[9], may not necessarily be exchangeable, since μ̂(·) could be very different from μ_k^*(·), and thus these residuals can be strongly affected by X_i^{(k)}.

§.§.§ Additional Empirical Data Plots

In this section, we present additional plots of the lengths of prediction sets and the coverage probabilities of the various methods at level α=0.20, in Figure <ref>.

§ PROOFS

§.§ Proof of Theorem <ref>

Since ø(z) = z̃ by definition,

P(Z∈T_r(z̃)) = P(ψ(f(Z)) < t_{f(Z)} or ψ(f(Z)) = t_{f(Z)}, V < δ_{f(Z)}).

Now, for all z̃∈𝒵̃, by the definitions of t from (<ref>) and δ from (<ref>), we have

P_G(ψ(ρ̃(G)z̃) < t_z̃ or ψ(ρ̃(G)z̃) = t_z̃, V < δ_z̃) = F^-_z̃(t_z̃) + F'_z̃(t_z̃) δ_z̃ = 1-α.

Hence, letting z̃ = f(z), we have for all z∈𝒵 that P_G(ψ(ρ̃(G)f(z)) < t_{f(z)} or ψ(ρ̃(G)f(z)) = t_{f(z)}, V < δ_{f(z)}) = 1-α. Therefore, using that Z is 𝒢-distributionally invariant and t is 𝒢-invariant, (<ref>) equals

P_{G,Z}(ψ(ρ̃(G)f(Z)) < t_{ρ̃(G)f(Z)} or ψ(ρ̃(G)f(Z)) = t_{ρ̃(G)f(Z)}, V < δ_{ρ̃(G)f(Z)})
= P_{G,Z}(ψ(ρ̃(G)f(Z)) < t_{f(Z)} or ψ(ρ̃(G)f(Z)) = t_{f(Z)}, V < δ_{f(Z)})
= 𝔼_Z[P_G(ψ(ρ̃(G)f(Z)) < t_{f(Z)} or ψ(ρ̃(G)f(Z)) = t_{f(Z)}, V < δ_{f(Z)})] = 1-α.

This proves the first relation in (<ref>). The second relation follows since T_r(z̃) ⊂ T(z̃). Next, for z̃∈𝒵̃, by the definitions of t from (<ref>) and of F',

P_G(ψ(ρ̃(G)z̃) ≤ t_z̃) = F_z̃(t_z̃) ≤ 1-α + F'_z̃(t_z̃).

Hence, as above,

P(Z∈T(z̃)) = P(ψ(f(Z)) ≤ t_{f(Z)}, ø(Z) = z̃) = P(ψ(f(Z)) ≤ t_{f(Z)}) = 𝔼_Z[P_G(ψ(ρ̃(G)f(Z)) ≤ t_{f(Z)})] ≤ 1-α + 𝔼_Z[F'_{f(Z)}(t_{f(Z)})],

proving the third relation in (<ref>).

§.§ Proof of Proposition <ref>

Let Ψ_z̃ = {ψ(ρ̃(g)z̃) : g∈𝒢} be the set of values of ψ(ρ̃(g)z̃), g∈𝒢. If ℋ_z̃ = {g∈𝒢 : ψ(ρ̃(g)z̃) = ψ(z̃)} is a subgroup of 𝒢, then clearly for any c∈Ψ_z̃ there is a unique coset 𝒞 = ℋ_z̃·g, for some g∈𝒢, such that for all g'∈𝒞, ψ(ρ̃(g')z̃) = c. Thus, the size of each coset is |ℋ_z̃|. Thus, F'_z̃(x) ≤ |ℋ_z̃|/|𝒢| for all x∈ℝ, which implies that P(Z∈T(z̃)) ≤ 1-α + 𝔼[|ℋ_{f(Z)}|]/|𝒢|. Moreover, if ℋ_z̃' does not depend on z̃', then ℋ is clearly a group. In this case, F'_{f(Z)}(x) ≤ |ℋ|/|𝒢| for all x∈ℝ, almost surely for Z∼P; and hence P(Z∈T(z̃)) ≤ 1-α + |ℋ|/|𝒢|.

§.§ Proof of Proposition <ref>

The proof of Proposition <ref> is derived using the same method as outlined in Theorem 4.2 of <cit.>. Therefore, we omit the details.
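As a quick numerical sanity check of the coverage guarantee proved above, the following sketch (an illustrative Monte Carlo experiment of ours, not taken from the paper) verifies the 1-α coverage of the quantile construction for a cyclic group acting on i.i.d. data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, reps, covered = 10, 0.1, 20000, 0
for _ in range(reps):
    z = rng.normal(size=n)  # i.i.d. entries: distributionally invariant
                            # under the cyclic shift group G = Z_n
    scores = np.array([np.roll(z, g)[-1] for g in range(n)])   # psi over the orbit
    t = np.quantile(scores, 1 - alpha, method="inverted_cdf")  # t_z: Q_{1-alpha}
    covered += z[-1] <= t
print(covered / reps)  # ~0.90 = 1 - alpha; here F' puts mass 1/n on each point
```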
§.§ Proof of Theorem <ref>

By (<ref>), we have

P_Z(Z∈T(z̃)) = {P_Z(ψ(f(Z)) ≤ t_{f(Z)}) - P_{G,Z}(ψ(ρ̃(G)f(Z)) ≤ t_{f(Z)})} + P_{G,Z}(ψ(ρ̃(G)f(Z)) ≤ t_{f(Z)}) = (i) + (ii).

For the first term, by definition,

|(i)| = |∫_G ∫_Z [I(ψ(f(Z)) ≤ t_{f(Z)}) - I(ψ(ρ̃(G)f(Z)) ≤ t_{f(Z)})] dP(Z) dP(G)|
= |∫_Z 𝔼_{G'∼U(𝒢/ℋ)}[I(ψ(f(Z)) ≤ t_{f(Z)}) - I(ψ(ρ̃(G')f(Z)) ≤ t_{f(Z)})] dP(Z)| ≤ Δ.

In the last line, we have used that I(ψ(f(z)) ≤ t_{f(z)}) = I(ν(f(z)) ≤ 0) and I(ψ(ρ̃(G')f(z)) ≤ t_{f(z)}) = I(ν(ρ̃(G')f(z)) ≤ 0); this also uses that t is 𝒢-invariant. For the second term, by the definition of t_z̃ from (<ref>), we have P_{G,Z}(ψ(ρ̃(G)f(Z)) > t_{f(Z)}) ≤ α, i.e., (ii) ≥ 1-α. Combining the bounds on (i) and (ii), we obtain the lower bound in (<ref>). Next, following the same proof procedure as in Theorem <ref>, for term (ii) we obtain that

P_{G,Z}(ψ(ρ̃(G)f(Z)) ≤ t_{f(Z)}) ≤ 1-α + 𝔼[F'_{f(Z)}(t_{f(Z)})].

Combining this with the upper bound on (i), the upper bound in (<ref>) follows.

§.§ Proof of Propositions <ref> and <ref>

The proof of Propositions <ref> and <ref> follows directly from our construction of the deterministically equivariant message-passing graph neural network and the distributional invariance of the input variable Z. Therefore, we omit the details.

§.§ Proof of Theorem <ref>

Due to the data-generating process and the definition of t_z̃ in (<ref>), for the given n⃗ = (n_1,…,n_K)^⊤, it holds that t_z̃ = t_{g_n⃗ z̃} for any g_n⃗ ∈ 𝒢_n⃗ = S_{n_1}⊗S_{n_2}⊗…⊗S_{n_K}. According to the definition of 𝒢_n⃗ and t_(·), and recalling that ψ_n⃗(z̃) = z̃_{n_K}^{(K)} for all z̃, it holds conditionally on n⃗ that

P_Z(ψ_n⃗(f(Z)) > t_{f(Z)}) = P_{G_n⃗,Z}(ψ_n⃗(G_n⃗·f(Z)) > t_{f(Z)}) = 𝔼_Z 𝔼_{G_n⃗}[I(ψ_n⃗(G_n⃗·f(Z)) > t_{f(Z)})] = 𝔼_Z[(1/n_K) ∑_{j=1}^{n_K} I([f(Z)]_j^{(K)} > t_{f(Z)})].

We next define ψ̃(x) = e_K^⊤x for all x∈ℝ^K, where e_K∈ℝ^K has its K-th entry equal to one and all other entries equal to zero. Furthermore, for the given n⃗, we let h: 𝒵^*→ℝ^K by defining its m-th output entry as

[h(Z_1,…,Z_K)]_m = (1/n_m) ∑_{j=1}^{n_m} I(f(Z_j^{(m)}) > t_{f(Z)}), m∈[K].

Next, we take the randomness of Z, N⃗ into consideration. By the exchangeability of (Z_1,…,Z_K)^⊤ and the definition of t_(·), it holds that h(Z_1,…,Z_K) =_d G'h(Z_1,…,Z_K) for G'∼Unif(S_K). Therefore, according to the definitions of ψ̃ and h(·), it holds that

𝔼_Z[(1/N_K) ∑_{j=1}^{N_K} I([f(Z)]_j^{(K)} > t_{f(Z)})] = 𝔼_Z[ψ̃(h(Z))] = 𝔼_{Z,G'}[ψ̃(G'·h(Z))] = 𝔼_Z[(1/K) ∑_{k=1}^K (1/N_k) ∑_{j=1}^{N_k} I([f(Z)]_j^{(k)} > t_{f(Z)})] ≤ α.

Therefore, we obtain

P(Z∈T(z̃, N⃗)) = P_{N⃗,Z}(ψ_N⃗(f(Z)) ≤ t_{f(Z)}, ø(Z) = z̃) ≥ 1-α.

The upper bound can be proved in a similar way as the corresponding upper bound in the proof of Theorem <ref>, and we omit the details.

§.§ Proof of Theorem <ref>

By the definition of the prediction set in (<ref>), we have

P(ψ(ρ̃(G)f(ρ^{-1}(G)Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(ρ^{-1}(G)Z))})) = ∑_{i=1}^{|ℱ|} w_i P(ψ(ρ̃(G)f(ρ^{-1}(G)Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(ρ^{-1}(G)Z))}) | G = g_i).

Since G is independent of Z, we obtain

∑_{i=1}^{|ℱ|} w_i P(ψ(ρ̃(G)f(ρ^{-1}(G)Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(ρ^{-1}(G)Z))}) | G = g_i)
= ∑_{i=1}^{|ℱ|} w_i P(ψ_i(f(ρ^{-1}(g_i)Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(ρ^{-1}(g_i)Z))}))
≤ ∑_{i=1}^{|ℱ|} w_i P(ψ_i(f(Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(Z))})) + ∑_{i=1}^{|ℱ|} w_i TV(ν_i(f(ρ^{-1}(g_i)Z)), ν_i(f(Z))),

where ν_i(x) = ψ_i(x) - Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(x)}). The inequality above holds since we can write

P(ψ_i(f(Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(Z))})) = 𝔼[I(ν_i(f(Z)) > 0)]

and

I(ψ_i(f(ρ^{-1}(g_i)Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(ρ^{-1}(g_i)Z))})) = I(ν_i(f(ρ^{-1}(g_i)Z)) > 0).

Subsequently, we prove upper and lower bounds for the term ∑_{i=1}^{|ℱ|} w_i P(ψ_i(f(Z)) > Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(Z))})). For the upper bound, this term is at most α by definition.
For the lower bound, following an argument similar to the proof of Theorem <ref>, we obtain

∑_{i=1}^{|ℱ|} w_i P(ψ_i(f(Z)) ≤ Q_{1-α}(∑_{j=1}^{|ℱ|} w_j δ_{ψ_j(f(Z))})) ≤ 1-α + 𝔼_Z[F^w'_{f(Z)}(t^w_{f(Z)})].

We then conclude the proof.
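For completeness, the weighted empirical quantile Q_{1-α}(∑_j w_j δ_{s_j}) appearing throughout the distribution-shift results can be computed directly; the following is a minimal sketch (the function name and toy numbers are illustrative):

```python
import numpy as np

def weighted_quantile(scores, weights, level):
    """Q_level of the discrete distribution sum_j w_j * delta_{s_j}:
    the smallest score whose cumulative normalized weight reaches `level`."""
    order = np.argsort(scores)
    s, w = np.asarray(scores, float)[order], np.asarray(weights, float)[order]
    cdf = np.cumsum(w) / w.sum()
    return s[np.searchsorted(cdf, level)]

# Hypothetical usage: heavier weights on later (more relevant) scores.
scores = [0.3, 1.2, 0.7, 2.1, 0.9]
weights = [0.5, 0.5, 1.0, 1.0, 2.0]
print(weighted_quantile(scores, weights, 0.9))  # -> 2.1
```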
[Author list and affiliations (the first two authors contributed equally to the work): National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China; CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China; Center of Materials Science and Optoelectronics Engineering, College of Materials Science and Opto-electronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China; National Synchrotron Radiation Research Center, Hsinchu 30077, Taiwan; School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; School of Physics, Beihang University, Beijing 100191, China; Department of Applied Physics, Nanjing University of Science and Technology, Nanjing 210094, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China. Corresponding authors: [email protected], [email protected], [email protected].]

In heavy-fermion systems with f electrons, there is an intricate interplay between Kondo screening and magnetic correlations, which can give rise to various exotic phases. Recently, a similar interplay appears to occur in d-electron systems as well, but the underlying mechanism remains elusive. Here, using inelastic neutron scattering, we investigate the temperature evolution of the low-energy spin waves in the metallic van der Waals ferromagnet Fe_3-xGeTe_2 (Curie temperature T_C∼160 K), where Kondo-lattice behavior emerges in the ferromagnetic phase below a characteristic temperature T^*∼90 K. We observe that the magnon damping constant diverges at both low and high temperatures, exhibiting a minimum around T^*. This observation is analogous to the resistivity minimum caused by the single-impurity Kondo effect. The unusual behavior is described by a formula that combines logarithmic and power-law terms, representing the dominant contributions from Kondo screening and thermal fluctuations, respectively. Furthermore, we find that the magnon damping increases with momentum below T_C. These findings can be explained by considering spin-flip electron-magnon scattering, which serves as a magnonic analog of Kondo-impurity scattering, and thus provides a measure of the Kondo coupling through magnons.
Our results provide critical insights into how Kondo coupling manifests itself in a system with magnetic ordering, and shed light on the coexistence of and interplay between magnetic order and the Kondo effect in itinerant 3d-electron systems.

Observation of Magnon Damping Minimum Induced by Kondo Coupling in a van der Waals Ferromagnet Fe_3-xGeTe_2
Jinsheng Wen
January 14, 2024

The original Kondo effect, whose hallmark is the resistivity minimum observed in dilute magnetic alloys, describes the scattering of conduction electrons by magnetic impurities in dilute magnetic systems <cit.>. This effect is classified as the single-impurity Kondo problem and is now well understood in terms of the density of states of the conduction electrons and their coupling with the single-impurity spins [Fig. <ref>(a) and (d)] <cit.>. The concept of Kondo physics has been extended to correlated electron systems, specifically those consisting of a dense periodic array of local moments interacting with the conduction-electron sea through an antiferromagnetic Kondo interaction, referred to as a Kondo lattice [Fig. <ref>(b)-(e)] <cit.>. The Kondo-lattice model was initially proposed to describe the heavy-fermion state in f-electron systems, particularly in intermetallics containing Ce, Yb, or U elements <cit.>. In Kondo lattices, the Kondo effect can also induce an effective magnetic interaction between the localized spins, known as the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, which can dominate the exchange coupling between neighboring localized spins. The competition between the Kondo effect and the couplings of localized spins underlies various intriguing phenomena in heavy-fermion compounds, including quantum criticality <cit.>, strange-metal behavior <cit.>, and magnetism <cit.> and superconductivity <cit.>, which can be tuned by chemical doping, magnetic field, or pressure <cit.>. In the strong Kondo coupling regime, the full hybridization between the localized f electrons and the conduction electrons leads to the quenching of local moments and the expansion of the Fermi surface, resulting in the formation of a heavy Fermi-liquid state [Fig. <ref>(d) and (e)] <cit.>. Magnetic order can arise when the strength of the RKKY interaction exceeds the Kondo coupling [Fig. <ref>(e)]. Moreover, the coexistence of magnetic order with the Kondo effect <cit.> or superconductivity <cit.> can be observed, particularly in systems where the duality of the f electrons emerges due to mixed valence and the multiorbital nature of the compounds <cit.>. More recently, an unexpected heavy-fermion state has also been observed in certain d-electron transition metals, which reside in the intermediate regime between itinerancy and localization of d electrons <cit.>. The understanding of magnetism in the context of this duality, and the applicability of the Kondo-lattice model to these systems, are currently active topics of research <cit.>. The van der Waals (vdW) metallic ferromagnet Fe_3GeTe_2 is an example material. It has recently garnered attention due to the discovery of tunable room-temperature ferromagnetism down to the monolayer limit <cit.> and promising application potential in magnetic vdW heterostructures <cit.>.
More importantly, previous studies reported the discovery of a heavy-fermion state in this material, with intriguing electronic correlations, large effective electron mass renormalization, and Kondo-lattice behavior <cit.>. Due to the coexistence of localized and itinerant 3d electrons, it is not surprising that there is conflicting evidence regarding the microscopic origin of the magnetism in Fe_3GeTe_2 <cit.>. In our earlier work, we reconciled the debate by showing that the ferromagnetism has a dual origin, with local moments and itinerant electrons contributing to the low-energy spin waves and columnlike continua, respectively <cit.>. Moreover, Kondo coupling between the local moments and itinerant electrons has also been reported <cit.>. The resistivity curve of Fe_3GeTe_2 reveals an incoherent-coherent crossover at a characteristic temperature T^*, as depicted in Fig. <ref>(d) [see also Refs. [Bao et al.(2022)] and [Zhang et al.(2018)]]. Below T^*, Kondo-lattice behavior emerges in the magnetically ordered phase, accompanied by an enhancement of the Fermi surface volume and the effective electron mass <cit.>. Notably, this resistivity behavior aligns with observations in other d-electron systems exhibiting a heavy-fermion state <cit.>, but is distinct from that of either the Kondo-impurity model or the Kondo lattice in f-electron systems [Fig. <ref>(d)]. Our earlier work also found that the interplay between local moments and itinerant electrons, manifesting as the Kondo screening effect, is enhanced at low temperatures, resulting in significantly heavier damping of the spin waves at 4 K compared to 100 K <cit.>. Typically, magnons are well defined and long lived at low temperatures, and they gradually lose coherence upon warming due to thermal fluctuations <cit.>. Given the presence of Kondo screening, how do the spin excitations evolve with temperature?

In this Letter, we use inelastic neutron scattering (INS) to carefully study the temperature evolution of the low-energy spin waves in single crystals of Fe-deficient Fe_3-xGeTe_2 with Curie temperature T_C∼160 K. We find that upon cooling from T_C, the damping of the magnons first decreases toward T^*∼90 K and then increases rapidly, leaving a minimum at this intermediate temperature. The damping scales as a linear combination of logarithmic and power-law terms, with these two terms corresponding to the Kondo coupling and thermal fluctuations, respectively. Furthermore, we find that the in-plane Kondo screening surpasses the out-of-plane screening, resulting in a softening of the in-plane magnons at low temperatures. Our results provide smoking-gun evidence for the existence of the Kondo effect in the metallic ferromagnet Fe_3-xGeTe_2, and demonstrate magnon damping as an effective measure of the Kondo coupling in systems where magnetic order and the Kondo effect coexist.

The INS experiments on single crystals of Fe_3-xGeTe_2 were performed on Sika, located at the OPAL facility of ANSTO in Australia <cit.>. Measurements were carried out in a fixed-final-energy mode with E_f=5.0 meV, where both the incident and final neutron energies were selected by pyrolytic graphite (002) crystals. An open-open-60'-60' collimation was used to strike a fine balance between neutron flux and experimental resolution. These settings gave an energy resolution of ∼0.15 meV at the elastic line. The sample used in this measurement was the same as that used in Ref. [Bao et al.(2022)], and was mounted in the (H, H, L) scattering plane.
The wavevector Q was expressed as (H, K, L) in reciprocal lattice units (r.l.u.) of (a^*, b^*, c^*) = (4π/(√3 a), 4π/(√3 b), 2π/c), with refined lattice parameters a = b = 3.946 Å and c = 16.357 Å in the hexagonal structure.

Figure <ref>(a) and (b) show the energy scans at various temperatures for the off-centered in-plane and out-of-plane positions, respectively. Note that no sizable spin gap can be resolved at the Brillouin zone center, although there exists magnetocrystalline anisotropy along the c axis <cit.>. From these raw scattering data, we can already identify some unexpected features. For (-0.05, -0.05, 2) along the in-plane direction [Fig. <ref>(a)], no inelastic peak due to magnons can be observed at 4 K. Interestingly, the magnon peak appears at higher temperatures, but its center changes non-monotonically upon warming to T_C. For (0, 0, 1.7) along the out-of-plane direction [Fig. <ref>(b)], the peak center always shifts to lower energies upon warming. For both directions, the scattering intensities increase with increasing temperature, which can be mainly ascribed to the Bose population factor, which elevates the intensities at low energies and shifts the peak toward lower energies.

To eliminate the influence of Bose statistics, we correct the raw data by the Bose population factor and plot the corrected results in Fig. <ref>(c) and (d). As a result, the scattering intensities at different temperatures become comparable. We fit the corrected results with a damped harmonic oscillator (DHO) formula, which is applicable to damped spin waves <cit.>. The DHO formula has the form χ″(Q,E) ∝ γE_0E/[(E² - E_0²)² + (γE)²], where E_0 is the magnon energy and γ is the damping constant (the inverse of γ is proportional to the lifetime of the damped magnons) <cit.>. Based on these DHO fittings, shown in Fig. <ref>(c) and (d), we can extract the intrinsic magnon energy and the damping constant, which enables us to quantitatively characterize the temperature dependence of the magnons and their damping.

Figure <ref> presents the extracted γ and E_0 plotted as functions of temperature, showcasing the most intriguing results of this study. For both the in-plane [Fig. <ref>(a)] and out-of-plane [Fig. <ref>(b)] directions, γ shows an upturn toward both 0 K and T_C, producing a minimum around T^*∼90 K. Notably, this characteristic temperature coincides with the incoherent-coherent crossover observed in the resistivity curve [Fig. <ref>(d)]. This phenomenon is reminiscent of the resistivity minimum caused by the Kondo effect in the original single-impurity Kondo model [Fig. <ref>(d)] <cit.>, where thermodynamic and transport properties depend logarithmically on temperature as -ln(T) <cit.>. Inspired by this, in Fig. <ref> we use a similar logarithmic term to describe the divergent behavior toward 0 K. In the meantime, a power-law term, normally describing spin-wave damping in the hydrodynamic regime, is required to explain the divergence toward T_C <cit.>. These two effects should exist simultaneously over the entire temperature range below T_C. Therefore, the general formula consists of a linear combination of the two terms, which reads

γ(T) = A ln(T^*/T) + B(1 - T/T_C)^{-ν}.

To fit the data, we fix T^* = 90 K and T_C = 160 K, and allow A, B, and ν to be free parameters. The fits well reproduce the unusual non-monotonic temperature evolution of γ for both the in-plane [Fig. <ref>(a)] and out-of-plane [Fig. <ref>(b)] directions. Specifically, we obtain A_in = 1.556±0.254, B_in = 0.888±0.129, and ν_in = 0.692±0.108 for the in-plane direction, and A_out = 0.316±0.019, B_out = 0.530±0.022, and ν_out = 0.382±0.027 for the out-of-plane direction.
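For reference, fitting γ(T) with T^* and T_C held fixed is a standard nonlinear least-squares problem. The sketch below uses synthetic data and illustrative parameter values, not the measured ones, and also records the DHO line shape used in the extraction step:

```python
import numpy as np
from scipy.optimize import curve_fit

T_STAR, T_C = 90.0, 160.0  # fixed characteristic temperatures (K)

def gamma_model(T, A, B, nu):
    """Kondo-like logarithmic term plus critical power-law term."""
    return A * np.log(T_STAR / T) + B * (1.0 - T / T_C) ** (-nu)

def dho(E, E0, gamma, amp):
    """Damped-harmonic-oscillator line shape used to extract E0 and gamma."""
    return amp * gamma * E0 * E / ((E**2 - E0**2) ** 2 + (gamma * E) ** 2)

# Illustrative fit of gamma(T) on synthetic data.
T = np.array([4.0, 20.0, 45.0, 70.0, 90.0, 110.0, 130.0, 145.0])
gam = gamma_model(T, 1.5, 0.9, 0.7) + 0.05 * np.random.default_rng(0).normal(size=T.size)
popt, pcov = curve_fit(gamma_model, T, gam, p0=(1.0, 1.0, 0.5))
print(popt)  # recovers (A, B, nu) up to the injected noise
```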
The distinct parameter values for the in-plane and out-of-plane directions indicate the presence of magnetic anisotropy, which originates from the quasi-two-dimensional structure of Fe_3GeTe_2, consistent with our earlier magnetometry and neutron spectroscopy data on this material <cit.>.

Other than the universal scaling of the damping constant γ, the corresponding magnon energy E_0 shown in Fig. <ref> exhibits distinct behaviors for the in-plane and out-of-plane directions. With increasing temperature, the out-of-plane E_0 softens following a power law while approaching T_C [Fig. <ref>(b)], a behavior similar to the temperature evolution of the magnetization in Fe_3GeTe_2 <cit.>. On the other hand, the in-plane E_0 remains almost constant across T^* and slightly softens upon cooling to lower temperatures [Fig. <ref>(a)]. We attribute these two distinct behaviors of E_0 to different degrees of Kondo screening along the two directions. Note that the magnitude of γ for the in-plane direction is much larger than that for the out-of-plane direction, indicating a stronger screening effect within the plane. This can also be seen from the ratio of the coefficients A_in(out)/B_in(out), which roughly weighs the contribution to the magnon damping from the Kondo effect against that from thermal broadening. It is 1.75 for the in-plane direction, about three times larger than that for the out-of-plane direction. When the Kondo screening is more significant for the in-plane direction, it can remarkably reduce the intralayer exchange coupling between local moments, resulting in a decrease of the in-plane E_0 at low temperatures, as shown in Fig. <ref>(a). In some heavy-fermion compounds with f electrons, phonon softening at low temperatures has been observed and attributed to the screening of atomic forces <cit.>. Intriguingly, phonon softening deviating from the anharmonic model has also been reported in Fe_3GeTe_2 by Raman measurements <cit.>. These findings indicate a complex correlation between electrons, lattice, and magnetism in Fe_3GeTe_2.

To examine the momentum dependence of the damping, we plot the low-energy spin-wave excitations and their damping constant as functions of momentum in Fig. <ref>. Let us take the excitations at 90 K as an example, where the spin waves are most coherent with the least damping (Fig. <ref>). The energy scans along the [110] and [001] directions are plotted in Fig. <ref>(a) and (b), respectively. As Q increases, we observe a gradual shift of the peak center to higher energy, accompanied by a broadening of the linewidth and a weakening of the scattering intensities, indicating the nature of damped spin waves. We perform DHO fittings of these scans at various Q to extract and plot the momentum dependence of E_0 [Fig. <ref>(c) and (e)] and γ [Fig. <ref>(d) and (f)] for the two directions. To fit these low-energy ferromagnetic spin waves at small Q, we use the quadratic dispersion relation E_0 = Δ + DQ², where Δ is the spin gap and D is the spin-wave stiffness. The fittings give D = 54.5±0.8 meV Å² for the [110] direction [Fig. <ref>(c)] and 70.9±1.1 meV Å² for the [001] direction [Fig. <ref>(e)] at 90 K. Concerning γ, it appears at first glance to increase with Q for both directions. However, the detailed relationships with the reduced wavevector q differ: they follow quasi-quadratic (∼q^2.2) [Fig. <ref>(d)] and linear [Fig. <ref>(f)] dependencies for the in-plane and out-of-plane directions, respectively.
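The dispersion and damping fits described here are simple least-squares problems. A sketch with synthetic data follows (the numerical values are illustrative, not the measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

dispersion = lambda Q, Delta, D: Delta + D * Q**2  # E0 = Delta + D Q^2
damping = lambda q, eta, p: eta * q**p             # gamma ~ eta q^p

# Synthetic example at one temperature.
Q = np.linspace(0.05, 0.25, 8)  # 1/Angstrom
E0 = dispersion(Q, 0.1, 55.0) + 0.02 * np.random.default_rng(1).normal(size=Q.size)
(Delta, D), _ = curve_fit(dispersion, Q, E0)
gam = damping(Q, 8.0, 2.2)
(eta, p), _ = curve_fit(damping, Q, gam, p0=(1.0, 2.0))
print(Delta, D, eta, p)  # recovers the stiffness D and the exponent p ~ 2.2
```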
The low-energy spin waves at other temperatures, such as 4, 45, and 130 K, were also measured. With increasing temperature, we find that the spin-wave dispersion remains nearly unchanged for the in-plane direction but softens considerably for the out-of-plane direction [Fig. <ref>(c) and (e)]. Note that since the in-plane spin waves are heavily damped at 4 K, it is challenging to extract a reasonable dispersion at that temperature. The q dependences of γ found at 90 K remain applicable at the other temperatures, i.e., γ_in ∼ ηq^2.2 and γ_out ∼ ηq; however, the magnitude and the coefficient η of the damping vary owing to the temperature dependence of the Kondo screening effect [Fig. <ref>(d) and (f)]. To better illustrate the temperature evolution of the spin waves, we plot the extracted spin-wave stiffness D and damping coefficient η as functions of temperature in Fig. <ref>(g) and (h), respectively. The out-of-plane D follows a typical power-law dependence on temperature, while the in-plane one remains nearly constant. The coefficient η follows a non-monotonic temperature dependence and can also be traced by Eq. <ref> for both directions. These findings extend the results shown in Fig. <ref> to magnetic excitations at different momenta, thereby uncovering a universal law governing the Kondo physics in Fe_3-xGeTe_2.

To the best of our knowledge, this is the first observation of a minimum in the magnon damping constant in heavy-fermion or Kondo-lattice systems. It is noteworthy that the minimum in the magnon damping curve (Fig. <ref>) is closely related to the slope change in the resistivity curve of Fe_3GeTe_2 [Fig. <ref>(d)]. Unlike the full screening of local moments in the coherent Kondo lattice of some f-electron systems [Fig. <ref>(c)], we consider that Fe_3GeTe_2 exhibits partial screening of local moments [Fig. <ref>(b)]. In this scenario, the magnetic order of the local moments is mediated by conduction electrons <cit.>, giving rise to the Kondo effect <cit.>. Below T^*, the localized electronic states hybridize with the itinerant electronic states, enlarging the Fermi surface volume and the effective electron mass <cit.>. Consequently, from the perspective of conduction electrons, the slope of the resistivity changes, manifesting heavy-fermion behavior. Simultaneously, from the perspective of spin waves, these processes provide additional decay channels for the magnons, resulting in significant damping of magnons below T^*.

In Fe_3GeTe_2, considering that direct exchange interactions and Kondo coupling dominate, a ferromagnetic Kondo-Heisenberg lattice model [Fig. <ref>(b)] is appropriate for understanding our results. By treating conduction electrons as scatterers for the propagating magnons, we consider the second-order perturbation processes involving spin-flip scattering between conduction electrons and magnons to be crucial. In such processes, a magnon decays into a fermionic particle-hole pair and later recombines into a magnon. In this context, the magnon-electron interaction vertex is renormalized with a logarithmic temperature dependence, introducing a logarithmic scaling of the magnon damping rate at low temperatures, reminiscent of the single-impurity Kondo problem <cit.>.
By including the contribution to the magnon self-energy from magnon-magnon interactions, which diverges as a power law at relatively high temperatures, the observed damping minimum can be reproduced in principle. Furthermore, since the kinematically allowed phase space for spin-flip scattering increases with increasing magnon momentum, the magnon damping becomes heavier with increasing momentum, in agreement with the experimental observation (Fig. <ref>). In our separate work <cit.>, we have successfully reproduced the magnon damping minimum for a one-dimensional ferromagnetic Kondo-Heisenberg chain using tensor renormalization group methods. However, extending such protocols to higher-dimensional systems and obtaining analytical solutions for the magnon damping minimum remain challenging, which warrants further study.

In conclusion, our INS study of the temperature evolution of the low-energy magnons in Fe_3-xGeTe_2 reveals two main characteristics of the magnon damping: a non-monotonic temperature dependence with logarithmic scaling below T^*, and an increase in damping with momentum. Additionally, we find that the in-plane magnons undergo softening at low temperatures due to the more significant in-plane Kondo coupling. These findings provide important insights into the role of Kondo coupling in Fe_3-xGeTe_2 from the perspective of magnons. They are consistent with other experimental observations in Fe_3GeTe_2, such as the Fano resonance features in scanning tunneling microscopy measurements <cit.>, the heavy-fermion state inferred from angle-resolved photoemission spectroscopy measurements <cit.> and the Sommerfeld coefficient <cit.>, as well as the incoherent-coherent crossover in transport and magnetic measurements <cit.>, all of which reflect the Kondo physics from the perspective of conduction electrons. Our work highlights magnon damping as an effective probe of the Kondo coupling in systems where magnetic order and the Kondo effect coexist.

The work was supported by National Key Projects for Research and Development of China with Grant No. 2021YFA1400400, National Natural Science Foundation of China with Grant Nos. 12225407, 12074174, 12074175, 11904170, 11974036, 12222412, 12047503, and 12004191, Natural Science Foundation of Jiangsu Province with Grant Nos. BK20190436 and BK20200738, Natural Science Foundation of the Higher Education Institutions of Jiangsu Province with Grant No. 23KJB140012, China Postdoctoral Science Foundation with Grant Nos. 2022M711569 and 2022T150315, Jiangsu Funding Program for Excellent Postdoctoral Talent No. 20220ZB5, and the Fundamental Research Funds for the Central Universities. We acknowledge the neutron beam time from ANSTO with Proposal Nos. P9631 and P13772.

References

[Kondo(1964)] J. Kondo, Resistance Minimum in Dilute Magnetic Alloys, Prog. Theor. Phys. 32, 37–49 (1964).
[Hewson(1993)] A. C. Hewson, The Kondo Problem to Heavy Fermions (Cambridge University Press, Cambridge, England, 1993).
[Coleman(2015)] P. Coleman, Introduction to Many-Body Physics (Cambridge University Press, Cambridge, England, 2015).
[Pavarini et al.(2015)] E. Pavarini, P. Coleman, and E. Koch, Many-Body Physics: From Kondo to Hubbard (Theoretische Nanoelektronik, Jülich, Germany, 2015).
[Stewart(1984)] G. R. Stewart, Heavy-fermion systems, Rev. Mod. Phys. 56, 755–787 (1984).
[Kirchner et al.(2020)] S. Kirchner, S. Paschen, Q. Chen, S. Wirth, et al., Colloquium: Heavy-electron quantum criticality and single-particle spectroscopy, Rev. Mod. Phys. 92, 011002 (2020).
[Löhneysen et al.(2007)] H. v. Löhneysen, A. Rosch, M. Vojta, and P. Wölfle, Fermi-liquid instabilities at magnetic quantum phase transitions, Rev. Mod. Phys. 79, 1015–1075 (2007).
[Schröder et al.(2000)] A. Schröder, G. Aeppli, R. Coldea, M. Adams, et al., Onset of antiferromagnetism in heavy-fermion metals, Nature 407, 351–355 (2000).
[Gegenwart et al.(2002)] P. Gegenwart, J. Custers, C. Geibel, K. Neumaier, et al., Magnetic-Field Induced Quantum Critical Point in YbRh_2Si_2, Phys. Rev. Lett. 89, 056402 (2002).
[Custers et al.(2003)] J. Custers, P. Gegenwart, H. Wilhelm, K. Neumaier, et al., The break-up of heavy electrons at a quantum critical point, Nature 424, 524–527 (2003).
[Shen et al.(2020)] B. Shen, Y. Zhang, Y. Komijani, M. Nicklas, et al., Strange-metal behaviour in a pure ferromagnetic Kondo lattice, Nature 579, 51–55 (2020).
[Stewart(2001)] G. R. Stewart, Non-Fermi-liquid behavior in d- and f-electron metals, Rev. Mod. Phys. 73, 797–855 (2001).
[Chen et al.(2019)] Q. Y. Chen, X. B. Luo, D. H. Xie, M. L. Li, et al., Orbital-Selective Kondo Entanglement and Antiferromagnetic Order in USb_2, Phys. Rev. Lett. 123, 106402 (2019).
[Giannakis et al.(2019)] I. Giannakis, J. Leshen, M. Kavai, S. Ran, et al., Orbital-selective Kondo lattice and enigmatic f electrons emerging from inside the antiferromagnetic phase of a heavy fermion, Sci. Adv. 5, eaaw9061 (2019).
[Perkins et al.(2007)] N. B. Perkins, M. D. Núñez Regueiro, B. Coqblin, and J. R. Iglesias, Underscreened Kondo lattice model applied to heavy fermion uranium compounds, Phys. Rev. B 76, 125101 (2007).
[Lee et al.(2018)] J. Lee, M. Matsuda, J. A. Mydosh, I. Zaliznyak, et al., Dual Nature of Magnetism in a Uranium Heavy-Fermion System, Phys. Rev. Lett. 121, 057201 (2018).
[Aoki et al.(2001)] D. Aoki, A. Huxley, E. Ressouche, D. Braithwaite, et al., Coexistence of superconductivity and ferromagnetism in URhGe, Nature 413, 613–616 (2001).
[Pfleiderer(2009)] C. Pfleiderer, Superconducting phases of f-electron compounds, Rev. Mod. Phys. 81, 1551–1624 (2009).
[Doniach(1977)] S. Doniach, The Kondo lattice and weak antiferromagnetism, Physica B+C 91, 231–234 (1977).
[Yang et al.(2008)] Y.-f. Yang, Z. Fisk, H.-O. Lee, J. D. Thompson, and D. Pines, Scaling the Kondo lattice, Nature 454, 611–613 (2008).
[Kondo et al.(1997)] S. Kondo, D. C. Johnston, C. A. Swenson, F. Borsa, et al., LiV_2O_4: A Heavy Fermion Transition Metal Oxide, Phys. Rev. Lett. 78, 3729–3732 (1997).
[Urano et al.(2000)] C. Urano, M. Nohara, S. Kondo, F. Sakai, et al., LiV_2O_4 Spinel as a Heavy-Mass Fermi Liquid: Anomalous Transport and Role of Geometrical Frustration, Phys. Rev. Lett. 85, 1052–1055 (2000).
[Kobayashi et al.(2004)] W. Kobayashi, I. Terasaki, J. Takeya, I. Tsukada, and Y. Ando, A Novel Heavy-Fermion State in CaCu_3Ru_4O_12, J. Phys. Soc. Jpn. 73, 2373–2376 (2004).
[Cheng et al.(2013)] J.-G. Cheng, J.-S. Zhou, Y.-F. Yang, H. D. Zhou, et al., Possible Kondo Physics near a Metal-Insulator Crossover in the A-Site Ordered Perovskite CaCu_3Ir_4O_12, Phys. Rev. Lett. 111, 176403 (2013).
[Wu et al.(2016)] Y. P. Wu, D. Zhao, A. F. Wang, N. Z. Wang, et al., Emergent Kondo Lattice Behavior in Iron-Based Superconductors AFe_2As_2 (A = K, Rb, Cs), Phys. Rev. Lett. 116, 147001 (2016).
[Zaliznyak et al.(2011)] I. A. Zaliznyak, Z. Xu, J. M. Tranquada, G. Gu, A. M. Tsvelik, and M. B. Stone, Unconventional Temperature Enhanced Magnetism in Fe_1.1Te, Phys. Rev. Lett. 107, 216403 (2011).
[Kotegawa et al.(2020)] H. Kotegawa, M. Matsuda, F. Ye, Y. Tani, et al., Helimagnetic Structure and Heavy-Fermion-Like Behavior in the Vicinity of the Quantum Critical Point in Mn_3P, Phys. Rev. Lett. 124, 087202 (2020).
[Kim et al.(2022)] M. Kim, J. Kwon, C. H. Kim, Y. Kim, et al., Signature of Kondo hybridisation with an orbital-selective Mott phase in 4d Ca_2-xSr_xRuO_4, npj Quantum Mater. 7, 59 (2022).
[Zhang et al.(2018)] Y. Zhang, H. Lu, X. Zhu, S. Tan, et al., Emergence of Kondo lattice behavior in a van der Waals itinerant ferromagnet, Fe_3GeTe_2, Sci. Adv. 4, eaao6791 (2018).
[Zhao et al.(2021)] M. Zhao, B.-B. Chen, Y. Xi, Y. Zhao, et al., Kondo Holes in the Two-Dimensional Itinerant Ising Ferromagnet Fe_3GeTe_2, Nano Lett. 21, 6117–6123 (2021).
[Bao et al.(2022)] S. Bao, W. Wang, Y. Shangguan, Z. Cai, et al., Neutron Spectroscopy Evidence on the Dual Nature of Magnetic Excitations in a van der Waals Metallic Ferromagnet Fe_2.72GeTe_2, Phys. Rev. X 12, 011022 (2022).
[Deng et al.(2018)] Y. Deng, Y. Yu, Y. Song, J. Zhang, et al., Gate-tunable room-temperature ferromagnetism in two-dimensional Fe_3GeTe_2, Nature 563, 94–99 (2018).
[Fei et al.(2018)] Z. Fei, B. Huang, P. Malinowski, W. Wang, et al., Two-dimensional itinerant ferromagnetism in atomically thin Fe_3GeTe_2, Nat. Mater. 17, 778–782 (2018).
[Liu et al.(2020)] B. Liu, S. Liu, L. Yang, Z. Chen, et al., Light-Tunable Ferromagnetism in Atomically Thin Fe_3GeTe_2 Driven by Femtosecond Laser Pulse, Phys. Rev. Lett. 125, 267205 (2020).
[Zheng et al.(2020)] G. Zheng, W.-Q. Xie, S. Albarakati, M. Algarni, et al., Gate-Tuned Interlayer Coupling in van der Waals Ferromagnet Fe_3GeTe_2 Nanoflakes, Phys. Rev. Lett. 125, 047202 (2020).
[Wang et al.(2018)] Z. Wang, D. Sapkota, T. Taniguchi, K. Watanabe, D. Mandrus, and A. F. Morpurgo, Tunneling spin valves based on Fe_3GeTe_2/hBN/Fe_3GeTe_2 van der Waals heterostructures, Nano Lett., DOI: 10.1021/acs.nanolett.8b01278 (2018).
volume 18, pages 4303–4308 (year 2018)NoStop [Wang et al.(2019)Wang, Tang, Xia, He, Zhang, Liu, Wan, Fang, Guo, Yang, Guang, Zhang, Xu, Wei, Liao, Lu, Feng, Li, Peng, Wei, Yang, Shi, Zhang, Han, Zhang, Zhang, Yu, andHan]wang2019current author author Xiao Wang, author Jian Tang, author Xiuxin Xia, author Congli He, author Junwei Zhang, author Yizhou Liu, author Caihua Wan, author Chi Fang, author Chenyang Guo, author WenlongYang, author Yao Guang, author Xiaomin Zhang, author Hongjun Xu, author Jinwu Wei, author Mengzhou Liao, author Xiaobo Lu, author Jiafeng Feng, author XiaoxiLi, author Yong Peng, author Hongxiang Wei, author Rong Yang, author Dongxia Shi, author Xixiang Zhang, author Zheng Han, author Zhidong Zhang, author GuangyuZhang, author GuoqiangYu,and author XiufengHan, title title Current-driven magnetization switching in a van der Waals ferromagnet Fe_3GeTe_2, 10.1126/sciadv.aaw8904 journal journal Sci. Adv. volume 5, pages eaaw8904 (year 2019)NoStop [Zhu et al.(2016)Zhu, Janoschek, Chaves, Cezar, Durakiewicz, Ronning, Sassa, Mansson, Scott, Wakeham, Bauer, and Thompson]PhysRevB.93.144404 author author Jian-XinZhu, author Marc Janoschek, author D. S. Chaves, author J. C. Cezar, author Tomasz Durakiewicz, author Filip Ronning, author Yasmine Sassa, author Martin Mansson, author B. L. Scott, author N. Wakeham, author Eric D. Bauer,and author J. D. Thompson, title title Electronic correlation and magnetism in the ferromagnetic metal Fe_3GeTe_2, 10.1103/PhysRevB.93.144404 journal journal Phys. Rev. B volume 93, pages 144404 (year 2016)NoStop [Corasaniti et al.(2020)Corasaniti, Yang, Sen, Willa, Merz, Haghighirad, Le Tacon, and Degiorgi]PhysRevB.102.161109 author author M. Corasaniti, author R. Yang, author K. Sen, author K. Willa, author M. Merz, author A. A.Haghighirad, author M. Le Tacon,and author L. Degiorgi, title title Electronic correlations in the van der Waals ferromagnet Fe_3GeTe_2 revealed by its charge dynamics, 10.1103/PhysRevB.102.161109 journal journal Phys. Rev. B volume 102, pages 161109 (year 2020)NoStop [Chen et al.(2013)Chen, Yang, Wang, Imai, Ohta, Michioka, Yoshimura,and Fang]doi:10.7566/JPSJ.82.124711 author author Bin Chen, author JinHu Yang, author HangDong Wang, author Masaki Imai, author Hiroto Ohta, author Chishiro Michioka, author Kazuyoshi Yoshimura,and author MingHu Fang, title title Magnetic Properties of Layered Itinerant Electron Ferromagnet Fe_3GeTe_2, 10.7566/JPSJ.82.124711 journal journal J. Phys. Soc. Jpn. volume 82, pages 124711 (year 2013)NoStop [Xu et al.(2020)Xu, Li, Duan, Zhang, Chen, Kang, Liang, Chen, Xia, Xu, Malinowski, Xu, Chu, Li, Guo, Liu, Yang, andChen]PhysRevB.101.201104 author author X. Xu, author Y. W. Li, author S. R. Duan, author S. L. Zhang, author Y. J. Chen, author L. Kang, author A. J. Liang, author C. Chen, author W. Xia, author Y. Xu, author P. Malinowski, author X. D. Xu, author J.-H. Chu, author G. Li, author Y. F.Guo, author Z. K. Liu, author L. X. Yang, and author Y. L. Chen,title title Signature for non-stoner ferromagnetism in the van der waals ferromagnet Fe_3GeTe_2, 10.1103/PhysRevB.101.201104 journal journal Phys. Rev. B volume 101, pages 201104 (year 2020)NoStop [Calder et al.(2019)Calder, Kolesnikov, and May]PhysRevB.99.094423 author author S. Calder, author A. I. Kolesnikov,and author A. F. May, title title Magnetic excitations in the quasi-two-dimensional ferromagnet Fe_3xGeTe_2 measured with inelastic neutron scattering, 10.1103/PhysRevB.99.094423 journal journal Phys. Rev. 
B volume 99, pages 094423 (year 2019)NoStop [Bai et al.(2022)Bai, Lechermann, Liu, Cheng, Kolesnikov, Ye, Williams, Chi, Hong, Granroth, May, and Calder]PhysRevB.106.L180409 author author XiaojianBai, author Frank Lechermann, author Yaohua Liu, author Yongqiang Cheng, author Alexander I. Kolesnikov, author Feng Ye, author Travis J. Williams, author Songxue Chi, author Tao Hong, author Garrett E. Granroth, author Andrew F. May,and author Stuart Calder, title title Antiferromagnetic fluctuations and orbital-selective Mott transition in the van der Waals ferromagnet Fe_3xGeTe_2, 10.1103/PhysRevB.106.L180409 journal journal Phys. Rev. B volume 106, pages L180409 (year 2022)NoStop [Bayrakci et al.(2006)Bayrakci, Keller, Habicht, andKeimer]Bayrakci1926 author author S. P. Bayrakci, author T. Keller, author K. Habicht,andauthor B. Keimer, title title Spin-Wave Lifetimes Throughout the Brillouin Zone, 10.1126/science.1127756 journal journal Science volume 312, pages 1926–1929 (year 2006)NoStop [Wu et al.(2016b)Wu, Deng, Gardner, Vorderwisch, Li, Yano, Peng, and Imamovic]Wu_2016 author author C.-M. Wu, author G. Deng, author J.S. Gardner, author P. Vorderwisch, author W.-H. Li, author S. Yano, author J.-C.Peng,and author E. Imamovic, title title SIKA—the multiplexing cold-neutron triple-axis spectrometer at ANSTO, 10.1088/1748-0221/11/10/p10009 journal journal J. Inst. volume 11, pages P10009 (year 2016b)NoStop [Yano et al.(2020)Yano, Iles, Peng, and Wu]Yano2020 author author S. Yano, author G. N. Iles, author J.-Ch. Peng,andauthor Ch.-M. Wu, title title Current Status of the Taiwanese Cold Triple Axis Spectrometer, SIKA, at ANSTO, 10.1134/S1027451020070514 journal journal J. Surf. Investig. volume 14, pages S207–S212 (year 2020)NoStop [Zhao et al.(2009)Zhao, Adroja, Yao, Bewley, Li, Wang, Wu, Chen, Hu, and Dai]zhao2009spin author author Jun Zhao, author D. T. Adroja, author Dao-Xin Yao, author R. Bewley, author Shiliang Li, author X. F. Wang, author G. Wu, author X. H. Chen, author JiangpingHu,and author PengchengDai, title title Spin waves and magnetic exchange interactions in CaFe_2As_2, 10.1038/nphys1336 journal journal Nat. Phys. volume 5, pages 555–560 (year 2009)NoStop [Chen et al.(2020)Chen, Krivenko, Stone, Kolesnikov, Wolf, Reznik, Bedell, Lechermann, and Wilson]nc11_3076 author author Xiang Chen, author Igor Krivenko, author Matthew B. Stone, author Alexander I. Kolesnikov, author Thomas Wolf, author Dmitry Reznik, author Kevin S. Bedell, author Frank Lechermann,and author Stephen D. Wilson, title title Unconventional Hund metal in a weak itinerant ferromagnet, https://doi.org/10.1038/s41467-020-16868-4 journal journal Nat. Commun. volume 11, pages 3076 (year 2020)NoStop [Dietrich et al.(1976)Dietrich, Als-Nielsen, and Passell]PhysRevB.14.4923 author author O. W. Dietrich, author J. Als-Nielsen,and author L. Passell, title title Neutron scattering from the Heisenberg ferromagnets EuO and EuS. III. Spin dynamics of EuO, 10.1103/PhysRevB.14.4923 journal journal Phys. Rev. B volume 14,pages 4923–4945 (year 1976)NoStop [Halperin and Hohenberg(1969)]PhysRev.177.952 author author B. I. Halperin and author P. C. Hohenberg, title title Scaling Laws for Dynamic Critical Phenomena, 10.1103/PhysRev.177.952 journal journal Phys. Rev. volume 177, pages 952–971 (year 1969)NoStop [Liu et al.(2017)Liu, Ivanovski, and Petrovic]PhysRevB.96.144429 author author Yu Liu, author V. N. Ivanovski,and author C. 
Petrovic,title title Critical behavior of the van der Waals bonded ferromagnet Fe_3xGeTe_2, 10.1103/PhysRevB.96.144429 journal journal Phys. Rev. B volume 96, pages 144429 (year 2017)NoStop [Qi et al.(2013)Qi, Durakiewicz, Trugman, Zhu, Riseborough, Baumbach, Bauer, Gofryk, Meng, Joyce, Taylor, and Prasankumar]PhysRevLett.111.057402 author author J. Qi, author T. Durakiewicz, author S. A. Trugman, author J.-X. Zhu, author P. S. Riseborough, author R. Baumbach, author E. D. Bauer, author K. Gofryk, author J.-Q. Meng, author J. J.Joyce, author A. J. Taylor,and author R. P. Prasankumar, title title Measurement of Two Low-Temperature Energy Gaps in the Electronic Structure of Antiferromagnetic USb_2 Using Ultrafast Optical Spectroscopy, 10.1103/PhysRevLett.111.057402 journal journal Phys. Rev. Lett. volume 111, pages 057402 (year 2013)NoStop [Liu et al.(2020b)Liu, Zhang, Dong, Lee, Wei, Zhang, Chen, Yuan, Yang, and Qi]PhysRevLett.124.057404 author author Y. P. Liu, author Y. J. Zhang, author J. J. Dong, author H. Lee, author Z. X. Wei, author W. L. Zhang, author C. Y.Chen, author H. Q. Yuan, author Yi-feng Yang, and author J. Qi, title title Hybridization Dynamics in CeCoIn_5 Revealed by Ultrafast Optical Spectroscopy,10.1103/PhysRevLett.124.057404 journal journal Phys. Rev. Lett. volume 124,pages 057404 (year 2020b)NoStop [Du et al.()Du, Tang, Zhao, Li, Yang, Hu, Bai, Wang, Watanabe, Taniguchi, Shi, Yu, Bai, Hasan, Zhang, and Sun]https://doi.org/10.1002/adfm.201904734 author author Luojun Du, author Jian Tang, author Yanchong Zhao, author Xiaomei Li, author Rong Yang, author Xuerong Hu, author XueyinBai, author Xiao Wang, author Kenji Watanabe, author Takashi Taniguchi, author Dongxia Shi, author Guoqiang Yu, author Xuedong Bai, author Tawfique Hasan, author Guangyu Zhang,and author Zhipei Sun, title title Lattice Dynamics, Phonon Chirality, and Spin-Phonon Coupling in 2D Itinerant Ferromagnet Fe_3GeTe_2, https://doi.org/10.1002/adfm.201904734 journal journal Adv. Funct. Mater. volume 29, pages 1904734NoStop [Tang et al.(2023)Tang, Huang, Qin, Zhai, Ideue, Li, Meng, Nie, Wu, Bi, Zhang, Zhou, Chen, Qiu, Tang, Zhang, Wan, Wang, Liu, Tian, Iwasa, and Yuan]Tang2023 author author Ming Tang, author Junwei Huang, author Feng Qin, author Kun Zhai, author Toshiya Ideue, author Zeya Li, author Fanhao Meng, author AnminNie, author Linglu Wu, author Xiangyu Bi, author Caorong Zhang, author Ling Zhou, author Peng Chen, author Caiyu Qiu, author PeizheTang, author Haijun Zhang, author Xiangang Wan, author Lin Wang, author Zhongyuan Liu, author Yongjun Tian, author Yoshihiro Iwasa,and author Hongtao Yuan, title title Continuous manipulation of magnetic anisotropy in a van der Waals ferromagnet via electrical gating, 10.1038/s41928-022-00882-z journal journal Nat. Electron. volume 6, pages 28–36 (year 2023)NoStop [Gao et al.()Gao, Wang, Li, Yan, Shi, and Li]Gao2023KHM author author Yuan Gao, author Junsen Wang, author Qiaoyi Li, author Qing-Bo Yan, author Tao Shi,and author Wei Li, @nooptitle Magnon Damping Minimum and Logarithmic Scaling in a Kondo-Heisenberg Ferromagnet, note in submissionNoStop
Equivariance in Approximation by Compact Sets

Alison Rosenblum

January 14, 2024

=============================================

We adapt a construction of Gabrielov and Vorobjov for use in the symmetric case. Gabrielov and Vorobjov developed a means by which one may replace an arbitrary set S, definable in some o-minimal expansion of R, with a compact set T. T is constructed in such a way that for a given m>0 we have epimorphisms from the first m homotopy and homology groups of T to those of S. If S is defined by a boolean combination of statements h(x)=0 and h(x)>0 for various h in some finite collection of definable continuous functions, one may choose T so that these maps are isomorphisms for 0≤ k≤ m-1. In this case, T is also defined by functions closely related to those defining S.

In this paper we study sets S symmetric under the action of some finite reflection group G. One may see that in the original construction, if S is defined by functions symmetric relative to the action of G, then T will be as well. We show that there is an equivariant map T→ S inducing the aforementioned epimorphisms and isomorphisms of homotopy and homology groups. We use this result to strengthen theorems of Basu and Riener concerning the multiplicities of Specht modules in the isotypic decomposition of the cohomology spaces of sets defined by polynomials symmetric relative to 𝔖_n.

§ INTRODUCTION

Fix an o-minimal structure expanding a real closed field R (on occasion, we will specify that this real closed field is the field of real numbers; one may if desired assume as much throughout). We will take all sets, maps, etc. to be definable in this structure.

Gabrielov and Vorobjov in <cit.> develop a construction by which one may replace an arbitrary definable set S by a closed and bounded set T, such that we have epimorphisms (and in a special case isomorphisms) from the homotopy and homology groups of the approximating set to those of the original set. In this special case, termed the constructible case, we take S to be defined by a quantifier-free formula with atoms h>0 or h=0 for some finite number of continuous definable functions h. In this case, the procedure described in <cit.> gives an explicit description of, and control over the number of, the functions defining T. Gabrielov and Vorobjov's paper describes the ramifications of this in calculating upper bounds on Betti numbers for sets definable in certain structures.

In <cit.>, Basu and Riener leverage properties of symmetry (particularly, symmetry relative to the standard action of 𝔖_n on R^n) to study the Betti numbers of semialgebraic sets defined by symmetric polynomials of bounded degree, developing algorithms with favorable complexity bounds. The paper makes use of the construction of Gabrielov and Vorobjov presented in <cit.>, replacing S with a set T defined by closed conditions and then applying various results requiring closedness in order to determine the structure of the cohomology spaces of T. From Gabrielov and Vorobjov's results, we know that the cohomology spaces of T are isomorphic to those of S. If we assume S is defined by symmetric polynomials of degree bounded by some d at least 2, then the set produced by the Gabrielov-Vorobjov construction is also defined by symmetric polynomials of degree no more than d. However, it is not immediately clear whether the maps on the level of homotopy and homology are equivariant.
Without equivariance, we cannot conclude that the cohomology spaces of S and T have the same structures as 𝔖_n-modules, meaning that results for T only translate back to S in part.

This paper establishes an equivariant version of the construction of Gabrielov and Vorobjov in <cit.>. We introduce a few pieces of notation in order to present our main theorem. Let S={x∈R^n|F(x)} be a set defined by a formula F. Say P={h_1,…, h_s} is a finite collection of continuous definable functions R^n→R. If F is a boolean combination of statements of the form h=0 and h>0 for h∈P, we call F a P-formula and S a P-set. If F is a monotone boolean combination (i.e. without negations) of statements of the form h≥ 0 and h≤ 0 for h∈P, then we say that F is a P-closed formula and S is a P-closed set.

[<cit.> Definition 1.7] When we say a statement holds for

0<ε_0≪ε_1≪⋯≪ε_m ≪ 1

we mean that for each 0≤ i≤ m there is a definable function f_i:(0,1)^m-i→ (0,1) (i.e. f_m is a constant in (0,1)) such that the statement holds for all ε_0,…, ε_m which satisfy, for 0≤ i≤ m, the condition that 0<ε_i<f_i(ε_i+1,…,ε_m). (For instance, the statement ε_0<ε_1^2 holds for 0<ε_0≪ε_1≪ 1: one may take f_1=1/2 and f_0(ε_1)=ε_1^2.)

Our equivariant adaptation of the main theorem of Gabrielov and Vorobjov in <cit.> is then as follows. Let G be a finite reflection group acting on R^n, and let S⊂R^n be a definable set symmetric under the action of G. Choose an integer m>0.

* (Definable Case) Say that S is represented in a compact symmetric set A by families {S_δ}_δ>0 and {S_δ,ε}_δ,ε>0 of compact sets (see Definition <ref>), and that each S_δ and S_δ,ε is also symmetric under the action of G. Then for parameters 0<ε_0,δ_0,…,ε_m,δ_m<1, there exists a compact symmetric set T=T(ε_0,δ_0,…,ε_m,δ_m) such that, for 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1, we have an equivariant map ψ:T→ S inducing equivariant epimorphisms

ψ_#,k :π_k(T,*)→π_k(S,*')
ψ_*,k :H_k(T)→ H_k(S)

for each 1≤ k≤ m.

* (Constructible Case) Say that S is a P-set for P={h_1,…,h_s} a collection of continuous definable functions having the property that h∘ g∈P for all g∈ G and h∈P. For parameters r>0 and 0<ε_0,δ_0,…,ε_m,δ_m<1, let

P'=⋃_h∈P∪{r^2-(X_1^2+⋯+ X_n^2)}⋃_j=0^m{h±ε_j, h±δ_j}

Then there exists a P'-closed and bounded set T such that for sufficiently large r and 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1, there exists an equivariant map ψ:T→ S inducing equivariant homomorphisms

ψ_#,k :π_k(T,*)→π_k(S,*')
ψ_*,k :H_k(T)→ H_k(S)

which are isomorphisms for 1≤ k≤ m-1 and epimorphisms for k=m. If m≥dim(S), then ψ induces a homotopy equivalence T≃ S. Note that in particular if all functions in P are symmetric relative to G, then the same is true for P'.

Parts (a) and (b) of this theorem are proved in Theorem <ref> and Corollary <ref> respectively. Our arguments follow the strategy of <cit.>, with adjustments to ensure equivariance. The maps composed to obtain ψ in the definable and constructible cases are displayed in the summary in Subsection <ref>.

To construct the maps of homotopy and homology groups in <cit.>, Gabrielov and Vorobjov first consider a triangulation adapted to S of some larger compact set A. Thus, a major step in our paper concerns proving the existence of a triangulation with the needed symmetry and equivariance properties for a given symmetric, definable set. This is done in Theorem <ref>. We also establish equivariant versions of a few other theorems as needed.

Section <ref> contains background and definitions. Section <ref> describes the construction of the approximating set T in the definable and constructible cases.
The results relating to symmetric triangulation and equivariant versions of the other background theorems can be found in Sections <ref> and <ref> respectively. Finally, in Section <ref>, we assemble our results to prove the existence of an equivariant map T→ S inducing the promised epimorphisms and isomorphisms of homotopy and homology groups. Section <ref> discusses the ramifications for the results of Basu and Riener in <cit.>.

The authors wish to express heartfelt appreciation for the advice of Dr. Gabrielov throughout, and particularly concerning Theorem <ref>.

§ BACKGROUND

Much of the information here is relatively standard. We include these definitions for completeness, and to establish a few conventions.

§.§ Symmetry

Let G be a group acting on a set X. A subset Y⊂ X is said to be symmetric with respect to the action of G on X if for each g∈ G and y∈ Y, we have g(y)∈ Y.

We extend the definition of a symmetric function, standard for the usual action of 𝔖_n on R^n, to more general group actions. Let f:X→ Y for sets X and Y, and let G be a group acting on X. We will say that f is symmetric with respect to the action of G if we have that f(g(x))=f(x) for all x∈ X and g∈ G.

Let f:X→ Y for some sets X and Y, and let G be a group which acts on both X and Y. We say f is equivariant with respect to the action of G if for all g∈ G, the diagram

	X --f--> Y
	|g       |g
	v        v
	X --f--> Y

commutes, i.e. f(g(x))=g(f(x)) for all x∈ X.

We do need to give consideration to symmetry and equivariance for homotopy groups. Let X be a topological space, with G a group acting on X. Then for g∈ G and basepoint x_0∈ X, the map g:X→ X induces maps g_#,k:π_k(X,x_0)→π_k(X,g(x_0)) of homotopy groups for each k≥ 0. Thus, for a pointed space (X,*), we understand that the induced action of g on any π_k(X,*) will also change the basepoint. For this reason, we will be careful to maintain basepoint notation in reference to homotopy groups. In particular, we must give attention to our basepoint when considering equivariance of maps on the level of homotopy groups. Let f:π_k(X,*)→π_k(Y,*') for topological spaces (X,*) and (Y,*'), and let G be a group acting on X and Y. We will say that f is equivariant if the diagram

	π_k(X,*) ----f---> π_k(Y,*')
	    |g                 |g
	    v                  v
	π_k(X,g(*)) --f--> π_k(Y,g(*'))

commutes.

§.§ Reflection Groups

Let V be a finite dimensional real vector space equipped with an inner product, and let O(V) denote the group of all orthogonal linear transformations of V. We will for the sake of clarity distinguish a linear hyperplane (a codimension 1 vector subspace, which must therefore contain 0) from an affine hyperplane (a codimension 1 affine subspace, which need not pass through the origin). If P is a hyperplane (in either sense), then the complement V∖ P has two connected components, which are called (linear or affine, resp.) half-spaces.

Let G be a finite subgroup of O(V). A subset H⊂ V is a fundamental region of V relative to G provided

* H is an open subset of V
* H∩ g(H)=∅ for all e≠ g∈ G
* V=⋃_g∈ Gg(H̄)

Let G be a finite subgroup of O(V). Then there exists a finite collection {H_1,…, H_k} of linear half-spaces such that the region

H=⋂_i=1^k H_i

is a fundamental region of V relative to G.

We are interested in the case where G is a finite reflection group acting on R^n (with the standard inner product), i.e. G is generated by elements g whose action on R^n is given by reflection across some linear hyperplane P_g.
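To make the preceding definitions concrete, the following numerical sketch (in Python; the example and all names are ours and purely illustrative, not part of the formal development) spot-checks the fundamental region axioms for 𝔖_3 acting on R^3 by permuting coordinates, taking H={x∈R^3| x_1<x_2<x_3}.

```python
import itertools
import numpy as np

# A numerical spot-check (illustrative only) of the fundamental region axioms
# for G = S_3 acting on R^3 by permuting coordinates, with
# H = {x in R^3 : x_1 < x_2 < x_3}.

perms = list(itertools.permutations(range(3)))

def act(sigma, x):
    # the standard action: sigma(x) = (x_sigma(1), ..., x_sigma(n))
    return tuple(x[i] for i in sigma)

def in_H(x):          # membership in the open region H
    return x[0] < x[1] < x[2]

def in_H_closure(x):  # membership in the closure of H
    return x[0] <= x[1] <= x[2]

rng = np.random.default_rng(0)
for _ in range(1000):
    x = tuple(rng.normal(size=3))
    # x lies in a translate of H for at most one group element (disjointness)
    assert sum(in_H(act(sigma, x)) for sigma in perms) <= 1
    # ... and in some translate of the closure of H (the translates cover R^3)
    assert any(in_H_closure(act(sigma, x)) for sigma in perms)
print("fundamental region axioms hold at all sampled points")
```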
We will frequently use the fact that any affine hyperplane P in R^n coincides with some {x∈R^n| L(x)=0}, where L:R^n→R is of the form L(x_1,…,x_n)=a_0+a_1x_1+⋯+a_nx_n for some a_0,…, a_n∈R. P is a linear hyperplane iff a_0=0.

Say that G is a finite reflection group acting on R^n. Then, as detailed in <cit.> Chapter 4, we can select a subset {g_1,…, g_k} of non-identity elements of G such that, if g_i acts by reflecting through the hyperplane P_i, we can choose functions L_i with P_i given by L_i(x)=0 so that

H=⋂_i=1^k {x∈R^n| L_i(x)>0}

is a fundamental region of R^n relative to G. The sets of the form (L_1≥ 0)∩⋯∩ (L_i-1≥ 0)∩ (L_i=0)∩ (L_i+1≥ 0)∩⋯∩ (L_k≥ 0) are called the walls of this fundamental region.

In this description, for g∈ G we have that there exists a subset λ_g of {1,…,k} such that

g(H̄)⊂⋂_i∈λ_g{L_i(x)≤ 0}∩⋂_i∈{1,…,k}∖λ_g{L_i(x)≥ 0}

Then H̄∩ g(H̄) is given by

H_λ_g=⋂_i∈λ_g{L_i(x)=0}∩⋂_i∈{1,…,k}∖λ_g{L_i(x)≥ 0}

which is an intersection of walls of H̄. H_λ_g is also the set of points of H̄ fixed by the action of g.

In our primary example, G=𝔖_n acts on R^n via the standard action given by the permutation of coordinates, i.e. for σ∈𝔖_n and x=(x_1,…,x_n)∈R^n we have σ(x)=(x_σ(1),…, x_σ(n)). Here, we may take as a fundamental region the interior of the Weyl chamber

𝒲_n={(x_1,…, x_n)∈R^n| x_1<…<x_n}

(see <cit.> Notation 9). The walls of the Weyl chamber correspond to the adjacent transpositions s_i=(i  i+1)∈𝔖_n (i.e. the standard Coxeter generators of 𝔖_n). Each s_i acts by reflecting through the hyperplane given by x_i=x_i+1, so we have that the walls of the Weyl chamber are the sets

𝒲_n^s_i={(x_1,…, x_n)∈R^n| x_1≤…≤ x_i=x_i+1≤…≤ x_n}

for 1≤ i≤ n-1 (see <cit.> Notation 11).

§.§ Simplicial Complexes

Our proofs make reference to both abstract and concrete simplicial complexes. Accordingly, we present both viewpoints here, beginning with the concrete setting.

Let the points v_0,…, v_d∈R^n be affinely independent. Then the open simplex (of dimension d) with vertices v_0,…, v_d is the set

Δ(v_0,…, v_d)={x ∈R^n| x=t_0v_0+t_1v_1+⋯+t_dv_d for some t_0,…,t_d∈ (0,1] with t_0+⋯+t_d=1}

The corresponding closed simplex is

Δ̄(v_0,…, v_d)={x ∈R^n| x=t_0v_0+t_1v_1+⋯+t_dv_d for some t_0,…,t_d∈ [0,1] with t_0+⋯+t_d=1}

A face of a simplex Δ(v_0,…,v_d) is a simplex Δ(u_0,…, u_d') with vertex set {u_0,…,u_d'}⊂{v_0,…,v_d}. If we are drawing our vertices from an indexed set {v_i}_i∈ I, we may simply refer to a simplex Δ(v_i_0,…,v_i_d) as Δ(i_0,…, i_d).

A (finite) simplicial complex in R^n is a finite collection Λ of open simplices in R^n with the following properties

* If Δ∈Λ, then for each face Δ' of Δ, we have that Δ'∈Λ
* If Δ_1,Δ_2∈Λ, then Δ̄_1∩Δ̄_2 is either empty or equal to Δ̄_3 for some Δ_3∈Λ.

Given a simplicial complex Λ in R^n, we call ⋃_Δ∈ΛΔ⊂R^n the geometric realization of Λ, and denote it |Λ|. If not clear from context, we may include a subscript to indicate where the realization is taking place. For example, say Λ is a simplicial complex in R^n seen as a subset of R^n×R^m. Then we may wish to distinguish between |Λ|_R^n and |Λ|_R^n+m, the realizations of Λ in R^n and R^n+m respectively.

Let Δ,Δ'∈Λ for some simplicial complex Λ. Then Δ' is a subsimplex of Δ if Δ'≠Δ and Δ'⊂Δ̄ (in other words, if Δ' is a proper face of Δ).

A k-flag of cells in a CW complex (so in particular, of simplices in a simplicial complex) is a sequence σ_0,…, σ_k of cells such that σ_i is contained in the boundary of σ_i-1 for each 1≤ i≤ k.
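Membership in the open versus the closed simplex can be tested directly from the definitions above by solving for the coefficients t_0,…,t_d; the following sketch (again ours and purely illustrative, with hypothetical helper names and tolerances) does exactly this.

```python
import numpy as np

# Decide membership in the open/closed simplex on affinely independent
# vertices v_0, ..., v_d by solving x = t_0 v_0 + ... + t_d v_d subject to
# t_0 + ... + t_d = 1.

def barycentric(verts, x):
    V = np.array(verts, dtype=float).T            # columns are the v_i
    A = np.vstack([V, np.ones(len(verts))])       # last row: sum of t_i = 1
    b = np.append(np.asarray(x, dtype=float), 1.0)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t if np.allclose(A @ t, b) else None   # None: x not in affine span

def in_open_simplex(verts, x, tol=1e-12):
    t = barycentric(verts, x)
    return t is not None and bool(np.all(t > tol))

def in_closed_simplex(verts, x, tol=1e-12):
    t = barycentric(verts, x)
    return t is not None and bool(np.all(t > -tol))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]        # a 2-simplex in R^2
print(in_open_simplex(tri, (0.2, 0.2)))           # True: an interior point
print(in_open_simplex(tri, (0.5, 0.0)))           # False: lies on a proper face
print(in_closed_simplex(tri, (0.5, 0.0)))         # True
```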
When speaking of symmetry for any CW complex (and in particular the geometric realization of a simplicial complex Λ in R^n), we will in general consider the action of G on our decomposition to be that induced by a given action of G on our larger space: Let G be a group acting on R^n. A CW complex X is symmetric with respect to the action of G if for each cell σ of X and each g∈ G, g(σ)={g(x)| x∈σ} is again a cell of X. In particular, this means that X is symmetric as a subset of R^n.

Now, we shift our attention to abstract simplicial complexes. An abstract simplicial complex is a set V of vertices and a set Λ of finite nonempty subsets of V which we consider to be simplices, having the properties

* {v}∈Λ for each v∈ V (each vertex is a simplex)
* any nonempty subset of a simplex is a simplex

One can and often does ignore the distinction between an abstract simplicial complex and its set of simplices. An abstract simplicial complex also comes with a geometric realization, which we will explicitly describe for the sake of later use.

Let Λ be a nonempty abstract simplicial complex. The geometric realization of Λ, denoted by |Λ|, is the space whose points are functions α from the set of vertices of Λ to the interval [0,1]⊂R such that

* The set {v∈Λ|α(v)≠ 0} is a simplex of Λ
* ∑_v∈Λα(v)=1

appropriately topologized (as described in <cit.>). It will be convenient to refer to the point α using the notation ∑_v∈Λα(v) v.

Let Λ be any simplicial complex and let X be a topological space which is a subset of a real vector space, appropriately topologized (see <cit.> for details; in particular euclidean space and geometric realizations of simplicial complexes meet the criteria). A continuous map f:|Λ|→ X is said to be linear (on Λ) if for α∈|Λ|, we have

f(α)=∑_v∈Λα(v) f(v)

Following <cit.>, we will say an abstract simplicial complex is symmetric with respect to a group G acting on its set of vertices if the map given by g is simplicial (i.e. g carries simplices to simplices) for each g∈ G. An action on Λ induces a linear action on |Λ|: namely, g(α) is the function from the vertices of Λ to [0,1] given by g(α)(v)=α(g(v)).

Any simplicial complex Λ comes with a face poset F(Λ), where the simplices of Λ are ordered by σ_1≤σ_2 if σ_1 is a face of σ_2. For X a CW complex, we will still use the notation F(X) for the cell poset of X (which has as points the cells σ of X and order given by σ_1≤σ_2 if σ_1⊂σ̄_2).
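The description of |Λ| as a space of weight functions lends itself to a direct encoding. The following toy sketch (ours; the complex, the boundary of a triangle, and all names are hypothetical) represents points of |Λ| as such functions α and applies the induced action g(α)(v)=α(g(v)).

```python
from itertools import permutations

# Points of the realization as weight functions alpha: vertices -> [0, 1].
# The complex here is the boundary of a triangle, on which every permutation
# of the three vertices acts simplicially.

vertices = (0, 1, 2)
simplices = {frozenset(s) for s in [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2)]}

def in_realization(alpha):
    support = frozenset(v for v in vertices if alpha[v] != 0)
    return support in simplices and abs(sum(alpha.values()) - 1.0) < 1e-12

def act(g, alpha):
    # the induced linear action: (g . alpha)(v) = alpha(g(v))
    return {v: alpha[g[v]] for v in vertices}

alpha = {0: 0.25, 1: 0.75, 2: 0.0}          # a point of the open edge (0, 1)
assert in_realization(alpha)
for perm in permutations(vertices):         # each g in S_3, as a vertex map
    g = dict(zip(vertices, perm))
    assert in_realization(act(g, alpha))    # the action preserves the realization
print("the induced action carries points of the realization to points of it")
```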
§.§ Miscellaneous Background

The following notions will appear later in the paper.

Let A⊂R^n and x∈R^n∖ A. Then the cone with vertex x and base A is the set

{ta+(1-t)x| a∈ A, t∈ [0,1]}

(the union of all line segments from x to a point in A).

We say a family {S_x}_x∈R^m of subsets of R^n, for R some real closed field, is a definable family (with respect to some o-minimal structure on R) if there is a definable set S'⊂R^m+n such that S_x is equal to {y∈R^n| (x,y)∈ S'} for each x∈R^m. It follows that each S_x is a definable set.

The reader unfamiliar with o-minimal geometry is also encouraged to review the cell decomposition theorem (<cit.> Chapter 3 Section 2 or <cit.> Section 2.2) before approaching our equivariant triangulation proofs in Section <ref>.

For a connected topological space X, we let π_k(X) denote the kth homotopy group. Let H_k(X) be the kth singular homology group with coefficients in some fixed Abelian group, and denote by b_k(X)=rank(H_k(X)) the kth Betti number of X. Throughout, we will use ≃ to denote homotopy equivalence and ≅ to denote group isomorphism.

The following two theorems are used heavily in Gabrielov and Vorobjov's proofs in <cit.> and referenced also in our own arguments. These do not require equivariant versions; if a map f:X→ Y is equivariant, the induced homomorphisms of homotopy and homology groups will be as well.

A map f:X→ Y between connected CW complexes is a weak homotopy equivalence (i.e. the induced homomorphism of homotopy groups f_#,k:π_k(X)→π_k(Y) is an isomorphism for each k>0) iff f is a homotopy equivalence.

Let f:X→ Y be a continuous map between path connected topological spaces. If there is a k>0 such that the induced homomorphism of homotopy groups f_#,j:π_j(X)→π_j(Y) is an isomorphism for j<k and an epimorphism for j=k, then the induced homomorphism of homology groups f_*,j:H_j(X)→ H_j(Y) is an isomorphism for j<k and an epimorphism for j=k.

§ THE GABRIELOV-VOROBJOV CONSTRUCTION

In this section, we describe the construction of the approximating set T, as given by Gabrielov and Vorobjov in <cit.>. We also discuss the implications when symmetry is introduced to the construction.

In order to construct T, we begin with families of compact sets {S_δ} and {S_δ,ε} which represent S in some larger compact set A. Let S⊂R^n be definable, and let A⊂R^n be a compact definable set with S⊂ A. Let {S_δ}_δ>0 and {S_δ,ε}_δ,ε>0 be definable families of compact subsets of A. We say S is represented by {S_δ}_δ>0 and {S_δ,ε}_δ,ε>0 in A if we have that

* for all δ',δ∈ (0,1), if δ'>δ, then S_δ'⊂ S_δ
* S=⋃_δ>0 S_δ

and furthermore for each δ>0

* for all ε',ε∈ (0,1), if ε'>ε, then S_δ,ε⊂ S_δ,ε'
* S_δ=⋂_ε>0 S_δ,ε
* for all δ' sufficiently smaller than δ and for all ε'>0 there exists a set U⊂ A with U open in A and S_δ⊂ U⊂ S_δ',ε'
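The simplest instance of a represented set may help fix intuition. The sketch below (a toy example of ours, not the construction prescribed in the next subsection) spot-checks the conditions above for S=(0,1)⊂R inside A=[-2,2], taking S_δ=[δ,1-δ] and S_δ,ε=[δ-ε,1-δ+ε].

```python
# Intervals are encoded as (lo, hi) pairs; all choices below are illustrative.

def S_delta(d):
    return (d, 1 - d)                 # compact, and increasing as d -> 0

def S_delta_eps(d, e):
    return (d - e, 1 - d + e)         # compact, and decreasing as e -> 0

def contains(outer, inner):
    return outer[0] <= inner[0] and inner[1] <= outer[1]

d, d_big, e, e_big = 0.10, 0.30, 0.01, 0.02
assert contains(S_delta(d), S_delta(d_big))                # monotone in delta
assert contains(S_delta_eps(d, e_big), S_delta_eps(d, e))  # monotone in eps
# the intersection over eps of the S_{delta,eps} shrinks down to S_delta:
print([S_delta_eps(d, 10.0 ** -k) for k in (1, 2, 3)])
# for delta' sufficiently smaller than delta, U = (delta', 1 - delta') is an
# open set with S_delta contained in U contained in S_{delta',eps'}:
d_small = 0.05
U = (d_small, 1 - d_small)
assert contains(U, S_delta(d)) and contains(S_delta_eps(d_small, e), U)
```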
Moreover, if m≥(S), then T≃ S.In applications, even in the definable case one may via this construction obtain improved upper bounds on the first m Betti numbers of a given set S. Our primary setting of interest, the one considered by Basu and Reiner in <cit.>, utilizes the constructible case.Our main theorem (Theorem <ref>) is an equivariant version of the above theorem. It is clear from the definition of T that, so long as each set in the family {S_δ,} is symmetric relative to the action of some group G, then T will be as well. In the definable case, we need only assume that we have chosen our families to consist of symmetric sets. In the constructible case, we will show that if S is symmetric and the collection {h_1,…, h_s} is invariant under the action of G, our choices will produce symmetric sets. In subsequent sections, we show that for G a finite reflection group acting on R^n, we can in fact construct an equivariant map ψ:T→ S which induces the desired isomorphisms and epimorphisms ψ_#,k and ψ_*,k on the level of homotopy and homology. The remainder of this section is devoted to describing the sets S_δ and S_δ, in the constructible case. §.§ Constructible Case: Bounding S Let S⊂^n be definable, and assume S is unbounded. Via the conical structure at infinity of definable sets, there exists an r∈, r>0, such that S is (definably) homotopy equivalent to S∩B(0,r). Basu, Pollack, and Roy in <cit.> show this for semialgebraic sets. However, their proof holds for any o-minimal structure in which addition and mulitpication are definable. The proof in <cit.> centers on the local conical structure specifically of semialgebraic sets, but this property (a consequence of Hardt triviality) holds for any o-minimal expansion of some real closed field (see <cit.> Chapter 9 Theorem 2.3). The remaining steps of their proof may be performed in any o-minimal structure provided it contains addition and mulitpication. Let A⊂^n be definable. Then there exists an r∈, r>0, such that A is definably homotopy equivalent to A∩B(0,r). Specifically, there exists a continuous definable function h:[0,1]× A→ A with h(0,-)=𝕀_A, h(1,-) having image contained in A∩B(0,r), and with h(t,a)=a for each t∈ [0,1] and a∈ A∩B(0,r). In our case, for any set S⊂R^n that is symmetric under the action of a finite reflection group G, certainly S∩B(0,r) is as well. The inclusion S∩B(0,r)↪ S is also clearly equivariant, and so induces equivariant isomorphisms of homotopy and homology groupsπ_k(S∩B(0,r),*) →π_k(S, *)H_k(S∩B(0,r)) → H_k(S)for each k≥ 0.We now replace S by S∩B(0,r) to assume henceforth that S is bounded. If S was a P-set for some P={h_1,…,h_s}, we replace P with P∪{r^2-(X_1^2+⋯ X_n^2)}, and so increase s by one. Since r^2-(X_1^2+⋯ +X_n^2) is symmetric relative to the action of any finite reflection group G, our collection P remains invariant (and in fact if each h_i∈P was symmetric, the same remains true after including this new function in our collection). Note also that in the case that each h_i(x) is a polynomial, the only instance in which we have increased the maximum degree among our collection is if each of {h_1,…,h_s} had degree one.This is slightly different from the procedure employed by Gabrielov and Vorobjov in <cit.>, where the larger compact set A was taken as the definable one-point compactification of R^n. This choice aids certain results on Betti numbers in <cit.>, but we find our own method more convenient when tracking the group's action. 
Since Gabrielov and Vorobjov bound their sets {S_δ} and {S_δ,} in the constructible case by intersecting with closed balls of radius 1/δ, our method does not stray too far from the intuition of the original. Our assumption will increase the number of equations needed to define T, but only slightly. §.§ Constructible Case: the Sets {S_δ} and {S_δ,} In the constructible case (with the assumption that S is bounded), we form families {S_δ}_δ>0 and {S_δ,}_δ,>0 by decomposing S into sign sets of the functions defining S, and then contracting inequalities and expanding equalities by a factor of δ orrespectively, in a manner we now proceed to describe. Let P={h_1,…,h_s} be a finite collection of functions with each h_i:^n→. Let {I_0,I_+,I_-} be a partition of {1,…, s}. Then the set B_P,(I_0,I_+,I_-) given by{x∈^n|⋀_i∈ I_0 (h_i(x)=0) ∧⋀_i∈ I_+ (h_i(x)>0) ∧⋀_i∈ I_- (h_i(x)<0) }will be called the sign set of P corresponding to the tuple (I_0,I_+,I_-). Note that any two distinct sign sets of some P={h_1,…,h_s} are disjoint, and that for S a P-set, we may write S as a union of sign sets of the functions in P. Though for a given collection P of functions, some tuples (I_0,I_+,I_-) will produce empty sign sets, we will exclude these tuples from the sign set decomposition of any P-set.In the constructible case, we make the following choices for A, {S_δ} and {S_δ,}. Let S⊂R^n be a bounded P-set for some collection P={h_1,…,h_s} of continuous definable functions h:R^n→R. Let B denote the set of tuples (I_0,I_+,I_-) corresponding to elements in the sign set decomposition of S.Let r>0 be such that S⊂B(0,r). Then take A=B(0,r') for some r' sufficiently larger than r. For each δ>0, we let S_δ be the union of sets defined by⋀_i∈ I_0 (h_i=0) ∧⋀_i∈ I_+ (h_i≥δ) ∧⋀_i∈ I_- (h_i≤-δ)over all tuples (I_0,I_+,I_-)∈ℬ. For δ,>0, we let S_δ, be the union of the sets given by⋀_i∈ I_0 (-≤ h_i≤) ∧⋀_i∈ I_+ (h_i≥δ) ∧⋀_i∈ I_- (h_i≤-δ)again over all tuples (I_0,I_+,I_-)∈ℬ. One may check that S is indeed represented by the families {S_δ}_δ>0 and {S_δ,}_δ,>0 in A. It remains to consider the conditions needed to ensure that the sets S_δ and S_δ, are symmetric. Assume we are in the constructible case. If S is symmetric under the action of G and furthermore we have that for each g∈ G, {h_1∘ g,…, h_s∘ g}={h_1,…, h_s}, then every set in the families {S_δ}_δ>0 and {S_δ,}_δ,>0 is symmetric. Let P={h_1,…, h_s}. Note that for a sign set B=B_P, (I_0,I_+,I_-) and any g∈ G, g(B)={g(x)|x∈ B} is given by{x∈R^n|⋀_i∈ I_0 (h_i(g^-1(x))=0)∧⋀_i∈ I_+ (h_i(g^-1(x))>0) ∧⋀_i∈ I_- (h_i(g^-1(x))<0) } For each g∈ G, we define a function on the indices{1,…,s}→{1,…,s}, given by g(i)=j if h_j=h_i∘ g^-1.Let ℬ be the collection of tuples (I_0,I_+,I_-) corresponding to the sign sets contained in S. We claim ℬ has the property that for each g∈ G, if (I_0,I_+,I_-)∈ℬ then (g(I_0), g(I_+), g(I_-))∈ℬ. Indeed, if B is the sign set corresponding to (I_0,I_+,I_-), we have that B⊂ S and hence g(B)⊂ S. But we have seen above that g(B) is the sign set corresponding to (g(I_0), g(I_+), g(I_-)). The symmetry of each S_δ and each S_δ, follows from this property and the descriptions of each S_δ and S_δ, given in Definition <ref>. In particular, if each h_i is a symmetric function, then h_i∘ g=h_i, and so clearly {h_1∘ g,…, h_s∘ g}={h_1,…, h_s} for every g∈ G. §.§ Connected ComponentsFrom the definitions (in both the definable and constructible cases), we see a clear correspondence between the connected components of T and those of S. 
Let S be represented in A by any compact families {S_δ} and {S_δ,}, and let T=T(_0,δ_0,…,_m,δ_m) be as described in Definition <ref>. For 0<_0≪δ_0≪⋯≪_m≪δ_m≪ 1, let S and T denote the sets of connected components of S and T respectively. Gabrielov and Vorobjov describe a map C:T→S. If S' is a connected component of S, then those elements of T which map to S' under C are exactly those components which would make up the approximating set defined relative to S' alone. Thus, by the symmetry of T and S, C is equivariant. Gabrielov and Vorobjov demonstrate in <cit.> Lemma 1.9 that, provided m>0, we may choose the conditions upon the parameters in such a way that C is bijective. Hence, though we can't simply reduce to the case of a connected set without risking a loss of symmetry, we may when needed use C to pair the connected components of S and T and then apply the needed theorems to each pair individually.§ SYMMETRIC TRIANGULATION The construction of the maps in <cit.> relies upon triangulating the given set S inside the larger compact set A. Definable sets are triangulable (see for example <cit.> Theorem 4.4, quoted thm:Coste4.4below). However, in order to ensure equivariance, we will need a triangulation that respects the action of our group. That is the aim of this section. §.§ Triangulation of Definable SetsLet A be a closed bounded definable subset of ^n. Then a triangulation (Λ,Φ) of A is a finite simplicial complex Λ intogether with a definable homeomorphism Φ:Λ→ A. If we wish to decompose a set which is not necessarily closed into images of simplices, we must do so within some larger compact set. For A⊂^n compact and S⊂ A, we will say a triangulation (Λ, Φ) of A is adapted to S if S is a union of images by Φ of simplices of Λ. Triangulations of compact definable sets always exist, and may be adapted to any finite collection of subsets. Let A be a closed and bounded definable subset of ^n, and let S_1,…, S_l be definable subsets of A. Then there exists a triangulation (Λ,Φ) of A adapted to each of S_1,…, S_l. Furthermore, we may choose all vertices of Λ to be in Q^n. We need to triangulate the symmetric set S in a manner that retains symmetry relative to our group of interest. Specifically, we would like a triangulation with the following properties: Let G be a finite reflection group acting on ^n and let A⊂^n be a closed bounded definable set symmetric relative to G. A triangulation (Λ, Φ) of A is said to be an equivariant triangulation if Λ is symmetric as a simplicial complex and the map Φ:Λ→ A is equivariant. Our general strategy for obtaining an equivariant triangulation of A adapted to S involves triangulating a fundamental region of R^n, and then applying the action of G. This strategy has appeared in other contexts; see in particular <cit.> which proves such a result for smooth manifolds. We would like to establish a proof in the definable case. Furthermore, we would like our proof to follow the spirit of <cit.> Theorem 4.4, in which we create a concrete simplicial complex in R^n symmetric relative to our existing action of G.In order to make this approach feasible in the concrete setting, we must triangulate the portion of A lying in our fundamental region in such a way that the resulting simplicial complex lies within the same fundamental region of ^n, with points lying in its walls remaining as such. This motivates the following definition. Let B⊂^n. 
If a triangulation (Λ,Φ) of A⊂^n is adapted to A∩ B and furthermore we have x∈ A∩ B if and only if Φ^-1(x)∈Λ∩ B, then we say the triangulation respects the set B. To show that we may find a triangulation respecting our fundamental region and its walls, we will in fact prove something slightly stronger: for an arrangement of affine hyperplanes given by some collection {L_1(x)=0, …, L_k(x)=0} of linear functions, we can find a triangulation of A that respects any of the various half-spaces determined by our hyperplanes (for convenience, we will phrase this as respecting sign sets of {L_1,…,L_k}). We lose the property that all vertices lie in Q^n, but the vertices of our triangulation will be rational expressions in the coefficients of L_1,…, L_k. Since our application does not in fact require any such condition on the vertices, this is not an issue here.The reader may wish to study the proof of the following lemma in conjunction with Example <ref>, in order to see concrete usage of the notation set forth in the proof. Let A⊂^n be closed, bounded, and definable, and let S_1,…, S_l be definable subsets of A. Let {L_1,…, L_k} be a collection of functions with each L_i:^n→ given by L_i(x)=a_i,0+a_i,1x_1+⋯+a_i,nx_n for some a_i,j∈. Then there exists a triangulation (Λ,Φ) of A a which is adapted to each S_1,…, S_l and which respects all sign sets of {L_1,…, L_k}. Furthermore, we can choose this triangulation so that the coordinates of all vertices are in Q({a_i,j| 1≤ i≤ k, 0≤ j≤ n})^n. We can obtain a triangulation with the desired properties by way of a few slight modifications to the proof of Theorem <ref> presented in <cit.>. Specifically, the proof there chooses a number of points b_ν which serve as vertices in the simplicial complex Λ. We must add the requirement that our points b_ν belong to the correct sign sets, and confirm that our additional claims hold with this adjustment.The proof in <cit.> proceeds by induction on dimension. In the case n=1, we have that L_i=a_i,0+a_i,1x_1 for each i. We may assume without loss of generality that a_i,1≠ 0 for every i. Set c_i=-a_i,0/a_i,1. Reorder and remove duplicates to assume we have points c_1<⋯<c_k∈. Then the sign sets we must respect are the points and intervals (-∞, c_1), {c_i} for each i, (c_i,c_i+1) for 1≤ i≤ k-1, and (c_k, ∞). As <cit.> describes, we choose points ξ_1<⋯<ξ_p∈ so that each of A, S_1…, S_l, and each nonempty intersection of A and a sign set of {L_1,…, L_k} can be written as a union of points and intervals {ξ_μ} and (ξ_μ, ξ_μ+1).We define a map τ:{ξ_1,…, ξ_p}→ as follows: * If k=0, then let τ(ξ_i)=i-1 for each i.Otherwise, * If ξ_μ=c_i for some i, let τ(ξ_μ)=c_i* Let ξ_0,1<⋯<ξ_0,p_0 with {ξ_0,1,…, ξ_0,p_0}={ξ_1,…, ξ_p}∩ (-∞, c_1). Then set τ(ξ_0,μ)=c_1-p_0+μ-1* Let ξ_i,1<⋯<ξ_i,p_i with {ξ_i,1,…, ξ_i,p_i}={ξ_1,…, ξ_p}∩ (c_i, c_i+1) (where 1≤ i≤ k-1). Then set τ(ξ_i,μ)=c_i+μc_i+1-c_i/p_i+1* Let ξ_k,1<⋯<ξ_k,p_k with {ξ_k,1,…, ξ_k,p_k}={ξ_1,…, ξ_p}∩ (c_k, ∞). Then set τ(ξ_k, μ)=c_k+μ In either case, we have that τ preserves order and respects containment in sign sets of {L_1,…, L_k}. Each τ(ξ_μ) is also a rational expression in the coefficients of the L_i's. We extend τ piecewise linearly to obtain a definable order preserving homeomorphism τ': →, and take our simplicial complex Λ to be τ'(A) and Φ=(τ')^-1_Λ:Λ→ A.Assume now that n>1, and that our claim holds in dimension n-1. We can safely follow the procedure in <cit.> to assume each S_i is closed. 
We denote the boundary of A by F_0, and the boundary of S_i by F_i for each 1≤ i≤ l. Choose a cell decomposition of ^n adapted to each of F_0,…, F_l and also to the sign sets of {L_1,…, L_k}. If we let p:^n→^n-1 be the projection onto the first n-1 coordinates, we have that our cell decomposition of ^n gives us a decomposition of p(A) into definably connected definable subsets X_α, and for each X_α a finite number of continuous functions ζ_α,1<…<ζ_α,m_α:X_α→. This means each F_i can be written as a union of graphs ζ_α,μ. Note that because our cell decomposition of ^n is adapted to the sign sets of {L_1,…, L_k}, each cell in A is contained in precisely one such sign set. Furthermore for each sign set B of {L_1,…, L_k}, we have that A∩ B is a union of cells in the decomposition.Consider the sets given by ⋂_i∈λ{L_i(x)=0} for various selections of λ⊂{1,…, k}. If for a given λ, P'_λ=p⋂_i∈λ{L_i(x)=0} has dimension n-2, then P'_λ is an affine hyperplane of ^n-1, and therefore is the zero set for some linear function L_λ'(x)=a_λ,0'+a_λ,1'x_1+⋯+a_λ,n-1'x_n-1:^n-1→. Furthermore, we can choose L_λ' such that each a_λ,j'∈ is contained in Q({a_i,j| i∈λ, 0≤ j≤ n}). Removing any duplicates, we obtain a collection {L_1',…, L_k''} of functions L_i'(x)=a_i,0'+a_i,1'x_1+⋯+a_i,n-1'x_n-1:^n-1→ having the property that all coefficients a_i,j' are rational expressions in the coefficients {a_i,j} of the functions L_i. Furthermore, if B is a sign set of {L_1,…,L_k}, then p(B) is a union of sign sets of {L_1',…,L_k''}. Since our cell decomposition was adapted to all sign sets B of {L_1,…, L_k}, we have that each X_α is contained in exactly one sign set of {L_1',…, L_k''}.Let (Λ_n-1, Ψ_n-1) be a triangulation of p(A)⊂^n-1 which is adapted to each X_α and respects each sign set of {L_1',…, L_k''}. Consider (as <cit.> does) the setA'={(x',x_n)∈Λ_n-1×| (Ψ_n-1(x'),x_n)∈ A}Note that the map Ψ=(Ψ_n-1, 𝕀):A'→ A is a definable homeomorphism. Let {δ_β} denote the (finite) collection of all simplices of Λ_n-1. Since Ψ_n-1 is such that for each cell X_α⊂ p(A), Ψ_n-1^-1(X_α) is a union of simplices of Λ_n-1, we have that for each δ_β there is precisely one α so that Ψ_n-1(δ_β)⊂ X_α. Consider the graphs ζ_α,1<⋯<ζ_α, m_α defined on each X_α in our cell decomposition of A. Then for a given β and 1≤μ≤ m_β:=m_α, ξ_β, μ=ζ_α,μ∘Ψ_n-1_δ_β:δ_β→ is such that Ψ∘ξ_β, μ=ζ_α,μ∘Ψ_n-1 on δ_β. We also have that on each δ_β, ξ_β, 1<⋯ <ξ_β, m_β. Let {C_ν} be the (finite) collection of all graphs ξ_β,μ:δ_β→ and bands (ξ_β, μ, ξ_β, μ+1)⊂δ_β× which are contained in A'. Note that for each ν, there exists one sign set B of {L_1,…, L_m} such that Ψ(C_ν)⊂ B.For each δ_β, let b(δ_β)=((b(δ_β))_1,…,(b(δ_β))_n-1) denote the barycenter of the simplex δ_β. For each cell C_ν in A', we will choose a point b_ν to serve as a vertex in our new simplicial complex Λ. Consider a simplex δ_β in Λ_n-1. LetI ={1≤ i≤ k| L_i(Ψ_n-1(b(δ_β)), x_n)=0has a unique solution}= {1≤ i≤ k| L_i(b(δ_β), x_n)=0 has a unique solution}For i∈ I, let c_β, i∈ be such that L_i(Ψ_n-1(b(δ_β)), c_β, i)=0, and let c_β, i'∈ be such that L_i(b(δ_β), c_β, i')=0. Because our triangulation of A' respects {L_1',…, L_k''}, we know that the map of subsets {c_β,i| i∈ I}→{c'_β,i| i∈ I} ofgiven by c_β,i↦ c'_β,i is a well-defined, order preserving bijection. 
So, let ({c_β,i| i∈ I})=k_β, and reorder and remove skipped indices and duplicates, to assume c_β, 1<⋯ <c_β, k_β (and in this same indexing c_β, 1'<⋯ < c_β, k_β').For each cell C_ν defined on δ_β, letb_ν=((b(δ_β))_1,…,(b(δ_β))_n-1, (b_ν)_n)where (b_ν)_n is assigned as follows. First, assume C_ν is the graph of the function ξ_β,μ. Then we assign (b_ν)_n according to a schema similar to the one appearing in the n=1 case: * If k_β=0, let (b_ν)_n=μ-1Otherwise, * If ξ_β,μ(b(δ_β))=c_β,i for some i, let (b_ν)_n=c_β,i'* Let ξ_β,0,1<⋯<ξ_β,0,p_0 be those graphs with ξ_β,0,μ(b(δ_β))<c_β,1. Then for C_ν corresponding to ξ_β,0,μ, set (b_ν)_n=c_β,1'-p_0+μ-1* For each 1≤ i<k_β, let ξ_β,i,1<⋯<ξ_β,i,p_i be those graphs with c_β,i<ξ_β,0,μ(b(δ_β))<c_β,i+1. Then for C_ν corresponding to ξ_β,i,μ, set (b_ν)_n=c_β,i'+μc_β,i+1'-c_β,i'/p_i+1* Let ξ_β,k_β,1<⋯<ξ_β,k_β,p_k_β be those graphs such that c_β,k_β<ξ_β,0,μ(b(δ_β)). Then for C_ν corresponding to ξ_β,k_β,μ, set (b_ν)_n=c_β, k_β'+μFinally, if C_ν is a band (ξ_β, μ, ξ_β, μ+1), set (b_ν)_n=(b_ν_0)_n+(b_ν_1)_n/2, where C_ν_0 and C_ν_1 correspond to the graphs ξ_β,μ and ξ_β,μ+1 respectively (note that since A' is bounded, all of our bands appearing among the sets C_ν are bounded). Then for any cell C_ν, we have that the point b_ν is in the same sign set of {L_1,…, L_k} as Ψ(C_ν) is. Note also that each c_β,i' is a rational expression in the coordinates of b(δ_β) and the coefficients of L_i. By our inductive hypothesis, all vertices in Λ_n-1 have coordinates which are rational expressions in the coefficients of L_1',…, L_k'', and hence of L_1,…, L_k. Hence the same applies to our barycenters b(δ_β) for δ_β∈Λ_n-1, and so by the definition of (b_ν)_n above, each b_ν has coefficients which are in Q({a_i,j})^n.For each cell C_ν, <cit.> next builds a polyhedron D_ν together with its subdivision into simplices. The procedure occurs inductively on the dimension of C_ν: if C_ν is a point, take D_ν to be {b_ν}. Otherwise, take D_ν as the cone from b_ν to the union of all D_ν' with C_ν'⊂∂(C_ν). The decomposition of D_ν into simplices comes from taking cones with vertex b_ν and base a simplex contained in D_ν' for some D_ν'⊂∂(D_ν).Taking all these simplices together, we obtain our desired simplicial complex Λ with Λ=⋃D_ν. Of primary note is the fact that the vertices of all simplices come from among the points b_ν. Hence because L_1=0,…, L_k=0 define affine subspaces of ^n, we have that for a given sign set B of {L_1,…, L_k}, a (relatively open) simplex Δ(b_ν_1,…, b_ν_m) in Λ is either contained in B (if all b_ν_i are in B) or disjoint from B (else).For each D_ν, we define a preparatory homeomorphism θ_ν: D_ν→C_ν as follows: if C_ν is a graph ξ_β, μ (see <cit.> for a justification of why we may continuously extend ξ_β, μ to the closed simplex δ_β), let θ_ν(x',x_n)=(x',ξ_β,μ(x')). If C_ν is a closed band [ξ_β,μ,ξ_β, μ+1], we map each segment ({x'}×)∩D_ν affinely to the corresponding segment {x'}× [ξ_β,μ(x'),ξ_β,μ+1(x')]. Composing with Ψ (and by convexity of our sign sets), we have that for a simplex Δ of Λ with Δ⊂D_ν, Ψ∘θ_ν(Δ)⊂ B∩ A iff Δ⊂ B.Unfortunately, we cannot simply piece together our maps Ψ∘θ_ν to obtain the desired homeomorphism Φ:Λ→ A. Instead, we construct a new Φ':Λ→ A', inducting on the dimension of D_ν. If D_ν is a point, take Φ'_ν:D_ν→C_ν to be θ_ν. Otherwise, we can construct a homeomorphism ρ_ν:∂(D_ν)→∂(D_ν) by specifying that ρ_ν_D_ν'=θ_ν^-1∘Φ'_ν' for each D_ν'⊂∂(D_ν). 
We use the conic structure of D_ν to extend ρ_ν to a homeomorphism η_ν:D_ν→D_ν, and set Φ'_ν=θ_ν∘η_ν. Now, Φ' given by Φ'|_D_ν=Φ'_ν is well defined even on the boundaries of the sets D_ν. Finally, set Φ:Λ→ A to be Ψ∘Φ'.

That the triangulation (Λ,Φ) is adapted to S_1,…,S_l and also to A∩ B for each sign set B of {L_1,…, L_k} follows as in the proof in <cit.>. We must still check that the triangulation respects all of our sign sets. Let x∈Λ, and let Δ∈Λ be the unique simplex with x∈Δ. Take D_ν to be of minimal dimension with Δ⊂D_ν. If D_ν is a point, then D_ν=Δ={x}={b_ν}, and we have already established that for a given sign set B, x=b_ν∈ B iff Φ(x)=Ψ∘θ_ν(x)∈ B. Say that dim(D_ν)>0. Then Δ=Δ(b_ν_0,…, b_ν_q-1,b_ν_q=b_ν), and there is some ν' such that D_ν'⊂∂(D_ν) and Δ'=Δ(b_ν_0,…,b_ν_q-1)⊂D_ν'. Then assuming we have established that Φ respects sign sets of {L_1,…, L_k} on D_ν', we know that ρ_ν|_D_ν'=θ_ν^-1∘Φ'_ν' carries Δ' to a subset of some B∩Λ iff Δ' is already a subset of B∩Λ. Writing any y∈D_ν (uniquely) as ty'+(1-t)b_ν for some t∈ [0,1] and the proper choice of y'∈∂(D_ν), we have that η_ν(y)=tρ_ν(y')+(1-t)b_ν. Then for our given x∈Δ, x∈ B iff b_ν_0,…,b_ν_q∈ B iff x'∈Δ'⊂ B and b_ν∈ B iff ρ_ν(x')∈ B and b_ν∈ B iff η_ν(x)∈ B. Then since Φ'_ν=θ_ν∘η_ν, we have that x∈ B iff Φ(x)=Ψ∘Φ'(x)∈ B.

Let A=B(0,1)⊂R^2 be the closed unit disk. We will illustrate how Lemma <ref> may be applied to give a triangulation of A respecting the sign sets of {L_1=y, L_2=x-y}. Throughout, we will associate objects to their notation in the proof of Lemma <ref>, so that the example may aid in the parsing of the proof.

We begin with a cell decomposition of R^2 adapted to A and to all sign sets of {L_1,L_2}. Following the most obvious choice of decomposition, we obtain a subdivision of R^1 into the points {-1}, {-√(2)/2}, {0}, {√(2)/2}, and {1} and the intervals between them. The full cell decomposition is shown in Figure <ref>. Projecting to R^1 for our induction, we must obtain a triangulation (Λ_n-1,Ψ_n-1) of [-1,1] adapted to the sets {-1}, (-1,-√(2)/2), {-√(2)/2}, (-√(2)/2,0), {0}, (0,√(2)/2), {√(2)/2}, (√(2)/2,1), and {1} and respecting the sets (-∞,0), {0}, and (0,∞).

We apply the n=1 case of the algorithm outlined in the proof of Lemma <ref>. In that notation, we have k=1, c_1=0, and ξ_1=-1, ξ_2=-√(2)/2, ξ_3=0, ξ_4=√(2)/2, ξ_5=1. Then the map τ is given by ξ_1↦ -2, ξ_2↦ -1, ξ_3↦ 0, ξ_4↦ 1, ξ_5↦ 2. Hence our triangulation (Λ_n-1,Ψ_n-1) of p(A) is such that Λ_n-1=[-2,2] and Ψ_n-1:Λ_n-1→ p(A) is induced by piecewise linearly extending the pairings given by τ.

To construct our triangulation of A itself, we first identify our vertices b_ν. For the sake of example, we will concentrate on the simplex δ_β=(0,1) of Λ_n-1. We have that Ψ_n-1(δ_β)⊂ X_α=(0,√(2)/2) in our cell decomposition of p(A) (in fact, in this case Ψ_n-1(δ_β)=X_α), and that the barycenter b(δ_β)= 1/2∈R^1. In the cell decomposition of A, we have four graphs defined on (0,√(2)/2): let ζ_1 be the lower semicircle, ζ_2 be the line y=0, ζ_3 be the line y=x, and ζ_4 be the upper semicircle. There are a total of seven cells (graphs and bands) defined on this interval and contained in A. In A, the line {x=Ψ_n-1(b(δ_β))} (that is, {x=√(2)/4}) meets {L_1=0} and {L_2=0} at y-values of c_β,1=0 and c_β,2= √(2)/4 respectively. Translating to our triangulation built upon Λ_n-1, we have that {x=b(δ_β)} meets {L_1=0} and {L_2=0} at c'_β,1=0 and c'_β,2= 1/2. In order to preserve this correspondence, we assign our vertices b_ν∈R^2 as follows.
* C_ν corresponds to the graph ζ_1: b_ν=(1/2, -1)
* C_ν corresponds to the graph ζ_2: b_ν=(1/2, 0)
* C_ν corresponds to the graph ζ_3: b_ν=(1/2, 1/2)
* C_ν corresponds to the graph ζ_4: b_ν=(1/2, 3/2)
* C_ν corresponds to the band (ζ_1,ζ_2): b_ν=(1/2, -1/2)
* C_ν corresponds to the band (ζ_2,ζ_3): b_ν=(1/2, 1/4)
* C_ν corresponds to the band (ζ_3,ζ_4): b_ν=(1/2, 1)

We use these vertices and those obtained by applying the same process to the remaining cells to build polyhedra D_ν, each of which comes with a subdivision into simplices. Taken together, these simplices give us our desired complex Λ respecting the sign sets of {L_1, L_2}, as shown in Figure <ref>. There are a total of 10 2-dimensional polyhedra, with 64 2-dimensional simplices.

Now we prove our symmetric triangulation theorem.

Let A be a closed and bounded definable subset of R^n symmetric under the action of a finite reflection group G, and let S_1,…, S_l be symmetric definable sets which are subsets of A. Then there exists an equivariant triangulation (Λ, Φ) of A adapted to S_1,…, S_l.

Assume we have fixed a collection {L_1,…,L_k} of functions L_i:R^n→R given by L_i(x)=a_i,1x_1+⋯+a_i,nx_n, so that

H=⋂_i=1^k {L_i(x)≥ 0}

is a fundamental region of R^n with respect to G. Then we may choose our triangulation so that the vertices of Λ are in Q({a_i,j| 1≤ i≤ k, 1≤ j≤ n})^n.

Let G be a finite reflection group acting on R^n, and if not already selected, let {L_1,…, L_k} be linear functions defining a fundamental region H of R^n with respect to G. We apply Lemma <ref> to the closed, bounded, definable set A∩H, the definable subsets S_1∩H,…, S_l∩H, and our collection of functions {L_1,…, L_k}. The lemma gives us a finite simplicial complex Λ_𝕀 with vertices in Q({a_i,j}) and a definable homeomorphism Φ_𝕀:Λ_𝕀→ A∩H. Note that since H and all its walls are sign sets of {L_1,…, L_k}, the triangulation (Λ_𝕀, Φ_𝕀) respects H and all its walls. In particular, Λ_𝕀⊂H.

We now define our proposed triangulation of A. Let

Λ=⋃_g∈ G{g(Δ)|Δ∈Λ_𝕀}

and let Φ:Λ→ A be given by Φ(x)=g(Φ_𝕀(x')) where x'∈Λ_𝕀 and g∈ G are such that g(x')=x. We claim that (everything is well-defined and) Λ and Φ give an equivariant triangulation of A adapted to S_1,…, S_l.

Λ is a symmetric simplicial complex: We have that if Δ∈Λ_𝕀 and g∈ G, g(Δ) remains a simplex in R^n by linearity of g, and so Λ is a collection of simplices. The symmetry property for Λ then holds by construction, so it remains to show that for Δ_1,Δ_2∈Λ, Δ_1∩Δ_2 is the closure of some simplex in Λ. Without loss of generality, assume Δ_1∈Λ_𝕀 and Δ_2=g(Δ_2') for Δ_2'∈Λ_𝕀 and g∈ G. This means Δ_1∩g(Δ_2')⊂H∩ g(H). As described in Subsection <ref>, H∩ g(H)=H_λ_g is a set of the form

H_λ_g=⋂_i∈λ_g{L_i(x)=0}∩⋂_i∈{1,…,k}∖λ_g{L_i(x)≥ 0}

which in particular is a sign set of {L_1,…, L_k}. Since (Λ_𝕀,Φ_𝕀) is hence adapted to A∩ H_λ_g, we have that Δ_1∩ H_λ_g is also the closure of a simplex of Λ_𝕀. Since points of H_λ_g are fixed under the action of g, Δ_2∩ H_λ_g=Δ_2'∩ H_λ_g is the closure of a simplex of Λ_𝕀 as well. Then Δ_1∩Δ_2=(Δ_1∩ H_λ_g)∩(Δ_2∩ H_λ_g) is an intersection of closures of simplices of Λ_𝕀, and hence a common face of Δ_1 and Δ_2.

Vertices of Λ are in Q({a_i,j})^n: We will show that, for g∈ G and any point x in Q({a_i,j})^n, g(x)∈Q({a_i,j})^n. Since any element g of G can be written as a product of those elements g_1,…, g_k, where the action of g_i is reflection through the linear hyperplane L_i=0, we may assume g=g_i for some 1≤ i≤ k.
The reflection of x through L_i=0 is given by

g_i(x)=x-2((x,r_i)/(r_i,r_i))r_i

where we take r_i to be the vector (a_i,1,…,a_i,n) (which is perpendicular to the hyperplane L_i=0). Then since we have assumed that we are using the standard inner product on R^n, the coordinates of g_i(x) are all also in Q({a_i,j}), as desired.

Φ is well-defined: Let x∈Λ and say that x=g_1(x_1)=g_2(x_2) for x_1,x_2∈Λ_𝕀 and g_1,g_2∈ G. We must show that g_1(Φ_𝕀(x_1))=g_2(Φ_𝕀(x_2)). Because x_1=g_1^-1(g_2(x_2)) with x_1,x_2∈Λ_𝕀, we have that x_1∈Λ_𝕀∩ g_1^-1∘ g_2(Λ_𝕀) ⊂ H_λ_g_1^-1g_2. Since g_1^-1∘ g_2 fixes the points of H_λ_g_1^-1g_2, we obtain that x_1=x_2. Since Φ_𝕀 carries H_λ_g_1^-1g_2 to itself, we have that Φ_𝕀(x_1)∈ H_λ_g_1^-1g_2, as is g_1^-1∘ g_2(Φ_𝕀(x_1)), and so Φ_𝕀(x_1)=g_1^-1∘ g_2(Φ_𝕀(x_1)), i.e. we have obtained that g_1(Φ_𝕀(x_1))=g_2(Φ_𝕀(x_2)). That the image of Φ is A follows from the symmetry of A={g(x)|x∈ A∩H and g∈ G}.

Φ is a homeomorphism: We have already established that the surjectivity of Φ follows from the surjectivity of Φ_𝕀 onto A∩H. To show injectivity, say Φ(x_1)=Φ(x_2) for some x_1,x_2∈Λ, i.e. g_1(Φ_𝕀(x_1'))=g_2(Φ_𝕀(x_2')) for some x_1',x_2'∈Λ_𝕀 and g_1,g_2∈ G. Then since Φ_𝕀(x_1')=g_1^-1∘ g_2(Φ_𝕀(x_2')), both are in H_λ_g_1^-1g_2, and so Φ_𝕀(x_1')=Φ_𝕀(x_2'). By the injectivity of Φ_𝕀, this means x_1'=x_2'. Finally, since x_1'=x_2' is in H_λ_g_1^-1g_2, x_1'=g_1^-1∘ g_2(x_2'), so we have x_1=g_1(x_1')=g_2(x_2')=x_2. Continuity of Φ follows from continuity on g(Λ_𝕀) for each g∈ G and agreement on the boundaries.

Φ is equivariant under the action of G by construction. The preimage under Φ of each set S_i among S_1,…, S_l is the union of the simplices g(Δ) for g∈ G and Δ∈Λ_𝕀 such that Δ⊂Φ_𝕀^-1(S_i∩H). Hence, (Λ,Φ) gives our desired triangulation.

§.§ Triangulation of Definable Functions

In the proofs in <cit.>, to ensure that the triangulation we use is properly compatible with the family of sets {S_δ}_δ>0, the authors invoke the triangulation of definable functions. The original theorem from <cit.> is below. We will proceed to prove a version for functions symmetric relative to the action of some finite reflection group G.

Let A be a closed and bounded definable subset of R^n and f:A→R a continuous definable function. Then there exists a finite simplicial complex Λ in R^n+1 and a definable homeomorphism ρ: Λ→ A such that f∘ρ is an affine function on each simplex of Λ. Moreover, given S_1,…, S_l definable subsets of A, we may choose the triangulation ρ:Λ→ A to be adapted to the S_i.

Following <cit.>, we will prove a more general result concerning the triangulation of symmetric definable subsets of R×R^n, which we will apply to what is essentially the graph of our function f. Our procedure is similar to the one used for symmetric definable sets. Let π:R×R^n→R denote projection on the first coordinate.

Let A be a closed, bounded, definable subset of R×R^n, and let S_1,…, S_l be definable subsets of A. Let {L_1,…, L_k} be a collection of functions with L_i:R×R^n→R given by L_i:(y,x_1,…, x_n)↦ a_i,0+a_i,1x_1+⋯+a_i,nx_n. Then there exists a triangulation (Λ,Φ) of A adapted to S_1,…, S_l, respecting all sign sets of {L_1,…, L_k}, and having vertices of Λ in Q×Q({a_i,j| 1≤ i≤ k, 0≤ j≤ n})^n, as well as a definable homeomorphism τ:R→R, such that τ∘π∘Φ=π|_Λ.

We induct on n. In the n=0 case, since we are assuming all functions in our collection {L_1,…, L_k} to be independent of the first coordinate, we have nothing more to prove than <cit.> does, and so may, as there, choose a finite partition of R adapted to A and S_1,…, S_l.
Letting x_1<⋯< x_p be the points in R defining the partition, we take Λ to be the points i∈{1,…, p} and intervals [i,i+1] such that x_i∈ A or [x_i,x_i+1]⊂ A. We let τ(x_i)=i and extend piecewise affinely to a map τ: R→R.

Assume n>0 and that the statement holds for n-1. We may again assume all S_i are closed (as described in the proof of <cit.> Theorem 4.4). We set F'_0 to be the boundary of A, F'_i to be the boundary of S_i for each i, and F'=F'_0∪ F'_1∪…∪ F'_l. Since F' is definable with dimension at most n, we have finitely many c∈R for which the set {x∈R^n| (c,x)∈ F'} has dimension n. Let C be the set of all such c. For reasons of dimension, the proof in <cit.> considers sets F_i given by taking the union of F'_i∖(C×R^n) with the boundary of F'_i∩ (C×R^n) in C×R^n. We choose a cell decomposition of R×R^n adapted to F_0,…, F_l, the sets {c}×R^n for each c∈ C, and all sign sets of {L_1,…, L_k}.

Let p:R×R^n→R×R^n-1 be the projection on the first n coordinates. Our cell decomposition partitions p(A) into definably connected subsets X_α. By our inductive hypothesis, we have a triangulation (Λ_n-1, Ψ_n-1) of p(A)⊂R×R^n-1 which is adapted to each X_α, respects sign sets of {L_1',…, L_m''} (where this collection is defined in a manner analogous to the proof of Lemma <ref>), and with vertices of Λ_n-1 in Q×Q({a_i,j'| 1≤ i≤ m', 0≤ j≤ n-1})^n-1= Q×Q({a_i,j| 1≤ i≤ k, 0≤ j≤ n})^n-1. We also have a map τ':R→R, having the property that τ'∘π_n,1∘Ψ_n-1=π_n,1|_Λ_n-1 (where π_n,1 is the projection R×R^n-1→R on the first coordinate). Now, if we follow the remaining steps in the proof of Lemma <ref>, we obtain a triangulation (Λ,Φ) of A which is adapted to S_1,…, S_l, respects sign sets of {L_1,…, L_k}, and has vertex coordinates in Q×Q({a_i,j})^n. Λ and Φ are such that p∘Φ=Ψ_n-1∘ p|_Λ. Taking τ=τ', this means that our property τ∘π∘Φ=π|_Λ holds.

Let A be a closed, bounded, definable subset of R×R^n symmetric under the action of some finite reflection group G on R^n extended to R×R^n, and let S_1,…, S_l be definable symmetric subsets of A. Then there exists an equivariant triangulation (Λ, Φ) of A adapted to S_1,…, S_l and a definable homeomorphism τ:R→R having the property that τ∘π∘Φ=π|_Λ.

Assume we have fixed a collection {L_1,…,L_k} of functions L_i:R^n→R given by L_i(x)=a_i,1x_1+⋯+a_i,nx_n, so that

H=⋂_i=1^k {L_i(x)>0}

is a fundamental region of R^n with respect to G. Then we may choose our triangulation so that the vertices of Λ are in Q×Q({a_i,j| 1≤ i≤ k, 1≤ j≤ n})^n.

This is analogous to the proof of Theorem <ref>. Let H be a fundamental region of R^n with respect to G. If not already specified, we take {L_1,…, L_k} with L_i(x)=a_i,1x_1+⋯+a_i,nx_n to be a collection of functions defining H. For each i, let L̃_i:R×R^n→R with L̃_i(y,x)=L_i(x). We apply Lemma <ref> to A∩ (R×H), the subsets S_1∩ (R×H),…, S_l∩(R×H), and the linear functions {L̃_1,…, L̃_k}. We obtain a triangulation (Λ_𝕀,Φ_𝕀) with Λ_𝕀⊂R×H and coordinates of vertices of Λ_𝕀 in Q×Q({a_i,j})^n, and Φ_𝕀:Λ_𝕀→ A∩ (R×H), and also obtain a map τ:R→R, having the property that τ∘π∘Φ_𝕀=π|_Λ_𝕀. Again, we take

Λ=⋃_g∈ G{g(Δ)|Δ∈Λ_𝕀}

and Φ:Λ→ A given by Φ(x)=g(Φ_𝕀(x')), where g∈ G and x'∈Λ_𝕀 are such that x=g(x'). As in the proof of Theorem <ref>, this provides a symmetric triangulation of A adapted to S_1,…, S_l, with all vertices of Λ in Q×Q({a_i,j})^n.

It remains to show that τ∘π∘Φ=π|_Λ. Take x∈Λ. Then we have that x=g(x') for some g∈ G and x'∈Λ_𝕀. Note that by the definition of our action of G extended to R×R^n, we have that π is symmetric relative to this action.
Then

τ∘π∘Φ(x) = τ∘π(g(Φ_𝕀(x'))) = τ∘π(Φ_𝕀(x')) = π(x') = π(g(x')) = π(x)

as desired.

Our theorem now follows, applying Lemma <ref> to A'={(f(x),x)|x∈ A}.

Let A be a closed, bounded, definable subset of R^n symmetric under the action of a finite reflection group G, and let f:A→R be a continuous definable function symmetric relative to the action of G. Then there exists a finite symmetric simplicial complex Λ in R^n+1 and a definable equivariant homeomorphism ρ: Λ→ A such that f∘ρ is an affine function on each simplex of Λ. Moreover, given S_1,…, S_l definable symmetric subsets of A, we may choose the triangulation ρ: Λ→ A to be adapted to the sets S_i.

Assume we have fixed a collection {L_1,…,L_k} of functions L_i:R^n→R given by L_i(x)=a_i,1x_1+⋯+a_i,nx_n, so that

H=⋂_i=1^k {L_i(x)>0}

is a fundamental region of R^n with respect to G. Then we may choose our simplicial complex Λ so that all vertices of Λ are in Q×Q({a_i,j| 1≤ i≤ k, 1≤ j≤ n})^n.

Consider the set A'={(f(x), x)∈R×R^n|x∈ A}. Since f is symmetric relative to the action of G on R^n, A' is a symmetric set relative to the action induced by G on R×R^n, and projection on the last n coordinates gives an equivariant definable homeomorphism p:A'→ A. By Lemma <ref>, we have a symmetric triangulation (Λ, Φ) of A' which is adapted to the sets S_1',…, S_l' (with S_i'={(f(x),x)|x∈ S_i}) and has vertices in Q×Q({a_i,j})^n, and a definable function τ:R→R such that τ∘π∘Φ=π|_Λ (where π:R×R^n→R is projection on the first coordinate). Applying f to x∈ A is equivalent to applying π to (f(x),x)∈ A', and so taking ρ:Λ→ A to be p∘Φ, we have that f∘ρ=τ^-1∘π|_Λ, which is an affine map on each simplex of Λ by construction.

§.§ Equivariance and Hardt Triviality

We will also want an equivariant version of Hardt Triviality for o-minimal sets. Because the proof uses the same sort of argument as equivariant triangulation, we include it within this section.

Let X⊂R^n and A⊂R^m be definable sets and f:X→ A a continuous definable function. For A'⊂ A, we say that f is definably trivial over A' if for any a∈ A' there is a definable homeomorphism h:f^-1(A')→ f^-1(a)× A' such that π∘ h=f|_f^-1(A') (where π:f^-1(a)× A'→ A' is the projection on the second coordinate).

For X_1,…, X_l definable subsets of X, we say that the definable trivialization h respects X_1,…, X_l if h maps each X_j∩ f^-1(A') homeomorphically to (X_j∩ f^-1(a))× A'.

Let X⊂R^n and A⊂R^m be definable sets, f:X→ A a continuous definable function, and X_1, …, X_l definable subsets of X. Then we may partition A into a finite number of definable subsets A_i such that f is definably trivial over each A_i in a manner that respects each of X_1,…, X_l.

We would like to show that, if f is a symmetric function, then the homeomorphisms guaranteed by the trivialization are equivariant.

Let G be a finite reflection group acting on R^n, and let X⊂R^n and X_1,…, X_l⊂ X be symmetric relative to G. Say that A⊂R^m is definable and f:X→ A is a continuous, definable function symmetric relative to G. Then we may partition A into a finite number of definable subsets A_i such that for each i there is an equivariant homeomorphism h_i:f^-1(A_i)→ f^-1(a_i)× A_i giving a definable trivialization of f over A_i (where a_i is any element of A_i, and the action on f^-1(a_i)× A_i is given by g(x,y)=(g(x),y)). Furthermore, each trivialization h_i respects the subsets X_1,…, X_l.

Let H be a fundamental region of R^n with respect to G.
We apply Theorem <ref> to the restriction f_H:H∩ X→ A of f, asking that our definable trivializations respect the sets H∩ X_1,…, H∩ X_l as well as each H_λ∩ X for H_λ an intersection of walls of H. We obtain a finite partition {A_i} of A, as well as homeomorphisms h_𝕀,i:f^-1_H(A_i)→ f^-1_H(a_i)× A_i which respect both the portions of each X_j within H and the intersections of X with the walls of H.

Since f is a symmetric function, we know that for a_i∈ A_i,

f^-1(a_i)=⋃_g∈ G g(f^-1_H(a_i))

and that the same holds true for f^-1(A_i). We define h_i:f^-1(A_i)→ f^-1(a_i)× A_i by h_i(x)=g(h_𝕀,i(x')), where x'∈ f^-1(A_i)∩H and g∈ G are such that g(x')=x. We must show that h_i is a definable trivialization of f.

We may use arguments similar to those of Theorem <ref> to establish that h_i is a homeomorphism from f^-1(A_i) to f^-1(a_i)× A_i. The map h_i is equivariant by construction, and h_i sends each X_j∩ f^-1(A_i) homeomorphically to (X_j∩ f^-1(a_i))× A_i by construction and the symmetry of each X_j. Say that x∈ f^-1(A_i) with f(x)=y. Then if x=g(x') for some g∈ G and x'∈ f^-1(A_i)∩H, by symmetry we know that f(x')=y as well, and so h_𝕀,i(x')=(z,y) for some z∈ f^-1(a_i). Then h_i(x)=g(z,y)=(g(z),y), so h_i is indeed a definable trivialization of f over A_i.

§ EQUIVARIANT VERSIONS OF SOME TOPOLOGICAL RESULTS

We address some topological aspects of the construction in this section. Specifically, for X a regular CW complex whose cells are convex polyhedra (so, in particular, for X a simplicial complex), we explicitly describe certain maps between X and the order complex of the face poset of X, in aid of demonstrating equivariance. We also establish an equivariant version of a Nerve Theorem due to Björner.

Let P be a partially ordered set. The order complex of P, Δ(P), is the abstract simplicial complex with vertex set the elements of P and simplices given by finite chains x_0<⋯<x_d of elements of P.

The order complex of the face poset of a simplicial complex coincides with the first barycentric subdivision of that simplicial complex. For a regular CW complex, then, taking the order complex of the face poset serves as a generalized barycentric subdivision:

Let X be a regular CW complex. Then Δ(X) is homeomorphic to X.

The proof involves choosing a point in the interior of each cell σ to act as the barycenter of the cell, and sending the vertex of Δ(X) corresponding to σ to that barycenter. Elsewhere, the homeomorphism arises from the fact that the characteristic maps used to assemble X as a regular CW complex endow each cell with the structure of a cone with the barycenter as vertex and boundary as base. Unfortunately, this construction as it stands is not specific enough to ensure equivariance without some sort of equivariance condition on the characteristic maps. However, if we know that each cell of X is, for example, a convex polyhedron, we may make our choices in a manner sufficiently canonical to ensure equivariance. Note that by polyhedron in this context we mean a bounded subset of R^n obtained by intersecting a finite number of affine half-spaces.

Let X⊂R^n be a regular CW complex in which each cell is a convex polyhedron. Then we will refer to the homeomorphism Ψ described below as the centroidal homeomorphism Ψ:Δ(X)→ X.

We define Ψ inductively. Let X^k denote the k-skeleton of X. In the k=0 case, let Ψ^0: Δ(X^0)→ X^0 send the vertex v_σ of Δ(X^0) corresponding to the 0-dimensional cell σ={x} to the point x∈ X.

Now say k≥ 1. Assume we have defined Ψ^k-1:Δ(X^k-1)→ X^k-1 and established that Ψ^k-1 is a homeomorphism.
We may identify Δ(X^k-1) with the subset of Δ(X^k) consisting of simplices whose vertices correspond to cells of X of dimension less than k; on this subset, let Ψ^k agree with Ψ^k-1. Now, assume σ is a cell of dimension k, and let v_σ be the vertex of Δ(X^k) corresponding to σ. Define Ψ^k(v_σ)=b_σ, where b_σ is the centroid of the cell σ (which is in the interior of σ by convexity). Say x∈Δ(X^k). Then we may uniquely write x=tv_σ+(1-t)x', where σ is a cell of X of dimension k, x' is in the realization of some simplex of Δ(X^k-1), and t∈ [0,1]. Set Ψ^k(x)=tb_σ+(1-t)Ψ^k-1(x'). Since each cell σ of X can be seen as a cone with base ∂σ and vertex b_σ, this gives a well-defined homeomorphism to X^k. Since X=⋃_k X^k, we inductively obtain our desired homeomorphism Ψ:Δ(X)→ X.

When we refer to the barycentric subdivision of a polyhedral CW complex X⊂R^n, we will mean the simplicial complex in R^n whose geometric realization is equal as a set to X and whose cell structure is inherited from this map.

Let X⊂R^n be a regular CW complex whose cells are all convex polyhedra. If G is a group acting linearly on X such that X is symmetric as a CW complex under the action of G, then the action of G on X induces an action of G on Δ(X) under which Δ(X) is symmetric, and the homeomorphism Ψ:Δ(X)→ X of Definition <ref> is equivariant.

We will also want to consider subsets of simplicial or polyhedral CW complexes which are not full subcomplexes. Let X be a polyhedral CW complex and let Y be a union of cells of X. Gabrielov and Vorobjov in <cit.> Remark 2.12 describe how we may consider only those cells of the barycentric subdivision of X which are contained with their closures in Y. This gives us a subcomplex of the barycentric subdivision of X which is homotopy equivalent to Y, and which may replace Y in our applications. Though this procedure is relatively standard, we will describe the contraction explicitly so that we may ensure equivariance.

Let X⊂R^n be a regular CW complex whose cells are all convex polyhedra, and let Y be a union of cells of X. Let X' denote the barycentric subdivision of X, and let Y' denote the set of those simplices of X' which are contained in Y. Define the barycentric retraction of Y to be

bret(Y)={Δ∈ X'| the closure of Δ is contained in Y}

Say Y is a union of cells of some regular CW complex X⊂R^n whose cells are all convex polyhedra. Then bret(Y) is homotopy equivalent to Y.

We construct a homotopy h_bret:[0,1]× Y→ Y, which we will term the barycentric retracting map.

Observe that bret(Y)⊂ Y is a subcomplex of X', whose vertex set is precisely those 0-simplices of X' corresponding to cells of Y. In fact, the simplices of bret(Y) correspond precisely to flags of cells in Y. This means that Y=∅ iff bret(Y)=∅.

Let Δ=Δ(v_0,…, v_d) be a simplex of Y', and define the set J_Δ={v_i_0,…,v_i_d'}={v_i∈{v_0,…,v_d}| v_i∈bret(Y)}. Then if we let Δ'=Δ(v_i_0,…, v_i_d'), it is clear from the correspondence of simplices of bret(Y) to flags of cells in Y that J_Δ is nonempty and that Δ∩bret(Y)=Δ'. Say that x∈ Y with x∈Δ. Then we may write x uniquely as x=∑_i=0^d t_v_i v_i, where 0≤ t_v_i≤ 1 for each i and ∑ t_v_i = 1. Then, for t∈ [0,1], let

h_bret(t,x)=t ∑_v_i∉J_Δ t_v_i v_i + ((1-t∑_v_i∉J_Δ t_v_i)/∑_v_i∈ J_Δ t_v_i)∑_v_i∈ J_Δ t_v_i v_i

From this, we obtain our desired map h_bret:[0,1]× Y→Y, noting that the definition of h_bret agrees on the boundaries of simplices. Observe that h_bret(0,-):Y→bret(Y) serves as homotopy inverse to the inclusion bret(Y)↪ Y.

Let X⊂R^n be a regular CW complex whose cells are all convex polyhedra, and let Y be a union of cells of X.
If X is symmetric as a CW complex under the action of some group G acting linearly on X and Y is also symmetric relative to G, then h_bret:[0,1]× Y→ Y is equivariant (where the action of G on [0,1]× Y is given by g(t,x)=(t,g(x)) for all g∈ G).

From the symmetry of X and Y, we have that X', Y', and bret(Y) are all symmetric. Hence, for Δ∈ Y', g(J_Δ)=J_g(Δ). Equivariance of h_bret then follows from the linearity of the action of G.

Even when Y is not a full subcomplex of X, we may make sense of the simplicial complex Δ(Y). As described in Definition <ref>, we may identify Δ(Y) with bret(Y). Furthermore, if X and Y are both symmetric, then the homeomorphism Δ(Y)→bret(Y) is equivariant.

We conclude by addressing equivariance in Björner's Nerve Theorem, which will be used in Section <ref>.

Let {X_i}_i∈ I be a family of sets. The nerve of {X_i}_i∈ I is the abstract simplicial complex N having vertex set I and simplices given by the finite subsets σ of I such that

⋂_i∈σ X_i≠∅

A variety of theorems exist which relate a space covered by a family of sets to the nerve of that cover. Björner's Nerve Lemma has the advantage of only requiring the triviality of sufficiently many homotopy groups of intersections of sets in the cover (with a slightly weaker conclusion in exchange).

Let X be a regular connected CW complex and {X_i}_i∈ I a family of subcomplexes with X=⋃_i∈ I X_i. Let N be the nerve of {X_i}.

* Say that every finite nonempty intersection X_i_1∩…∩ X_i_t is (k-t+1)-connected. Then there is a map f:X→N such that the induced homomorphism f_#,j:π_j(X)→π_j(N) is an isomorphism for all j≤ k and an epimorphism for j=k+1.
* If every finite nonempty intersection X_i_1∩…∩ X_i_t is contractible, then f gives a homotopy equivalence X≃N.

A version of Björner's nerve lemma sufficiently equivariant for our purposes reads thus:

Let X be a regular connected CW complex which is a subset of a real vector space, and whose cells are all convex polyhedra. Let G act linearly on X in such a way that X is symmetric as a CW complex under the action of G. Let {X_i}_i∈ I be a family of subcomplexes with X=⋃_i∈ IX_i, and say that for each g∈ G and i∈ I, g(X_i)∈{X_i}_i∈ I. Then if N is the nerve of {X_i}_i∈ I, we have the following:

* Say every finite nonempty intersection X_i_1∩…∩ X_i_t is (k-t+1)-connected. Then there is an equivariant map f:X→N such that the induced (equivariant) homomorphism f_#,j:π_j(X,*)→π_j(N,f(*)) is an isomorphism for all j≤ k and an epimorphism for j=k+1.
* If every finite nonempty intersection X_i_1∩…∩ X_i_t is contractible, then f gives a homotopy equivalence X≃N.

(i): We claim that we may construct an equivariant map in the manner described in the proof in <cit.>. By Proposition <ref>, we have an equivariant homeomorphism X→Δ(X). Let φ: X→𝒩 be given by φ(σ)={i∈ I|σ∈ X_i} for each cell σ of X. This is an order-reversing map of posets. It is also equivariant, since for a given cell σ, {g(i)| i∈ I with σ∈ X_i}={i∈ I| g(σ)∈ X_i}. Then φ induces an equivariant continuous function Δ(X)→Δ(𝒩). Since Remark <ref> gives an equivariant homeomorphism Δ(𝒩)→𝒩, composing gives us an equivariant continuous function

f: X→Δ(X)→Δ(𝒩)→𝒩

The equivariance of f gives the equivariance of each induced f_#,j:π_j(X)→π_j(𝒩).

Since our equivariant map f is the map from the original proof of the nerve theorem, part (ii) follows automatically. Note, though, that while following the proof of (ii) one would obtain a map g:N→ X serving as a homotopy inverse to f, we have not guaranteed the equivariance of this or of any map in the opposite direction.
Thus there is more work to be done before we may term this a true equivariant version of Björner's nerve theorem.

The barycentric retracting map described in Definition <ref> allows us to apply this version of the nerve theorem to coverings by sets open in our larger space. This version of the nerve theorem is used in <cit.>.

Let X be a regular connected CW complex. Let {Y_i}_i∈ I be a family of subsets of X. Assume each Y_i may be written as a union of cells of X, and that each Y_i is open in X. Let Y=⋃_i∈ I Y_i, and let N_Y be the nerve of this family.

* Say that every finite nonempty intersection Y_i_1∩…∩ Y_i_t is (k-t+1)-connected. Then there is a map f:Y→N_Y such that the induced homomorphism f_#,j:π_j(Y)→π_j(N_Y) is an isomorphism for all j≤ k and an epimorphism for j=k+1.
* If every finite nonempty intersection Y_i_1∩…∩ Y_i_t is contractible, then f gives a homotopy equivalence Y≃N_Y.

From h_bret in <ref>, we obtain a map f_1:Y→bret(Y) which induces a homotopy equivalence. Observe that bret(Y) is itself a regular CW complex, and that each bret(Y_i) is a subcomplex of bret(Y). It is clear that bret(Y)⊃⋃_i∈ I bret(Y_i). To show the opposite inclusion, say that Δ=Δ(v_0,…, v_d)∈bret(Y). This means that the closure of Δ is contained in Y, i.e. that the vertices of Δ, suitably ordered, correspond to a flag σ_0,…, σ_d of cells of X, all of which are contained in Y. Assume σ_d has lowest dimension amongst the cells in this chain, and say that i∈ I is such that σ_d⊂ Y_i. Since Y_i is open in X, this means that each of σ_0,…, σ_d is contained in Y_i, and therefore Δ∈bret(Y_i). Hence, as desired, bret(Y)=⋃_i∈ I bret(Y_i).

For Y_i_1,…,Y_i_t∈{Y_i}_i∈ I, we can see that

⋂_j=1^t bret(Y_i_j) = ⋂_j=1^t{σ∈ X'| the closure of σ is contained in Y_i_j} = {σ∈ X'| the closure of σ is contained in ⋂_j=1^t Y_i_j} = bret(⋂_j=1^t Y_i_j)

If N_bret(Y) denotes the nerve of the covering of bret(Y) by the family {bret(Y_i)}_i∈ I, this means that N_Y=N_bret(Y). Furthermore, since via h_bret we have that ⋂_j=1^t Y_i_j is homotopy equivalent to bret(⋂_j=1^t Y_i_j)=⋂_j=1^t bret(Y_i_j), we may conclude that for any k, a finite intersection Y_i_1∩⋯∩ Y_i_t is k-connected iff bret(Y_i_1)∩⋯∩bret(Y_i_t) is.

Let f_2:bret(Y)→N_bret(Y) be as given by the standard Nerve Theorem (Theorem <ref>). Then composing f_1:Y→bret(Y) with f_2:bret(Y)→N_bret(Y)=N_Y, we obtain our map f:Y→N_Y with the desired properties.

Let X, Y, and {Y_i}_i∈ I be as in Theorem <ref>. Assume further that X is a subset of a real vector space and all cells of X are convex polyhedra. Let the group G act linearly on our space in such a way that X is symmetric as a CW complex and {Y_i} has the property that for each g∈ G and i∈ I, g(Y_i)∈{Y_i}_i∈ I. Let N_Y denote the nerve of the covering of Y by {Y_i}_i∈ I. Then the map f:Y→N_Y of Theorem <ref> is equivariant.

This follows from Proposition <ref> and Theorem <ref>.

We will want one more equivariant Nerve Theorem. The statement we need was proved by Hess and Hirsch in <cit.>. This version is stated for X a simplicial complex covered by a family for which nonempty intersections are contractible, but from it we may obtain equivariant maps in both directions.

For G a group and σ a simplex in some simplicial complex X which is symmetric relative to G, let G_σ be the subgroup of G given by G_σ={g∈ G| g(σ)=σ}.

Let X be a simplicial complex symmetric relative to the action of G, and let {X_i}_i∈ I be a family of subcomplexes with X=⋃_i∈ IX_i and with g(X_i)∈{X_i}_i∈ I for each g∈ G and i∈ I. Say that every nonempty finite intersection ⋂_i∈σX_i, for σ⊂ I, is G_σ-contractible. Then if N is the nerve of {X_i}_i∈ I, we have that X and N are G-homotopy equivalent.
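As a simple illustration of the equivariant nerve statements above (an example of ours, chosen for concreteness and not needed in the sequel), let X⊂R^2 be the boundary of the square [-1,1]^2, a regular CW complex whose cells (four vertices and four open edges) are convex polyhedra, and let G be the reflection group generated by the reflections across the two coordinate axes. Cover X by the four closed edges X_1,…, X_4, each a subcomplex; then g(X_i)∈{X_1,…,X_4} for every g∈ G. Adjacent edges intersect in a single vertex (which is contractible) and opposite edges are disjoint, so the nerve N is a 4-cycle: four vertices joined in a square. Part (ii) of the equivariant nerve lemma then gives an equivariant map f:X→N inducing a homotopy equivalence, as expected, since both X and N are topological circles; G acts on N by permuting the vertices of N exactly as it permutes the edges X_i.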
§ EQUIVARIANCE IN THE GABRIELOV-VOROBJOV CONSTRUCTION

In what follows, we assume that G is a finite reflection group acting on R^n. Let S be the definable set we intend to approximate. Assume we have also chosen a compact definable set A⊂R^n with S⊂ A and families {S_δ}_δ>0 and {S_δ,ε}_δ,ε>0 representing S in A as described in Section <ref>. The conclusions of Subsections <ref> and <ref> (which concern relations between S and an intermediate set V) apply both to the definable and constructible cases. The distinctions between these two cases are addressed in Subsections <ref> and <ref>, which describe relations between V and our approximating set T.

§.§ Symmetric construction for V

Gabrielov and Vorobjov's argument in <cit.> first uses a triangulation of A adapted to S to construct, for a given integer m>0 and sequence 0<ε_0,δ_0,…, ε_m,δ_m<1, an intermediate set V=V(ε_0,δ_0,…,ε_m,δ_m) and homomorphisms τ_#,k: π_k(V)→π_k(S) and τ_*,k:H_k(V)→ H_k(S). The set V echoes the behavior of T relative to the parameters ε_i and δ_i. However, it also has a cover that allows one to liken it to the triangulation of S well enough to establish the aforementioned maps. We show that, if we start with a symmetric triangulation, the set V is also symmetric and that there exists an equivariant map τ:V→ S which induces τ_# and τ_*.

Let (Λ,Φ) be a symmetric triangulation of A adapted to S. We replace S by Φ^-1(S) to assume that S is a union of simplices of Λ. We will primarily work within the first barycentric subdivision of Λ; abusing notation, from this point on we will generally write Λ for this subdivision. If Δ(b_0,…,b_p) corresponds to a simplex in Λ, we have that for each b_i there is a unique simplex Δ_b_i of the original complex such that b_i is the barycenter of Δ_b_i. We will assume when we write Δ(b_0,…,b_p) that b_0,…,b_p are ordered so that dim(Δ_b_0)>⋯>dim(Δ_b_p). By a further slight abuse of notation, we also write S for the set of all simplices of Λ which belong to S.

Gabrielov and Vorobjov in <cit.> construct V as a union of sets K_B(δ_i,ε_i) defined for pairs of simplices K∈Λ and B∈S. The set K_B(δ_i,ε_i) depends on something that Gabrielov and Vorobjov call the core of the simplex B=B(b_0,…, b_p). This in some sense is meant to capture which faces of B continue to intersect the preimages of the sets S_δ as δ shrinks to 0. To properly define this notion, though, we must reiterate some technical terminology from <cit.>.

We say S is marked if for each pair of simplices (Δ',Δ) in S with Δ' a subsimplex of Δ, we have designated Δ' as either a hard or soft subsimplex of Δ. For a pair (Δ',Δ) with Δ' not in S, we always designate Δ' as a soft subsimplex of Δ.

If A and S are symmetric and (Λ,Φ) is an equivariant triangulation of A adapted to S, we say S is symmetrically marked if S is marked in such a way that for each g∈ G, (g(Δ'),g(Δ)) has the same hard/soft designation as (Δ',Δ). Since Δ' is in S iff g(Δ') is also in S by our choice of triangulation, this stipulation does not interfere with the property that simplices Δ' not contained in S are always designated as soft subsimplices.

We will specify separate hard-soft relations for our triangulation depending on whether we are in the separable case or not (see Subsections <ref> and <ref>). For the moment, it is enough to assume that S is symmetrically marked.

For a simplex B=B(b_0,…,b_p) of Λ contained in S, the core of B, denoted C(B), is the maximal subset {b_0,…,b_p'} of {b_0,…,b_p} so that for 0≤ν≤ p', we have Δ_b_ν is a hard subsimplex of Δ_b_μ for every 0≤μ<ν. Note that we always have b_0∈ C(B). We will establish the convention that if B is not in S, C(B)=∅.
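For instance (a small illustration of the definition, not itself part of the construction), suppose B=B(b_0,b_1,b_2) is a simplex of the subdivision contained in S, so that Δ_b_2 is a face of Δ_b_1, which is in turn a face of Δ_b_0. If Δ_b_1 is hard in Δ_b_0 but Δ_b_2 is soft in Δ_b_0, then C(B)={b_0,b_1}: the maximality in the definition stops at the first index ν for which some hardness condition fails. If instead Δ_b_1 is already soft in Δ_b_0, then C(B)={b_0}, regardless of the designations involving Δ_b_2.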
Let B=B(b_0,…,b_p) be a simplex in S and K=K(c_0,…,c_q) a simplex in Λ with B⊂K. Let I={b_0,…,b_p} and J={c_0,…,c_q}. Then for 0<δ<1 and 0<ε<1, define

K_B(δ,ε):={∑_c_ν∈ J t_c_ν c_ν∈ K(c_0,…,c_q) | ∑_b_ν∈ C(B) t_b_ν >δ, ∑_b_ν∈ I t_b_ν>1-ε, and ∀ b_ν∈ I ∀ c_μ∈ (J∖ I), t_b_ν>t_c_μ}

Given B a simplex in S, let S_B denote the set of all simplices B' of Λ with B'⊂B∩S. Fix m>0 and a sequence 0<ε_0,δ_0,ε_1,δ_1,…, ε_m,δ_m<1. Then for B a simplex in S, let

V_B=⋃_B'∈ S_B ⋃_K⊃ B' ⋃_i=0^m K_B'(δ_i,ε_i)

(that is, for each B'∈ S_B the union is taken over all simplices K in Λ with B'⊂K). Let

V=⋃_B∈S V_B

Say that S and A are symmetric under the action of G, (Λ,Φ) is an equivariant triangulation of A adapted to S, and S is symmetrically marked. Then the family {V_B}_B∈S is symmetric under the induced action of G, and hence the set V is symmetric.

This follows more or less immediately from the construction. By symmetry of Λ, S, and our marking and by linearity of the action of G, we can see that g(K_B'(δ_i,ε_i))=g(K)_g(B')(δ_i,ε_i), and hence g(V_B)=V_g(B).

Some properties of the sets K_B(δ,ε), V_B and V are worth noting here.

Let B=B(b_0,…, b_p) be a simplex in S and K=K(c_0,…, c_q) a simplex in Λ with B⊂K. Then for 0<δ,ε,δ',ε'<1 we have

K_B(δ,ε)∪ K_B(δ',ε') =K_B(min{δ,δ'}, max{ε, ε'})
K_B(δ,ε)∩ K_B(δ',ε') =K_B(max{δ,δ'}, min{ε, ε'})

This follows immediately from the definition. The second line appears as <cit.> Lemma 4.1.

For any pair of simplices B_1 and B_2 in S, one of the following holds:

* V_B_1∩ V_B_2=∅ (⇔B_1∩B_2∩S= ∅)
* V_B_1∩ V_B_2=V_B_0, where B_0 is the unique simplex in S with B_1∩B_2∩S=B_0∩S

The bulk of this statement is taken from <cit.>. Because we assert something slightly more detailed, we include a proof.

From the definition, we have that for B, B'∈S and K,K'∈Λ with B⊂K and B'⊂K', and any 0< ε,δ<1, then K_B(δ,ε)∩ K'_B'(δ,ε)≠∅ implies that K=K' and either B⊂B' or B'⊂B (this is <cit.> Lemma 4.1). This means that

V_B_1∩ V_B_2=⋃_B'∈ S_B_1∩ S_B_2 ⋃_K⊃ B' ⋃_i=0^m K_B'(δ_i,ε_i)

Since S_B_1∩ S_B_2=∅ ⇔ B_1∩B_2∩S=∅ and otherwise S_B_1∩ S_B_2=S_B_0 where B_0 is the simplex such that B_0∩S=B_1∩B_2∩S, the statements follow.

For B in S, m≥ 1, and 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1, the set V_B is open in Λ and (m-1)-connected.

Note that Gabrielov and Vorobjov give V_B as

V_B=⋃_B'∈ S_B ⋃_K⊃ B ⋃_i=0^m K_B'(δ_i,ε_i)

where the union is only taken over those K∈Λ with B⊂K. However, we need to define V_B as it appears here in Definition <ref> in order for Proposition <ref> to be stated and used as it is in <cit.>. Lemma <ref> holds with the updated definition; in the proof of <cit.> Lemma 4.5, one needs only to define the sets U_B',i for B'∈ S_B as

U_B',i=⋃_K⊃ B' K_B'(δ_i,ε_i)

rather than

U_B',i=⋃_K⊃ B K_B'(δ_i,ε_i)

(i.e., one must take the union over all simplices K of Λ with B'⊂K, and not just those with B⊂K). Then the updated sets U_B',i for B'∈ S_B and 0≤ i≤ m cover the updated V_B, but the intersection condition and hence the nerve of this family remains unchanged, and so the argument in <cit.> continues to hold.

§.§ Equivariance of the map τ

The existence of homomorphisms τ_#,k: π_k(V)→π_k(S) and τ_*,k: H_k(V)→ H_k(S) for 0≤ k≤ m is given in <cit.> Theorem 4.8. In order to show that we may construct these functions equivariantly, we must dissect the proofs there and describe more explicitly some of the details involved in applying the nerve theorem to V.
We will in fact obtain an equivariant map τ:V→ S on the level of sets, inducing equivariant maps of (pointed) homotopy and homology groups which are isomorphisms or epimorphisms for the promised indices.

Our first goal is to endow Λ with a regular CW complex structure fine enough to allow us to write V as a union of cells.

Let K be a simplex of some simplicial complex Λ, and say K has vertex set {v_0,…,v_d}.

* Let 𝒞_K,vert={L∈Λ| L⊂ K}.
* For I⊂{v_0,…, v_d} and a given 0<ε<1, let

C_K,I,>,ε={∑_i=0^d t_iv_i∈ K|∑_i∈ I t_i >ε}
C_K,I,<,ε={∑_i=0^d t_iv_i∈ K|∑_i∈ I t_i <ε}
C_K,I,=,ε={∑_i=0^d t_iv_i∈ K|∑_i∈ I t_i =ε}

We denote 𝒞_K,I,ε={C_K,I,>,ε, C_K,I,<,ε, C_K,I,=,ε}.

Fix a simplex K of some simplicial complex Λ, let I be a subset of the vertex set of K, and let 0<ε<1. For a given subsimplex K' of K, let I_K' be the intersection of I with the vertex set of K'. Then

⋃_K'⊂K 𝒞_K', I_K',ε

gives a regular CW decomposition of K into convex polyhedra, where the union is taken over all simplices K'⊂K.

Each cell of our collection is a convex polyhedron more or less by definition, and hence a regular cell in a fairly immediate manner. We have that for each simplex K'⊂K, 𝒞_K',I_K',ε gives a partition of K', and so since the simplices K' partition K, our collection indeed forms a partition of K.

It remains to show that for a cell C in our collection, ∂ C is contained in a union of cells of lower dimension. Say C=C_K',I_K',=,ε. Then the boundary of C is the union of those cells C_K”,I_K”,=,ε, K”⊊K', which are nonempty (so, those for which the vertex set of K” is neither contained in nor disjoint from I). If C=C_K', I_K', >,ε, then ∂ C consists both of C_K',I_K',=,ε together with the cells in its boundary and those nonempty cells C_K”,I_K”,>,ε corresponding to simplices K”⊊ K' (i.e. where the vertex set of K” is not disjoint from I). The case of C=C_K', I_K', <,ε is identical save that in the condition for nonemptiness we must replace I with its complement. Hence this collection indeed gives a regular CW decomposition of K.

We return to our primary setting, in which Λ is a symmetric triangulation of A adapted to S. For K=K(c_0,…, c_q) a simplex of Λ and 0< δ, ε< 1, we let

𝒞_K,δ,ε={C=L∩⋂_I⊂{c_0,…,c_q}C_K,I,δ∩⋂_I⊂{c_0,…,c_q}C_K,I,ε | L∈𝒞_K,vert, C_K,I,δ∈𝒞_K,I,δ and C_K,I,ε∈𝒞_K,I,ε for each I, and C≠∅}

and let 𝒞_δ,ε=⋃_K∈Λ𝒞_K,δ,ε.

For any 0<δ,ε<1, 𝒞_δ,ε gives a regular CW decomposition of Λ which is symmetric relative to G and in which each cell is a convex polyhedron.

We have that for each K∈Λ, the collection of cells of the second barycentric subdivision of Λ contained in K gives a regular CW decomposition of K into convex polyhedra. Lemma <ref> shows that for each subset I of the vertex set of K, ⋃_K'⊂K𝒞_K', I_K',ε and ⋃_K'⊂K𝒞_K', I_K',δ each gives a regular CW decomposition of K. Since each set in 𝒞_δ,ε is an intersection of one set from each of these decompositions across the various simplices K of Λ, the cells of 𝒞_δ,ε remain convex polyhedra. Also for each K, 𝒞_K,δ,ε remains a partition of K, and so 𝒞_δ,ε gives a partition of Λ.

Let C⊂ K for some C∈𝒞_δ,ε. We simplify the notation above to write C=C_1∩⋯∩ C_l for C_i cells of our various decompositions. Then we have

∂ C⊂⋃_i=1^l C_1∩…∩C_i-1∩∂ C_i∩C_i+1∩…∩C_l=⋃_J⊊{1,…,l}⋂_i∈ J C_i ∩⋂_i∈{1,…, l}∖ J∂ C_i

after decomposing each C_i=∂ C_i ∪ C_i and distributing. We know ∂ C_i is a union of cells of dimension lower than that of C_i from the decomposition of K corresponding to the decomposition of K from which C_i comes.
Replacing each instance of ∂ C_i with this decomposition for each 1≤ i≤ l in the expression above and again distributing, rewriting, and discarding empty intersections, we obtain that ∂ C is contained in a union of cells of 𝒞_δ,ε all of which have dimension lower than that of C. Hence 𝒞_δ,ε gives the desired CW decomposition of Λ.

To show symmetry, let C∈𝒞_δ,ε with C⊂ K for a simplex K in Λ. Then C is the set of points of the form ∑ t_c_i c_i, with the c_i being the vertices of K, such that the coefficients t_c_i satisfy certain conditions. The linearity of the action of G implies that g(C) then consists of points ∑ t_c_i g(c_i) such that the coefficients t_c_i satisfy those same conditions. From our definition of 𝒞_δ,ε, this means g(C) is a cell contained in 𝒞_g(K),δ,ε⊂𝒞_δ,ε, as desired.

Say V=V(ε_0, δ_0,…, ε_m,δ_m), and let δ=min{δ_0,…,δ_m} and ε=max{ε_0,…,ε_m}. For each B a simplex of S, V_B can be written as a union of cells of 𝒞_δ,ε, and hence so can V. Furthermore, the set {C∈𝒞_δ,ε| C⊂ V} is symmetric under the action of G.

First, observe that if a simplex K has vertex set {c_0,…,c_q}, then the simplices of the barycentric subdivision of K belonging to K correspond to the subsets of K given by

0<t_c_i_1=⋯=t_c_i_λ_1<t_c_i_λ_1+1=⋯=t_c_i_λ_2< …<t_c_i_λ_l=⋯=t_c_i_q<1

for various 1≤λ_1≤⋯≤λ_l≤ q and permutations i_1,…, i_q of 1,…, q. Using this observation to translate the condition “t_b_ν>t_c_μ for all b_ν∈ I and c_μ∈ (J∖ I)" from the definition of K_B(δ,ε), we see we have defined 𝒞_δ,ε in such a way that for each pair K and B with K a simplex of Λ, B a simplex of S, and B⊂K, each cell of 𝒞_δ,ε is either contained in or disjoint from the set K_B(δ,ε) (compare Notation <ref> and Definition <ref>). However, by Lemma <ref>, we have that K_B(δ,ε)=K_B(δ_0,ε_0)∪⋯∪ K_B(δ_m,ε_m). Now, for B any simplex contained in S, we have that V_B is the union of the sets K_B'(δ,ε) for B'∈ S_B and K in Λ with B'⊂K, and hence is a union of cells of 𝒞_δ,ε.

Let C∈𝒞_δ,ε with C⊂ K for K a simplex in Λ. If C⊂ K_B(δ,ε) for some appropriate K and B, then g(C)⊂ g(K)_g(B)(δ,ε). Hence the set of cells of 𝒞_δ,ε contained in V is symmetric under the action of G.

Now, we may begin assembling our equivariant map τ: V→ S.

Let N_V denote the nerve of the covering of V by the family {V_B}_B∈S (see Definition <ref>). Then for m≥ 1 and 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1, there exists an equivariant map ψ_V:V→N_V such that the induced homomorphism (ψ_V)_#,k:π_k(V,*)→π_k(N_V,ψ_V(*)) is an isomorphism for k≤ m-1 and an epimorphism for k=m.

By Lemma <ref>, Λ together with the collection 𝒞_δ,ε is a symmetric regular CW complex whose cells are all convex polyhedra. Each set in {V_B}_B∈S is open in Λ (Lemma <ref>) and is a union of cells of 𝒞_δ,ε (Lemma <ref>), and the family {V_B}_B∈S has the property that g(V_B)∈{V_B} for each g∈ G and B∈S (Proposition <ref>). Furthermore, for each finite nonempty intersection we have V_B_1∩…∩ V_B_t = V_B_0 for some B_0∈S (Proposition <ref>), and so this is (m-1)-connected (Lemma <ref>). The map ψ_V:V→N_V given as in Corollary <ref> is then equivariant. We also see that, when restricted to each connected component of V, ψ_V induces a homomorphism of homotopy groups which is an isomorphism for k≤ m-1 and an epimorphism for k=m. Hence (ψ_V)_#,k:π_k(V,*)→π_k(N_V,ψ_V(*)) is also an isomorphism for k≤ m-1 and an epimorphism for k=m.

Let bret(S) be the set of simplices in the barycentric subdivision of Λ which are contained with their closure in S (see Definition <ref>). For a simplex B∈S, let B^S denote the intersection of the closure of B with S.
Let N_V be the nerve of the covering of V by the family {V_B}_B∈S, and let N_bret(S) denote the nerve of the covering of bret(S) by the family {bret(B^S)}_B∈S. Then there exists an equivariant homeomorphism ξ: N_V→N_bret(S).

The homeomorphism ξ: N_V→N_bret(S) is described in the proof of <cit.> Theorem 4.8. Both families {V_B} and {bret(B^S)} are indexed over the simplices of S. Furthermore, by Proposition <ref> we see that σ⊂S is a simplex of N_V iff σ is a simplex of N_bret(S). Thus, since ξ is induced from the identity map on the vertex sets of N_V and N_bret(S), ξ is equivariant.

Let N_bret(S) denote the nerve of the covering of bret(S) by the family {bret(B^S)}_B∈S. Then there exists an equivariant map ψ_bret(S): N_bret(S)→ S which induces a homotopy equivalence.

Observe that bret(S) is a full simplicial complex, each bret(B^S) is a subcomplex of bret(S), and that the family {bret(B^S)}_B∈S is invariant under the action of G. Furthermore, any nonempty intersection bret(B_1^S)∩…∩bret(B_l^S) is equal to bret(B_0^S) for some B_0∈S.

Let v_B_0 be the vertex of the second barycentric subdivision of Λ corresponding to the simplex B_0∈S. For Δ a simplex of the second barycentric subdivision of Λ having v_B_0 as one of its vertices, we may define a map h_Δ:[0,1]×Δ→Δ by

h_Δ(t,x)=t∑_v_i≠ v_B_0 t_v_i v_i+(1-t∑_v_i≠ v_B_0 t_v_i)v_B_0

where x=∑ t_v_i v_i∈Δ. Since we have agreement on the boundaries, and since bret(B_0^S) is the union of all such Δ which are contained entirely in S, we may extend to obtain a map h:[0,1]×bret(B_0^S)→bret(B_0^S). The map h has the property that h(1,-)=𝕀 and h(0,-)=v_B_0, and by linearity together with the fact that for such g, g(B_0)=B_0, we see that g(h(t,x))=h(t,g(x)) for any g with {g(bret(B_1^S)),…,g(bret(B_l^S))}={bret(B_1^S),…,bret(B_l^S)}.

Thus by Theorem <ref>, we obtain an equivariant map N_bret(S)→bret(S) which induces a homotopy equivalence. However, as described in Definition <ref>, the embedding bret(S)↪ S=Φ^-1(S) induces a homotopy equivalence (and is equivariant). Composing gives us the desired equivariant map ψ_bret(S):N_bret(S)→ S.

Now, to establish the equivariance of τ_#,k and τ_*,k we need only compose these three maps.

(cf. <cit.> Theorem 4.8) For m>0, 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1 and V=V(ε_0,δ_0,…, ε_m, δ_m), there is an equivariant map τ:V→ S inducing equivariant homomorphisms τ_#,k:π_k(V,*”)→π_k(S,*') and τ_*,k:H_k(V)→ H_k(S) with τ_#,k, τ_*,k isomorphisms for every k≤ m-1 and τ_#,m, τ_*,m epimorphisms. Moreover, if m≥dim(S), then τ induces a homotopy equivalence V≃ S.

Lemmas <ref>, <ref>, and <ref> demonstrate the equivariance of the maps

ψ_V:V→N_V, ξ:N_V→N_bret(S), and ψ_bret(S):N_bret(S)→ S

which are used in <cit.> Theorem 4.8 to construct the homomorphisms τ_k. Accordingly, let τ=ψ_bret(S)∘ξ∘ψ_V:V→ S, and consider the induced (equivariant) homomorphisms of homotopy groups

τ_#,k=(ψ_bret(S))_#,k∘ξ_#∘ (ψ_V)_#,k:π_k(V,*”)→π_k(S,*')

Since ψ_bret(S) induces a homotopy equivalence and ξ is a homeomorphism, the Whitehead Theorem on weak homotopy equivalence (cited as Theorem <ref> here) states that on the level of homotopy groups both of these maps induce isomorphisms. Hence τ_#,k, like (ψ_V)_#,k, is an isomorphism for k≤ m-1 and an epimorphism for k=m.

Gabrielov and Vorobjov in <cit.> next apply the Whitehead Theorem on homotopy and homology (cited as Theorem <ref> here). The equivariant map τ:V→ S induces an equivariant homomorphism

τ_*,k=(ψ_bret(S))_*,k∘ξ_*∘ (ψ_V)_*,k: H_k(V)→ H_k(S)

On each connected component of V and S, τ_*,k is an isomorphism of homology groups for 1≤ k≤ m-1 and an epimorphism when k=m.
Hence τ_*,k itself is an isomorphism for k≤ m-1 and an epimorphism when k=m.

As observed in the proof of Theorem 4.8 in <cit.>, when m≥dim(S) we may apply part (ii) of Theorem <ref> to see that the map ψ_V induces a homotopy equivalence. Thus, τ:V→ S also induces a homotopy equivalence.

We have shown that there exists an equivariant map V→ S which induces a homotopy equivalence between the two spaces. Unfortunately, this does not itself guarantee that there is an equivariant homotopy inverse S→ V. Theorem <cit.> together with the contraction discussed in Definition <ref> give us an equivariant homotopy inverse of ψ_bret(S), and ξ is a homeomorphism (and so ξ^-1 must also be equivariant). All that is lacking is a fully equivariant version of Björner's nerve theorem. It seems plausible that one could prove this using the methods of Hess and Hirsch in <cit.>, but for the purposes of this paper we do not need an equivariant map S→ V.

For the remainder of this section, we follow <cit.> and choose our triangulation in such a way as to account for our families of sets representing S in A. Consider the projection ρ: A×[0,1]→ [0,1]. Since A× [0,1] is closed, bounded, definable, and symmetric under the induced action given by g(x,t)=(g(x),t), the projection ρ is continuous, definable, and symmetric relative to this action of G on A× [0,1]. Since each S_δ is assumed to be a symmetric set, the set S'=⋃_δ∈ (0,1) S_δ×{δ}⊂ A× [0,1] is also symmetric relative to our action on A× [0,1]. So, via Theorem <ref>, let (Λ', Φ') be an equivariant triangulation of ρ which is compatible with S'. Then we take (Λ,Φ) to be the triangulation induced by Λ' on ρ^-1(0). This is a symmetric triangulation of A adapted to S, and so all conclusions stated thus far in Section <ref> hold for this particular triangulation.

§.§ The Main Theorem: Definable Case

Recall that in Subsection <ref>, we assumed that S was symmetrically marked without specifying the relation. In the general definable case, we take Δ_1 to be soft in Δ_2 for each pair (Δ_1,Δ_2) with Δ_1 a (proper) subsimplex of Δ_2. This means that for B=B(b_0,…, b_p) a simplex in S, we always have C(B)={b_0}. This is trivially a symmetric marking.

In the definable case, while the proofs in <cit.> also utilize sets of the form V(ε_0,δ_0,…,ε_m,δ_m), they ultimately relate S to a similarly defined set V”=V”(ε”). We now outline the construction of V”.

Say that B=B(b_0,…, b_p) is a simplex in S and K=K(c_0,…, c_q) is a simplex in Λ with B⊂K. Let I={b_0,…, b_p} and J={c_0,…, c_q}, and let 0<ε<1. We define

K_B(ε):={∑_c_ν∈ J t_c_ν c_ν∈ K(c_0,…,c_q) | ∑_b_ν∈ I t_b_ν>1-ε, and ∀ b_ν∈ I ∀ c_μ∈ (J∖ I), t_b_ν>t_c_μ}

For a given parameter 0<ε”<1, we let V”(ε”) be the union of all K_B(ε”) for B a simplex of S and K a simplex of Λ with B⊂K.

We endow Λ with a cell structure adapted to V” in a manner similar to that for V. For K a simplex of Λ with vertex set {v_0,…, v_d} and 0<ε<1, let

𝒞_K,ε={C=L ∩⋂_I⊂{v_0,…,v_d}C_K,I,ε | L∈𝒞_K,vert, C_K,I,ε∈𝒞_K,I,ε for each I, and C≠∅}

Let 𝒞_ε=⋃_K∈Λ𝒞_K,ε.

For any 0<ε<1, 𝒞_ε gives a regular CW decomposition of Λ in which each cell is a convex polyhedron.

This proof is analogous to that of Lemma <ref>.

Let 0<ε”<1. Given B a simplex in S, let U_B be the union of all sets K_B(ε”) for K a simplex of Λ with B⊂K. Then the family {U_B| B a simplex of Λ contained in S} is symmetric under the induced action of G, and hence the set V”(ε”)=⋃_B∈S U_B is symmetric.

This is analogous to (and slightly simpler than) the proof of Proposition <ref>.

Fix 0<ε”<1 and let V”=V”(ε”).
Then U_B can be written as a union of cells of 𝒞_ε”, and hence so can V”. Furthermore, the set {C∈𝒞_ε”| C⊂ V”} is symmetric under the action of G.

Again, this is analogous to the corresponding statement, Lemma <ref>.

For 0<ε”<1, there is an equivariant map τ”: V”→ S inducing a homotopy equivalence, together with equivariant isomorphisms of homotopy and homology groups τ_#,k”:π_k(V”,*”)→π_k(S,*') and τ_*,k”:H_k(V”)→ H_k(S).

We apply Theorem <ref> to the covering of V” by the family {U_B}_B∈S. Gabrielov and Vorobjov in the proof of Lemma 5.3 in <cit.> argue that any nonempty intersection U_B_1∩…∩ U_B_l is contractible, and so part (ii) of Theorem <ref> supplies an equivariant map ψ_V”:V”→N_V” inducing a homotopy equivalence. Gabrielov and Vorobjov also assert that such an intersection U_B_1∩…∩ U_B_l is nonempty iff B_1,…,B_l, suitably reordered, form a flag of simplices of S. This means that there is an (equivariant) homeomorphism ξ”:N_V”→Δ(S). By Proposition <ref> and the remark immediately following, we have an equivariant homeomorphism Δ(S)→bret(S), and the (equivariant) inclusion bret(S)↪ S=Φ^-1(S) induces a homotopy equivalence. Hence, composing, we obtain the desired equivariant map τ”:V”→ S, which induces equivariant isomorphisms τ_#,k”:π_k(V”,*”)→π_k(S,*') and τ_*,k”:H_k(V”)→ H_k(S) for all k≥ 0.

We include the next statement from <cit.> for reference, though no claims of equivariance need be added here. For

0<ε_0'≪⋯≪ε_i'≪ε_i≪δ_i≪δ_i'≪⋯≪δ_m'≪ε”,

if V'=V(ε_0',δ_0',…, ε_m',δ_m'), T=T(ε_0,δ_0,…, ε_m,δ_m), and V”=V”(ε”), then we have

V'⊂Φ^-1(T)⊂ V”

For 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ε”≪ 1 and for every k≤ m, the inclusion ζ:Φ^-1(T)↪ V” induces equivariant epimorphisms

ζ_#,k:π_k(T,*)→π_k(V”,*”) and ζ_*,k:H_k(T)→ H_k(V”)

That ζ_#,k and ζ_*,k are epimorphisms is justified in <cit.>. That they are equivariant is clear. This brings us to the equivariant version of the main theorem of <cit.> for the definable case.

For 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1 and every 0≤ k≤ m, there is an equivariant map ψ:T→ S inducing equivariant epimorphisms

ψ_#,k :π_k(T,*)→π_k(S,*')
ψ_*,k :H_k(T)→ H_k(S)

and in particular, dim(H_k(S))≤dim(H_k(T)).

This follows from Lemmas <ref> and <ref>, taking

ψ=τ”∘ζ:T→ S

§.§ The Main Theorem: Separable Case

The proof of part (ii) of Theorem 1.10 in <cit.> requires that our family {S_δ}_δ>0 have an additional property referred to as separability. In particular, this property holds in the constructible case.

For a family {S_δ}_δ>0 representing S in A and a triangulation (Λ,Φ) of A, the pair (Λ,{S_δ}_δ>0) is called separable if we have, for any pair (Δ_1,Δ_2) of simplices of S with Δ_1 a subsimplex of Δ_2,

Δ_2∩Φ^-1(S_δ)∩Δ_1=∅⇔Δ_1⊂Δ_2∖Φ^-1(S_δ)

for all sufficiently small δ>0.

In the constructible case, (Λ,{S_δ}_δ>0) is separable. So, for the remainder of the section, we will assume we are in the definable case but that (Λ,{S_δ}_δ>0) is separable, and all claims made will in particular apply to the constructible case. Provided with separability, we may define the hard/soft relation for a pair (Δ_1,Δ_2) of simplices of S with Δ_1 a subsimplex of Δ_2 by saying that Δ_1 is a soft subsimplex of Δ_2 if Δ_2∩Φ^-1(S_δ)∩Δ_1=∅ for all sufficiently small δ, and Δ_1 is a hard subsimplex of Δ_2 otherwise.

Given sequences of parameters (ε^(j),δ^(j))=ε^(j)_0,δ^(j)_0,…, ε^(j)_m,δ^(j)_m, we will denote the sets defined relative to these parameters by V^(j)=V(ε^(j),δ^(j)) and T^(j)=T(ε^(j),δ^(j)). We cite and where necessary adjust a few results from <cit.> about relative containments of such sets.
For

0<ε^(1)_0≪⋯≪ε^(1)_i≪ε^(2)_i≪δ^(2)_i≪δ^(1)_i≪⋯≪δ^(1)_m≪ 1

we have Φ^-1(T^(1))⊂ V^(2) and V^(1)⊂Φ^-1(T^(2)).

For

0<ε^(1)_0≪⋯≪ε^(1)_i≪ε^(2)_i≪δ^(2)_i≪δ^(1)_i≪⋯≪δ^(1)_m≪ 1

the inclusion maps T^(1)↪ T^(2) and V^(1)↪ V^(2) are homotopy equivalences.

These inclusions therefore induce equivariant isomorphisms of homotopy and homology groups. At least for T^(1) and T^(2), we will want an equivariant map in the opposite direction.

For

0<ε^(1)_0≪⋯≪ε^(1)_i≪ε^(2)_i≪δ^(2)_i≪δ^(1)_i≪⋯≪δ^(1)_m≪ 1

there is an equivariant map T^(2)→ T^(1) which is a homotopy inverse of the inclusion T^(1)↪ T^(2).

Gabrielov and Vorobjov in the proof of their Lemma 5.11 use Hardt Triviality to show that T^(1) is a strong deformation retract of T^(2). Equipped with our equivariant version of Hardt Triviality, we demonstrate that the homotopy they construct is equivariant.

Let T⊂R^n×R^2m+2 be the union of the sets T(ε_0,δ_0,…,ε_m,δ_m) over 0<ε_i,δ_i<1, and let ρ:T→R^2m+2 be the projection on the second coordinate. Then ρ is a symmetric function, to which we may apply Theorem <ref>. We obtain a partition of R^2m+2 into a finite number of definable sets A_i over which ρ is definably trivial. Subdividing further, we may assume each A_i is connected (using the fact that sets definable over R have a finite number of connected components). Let A_0 be the element of this partition which contains both (ε^(1),δ^(1)) and (ε^(2),δ^(2)) for 0<ε^(1)_0≪⋯≪ε^(1)_i≪ε^(2)_i≪δ^(2)_i≪δ^(1)_i≪⋯≪δ^(1)_m≪ 1. Then in particular, there is an equivariant map

h:ρ^-1(A_0)→ρ^-1(ε^(2),δ^(2))× A_0=T^(2)× A_0

Choose a definable simple curve γ:[0,1]→ A_0 with γ(0)=(ε^(2),δ^(2)) and γ(1)=(ε^(1),δ^(1)). This means ρ^-1(γ([0,1])) is homeomorphic to T^(2)×γ([0,1]) via an equivariant map. For any 0≤ t≤ t'≤ 1, we can use this to define an equivariant homeomorphism Φ_t,t':ρ^-1(γ(t'))→ρ^-1(γ(t)). After possibly adjusting the point (ε^(2),δ^(2)) to assume that ρ^-1(γ(t'))⊂ρ^-1(γ(t)) for all 0≤ t≤ t'≤ 1, Gabrielov and Vorobjov construct the following homotopy F:T^(2)× [0,1]→ T^(2).

Let (x,t)∈ T^(2)× [0,1]. If there is a t'≤ t such that x∈ρ^-1(γ(t')) but x∉ρ^-1(γ(t”)) for any t”>t', let F(x,t)=Φ_t',t(x). Otherwise, let F(x,t)=x. Since for any 0≤ t'≤ 1, ρ^-1(γ(t')) is symmetric, it follows that F is equivariant. Then F(-,1):T^(2)→ T^(1) is the desired equivariant homotopy inverse.

If desired, we could prove that the inclusion V^(1)↪ V^(2) has an equivariant homotopy inverse in a similar manner.

Now, we are ready to relate T and V.

In the separable case, for 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1 and every k≥ 0, there is an equivariant map ζ:T→ V inducing equivariant isomorphisms ζ_#,k:π_k(T)→π_k(V) and ζ_*,k:H_k(T)→ H_k(V) (and hence a homotopy equivalence T≃ V).

To show the existence of isomorphisms ζ_#,k and ζ_*,k, <cit.> employs four different sequences (ε^(j),δ^(j))=ε_0^(j), δ_0^(j),…, ε_m^(j), δ_m^(j). Lemma <ref> gives us the chain of inclusions

V^(1) ↪ Φ^-1(T^(2)) ↪ V^(3) ↪ Φ^-1(T^(4))

(with the three inclusions denoted ι', ι, and ι” respectively) for 0<ε_0^(j-1)≪⋯≪ε_i^(j-1)≪ε_i^(j)≪δ_i^(j)≪δ_i^(j-1)≪⋯≪δ_m^(j-1)≪ 1 (for j=2,3,4). The argument in <cit.> uses the fact that ι∘ι' and ι”∘ι are homotopy equivalences (which follows from Lemma <ref>) to obtain that ι induces isomorphisms ι_#,k:π_k(T^(2))→π_k(V^(3)) and hence also isomorphisms ι_*,k:H_k(T^(2))→ H_k(V^(3)). Since ι is the inclusion map, the maps ι_#,k and ι_*,k are equivariant. Lemma <ref> gives an equivariant map η:T^(3)→ T^(2) that induces a homotopy equivalence. Let ζ=ι∘η:T^(3)→ V^(3).
Then ζ is equivariant, and if we let ζ_#,k:π_k(T^(3))→π_k(V^(3)) and ζ_*,k:H_k(T^(3))→ H_k(V^(3)), we have that ζ_#,k and ζ_*,k are equivariant isomorphisms for all k≥ 0. Again, though ζ induces a homotopy equivalence T^(3)→ V^(3) and a similar argument exchanging the roles of T and V would produce an equivariant map V^(3)→ T^(3) that also induces a homotopy equivalence between these spaces, these maps are not necessarily homotopy inverses of one another.

In the separable (and so, in the constructible) case, for 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1, there is an equivariant map ψ:T→ S inducing equivariant homomorphisms ψ_#,k:π_k(T,*)→π_k(S,*') and ψ_*,k:H_k(T)→ H_k(S) which are isomorphisms for 1≤ k≤ m-1 and epimorphisms for k=m. In particular, H_k(T)≅ H_k(S), and if m≥dim(S), ψ induces a homotopy equivalence T≃ S.

Take ψ=τ∘ζ:T→ S. The result follows from Theorem <ref> and Theorem <ref>.

§.§ Summary

We set out all of the maps involved in constructing ψ in both the definable and separable (so in particular, constructible) cases. In the chains below, double-headed arrows (⟷) indicate homeomorphisms, single arrows indicate homotopy equivalences, and hooked arrows (↪) indicate inclusion maps; we note explicitly where an equivariant map in the reverse direction has not been demonstrated (dashed arrows in the original diagrams). We let m>0 and T=T(ε_0,δ_0,…,ε_m,δ_m) for 0<ε_0≪δ_0≪⋯≪ε_m≪δ_m≪ 1. Φ denotes the homeomorphism of the triangulation Φ:Λ→ A described at the beginning of Subsection <ref> and refined immediately before Subsection <ref>.

Definable Case. Let V''=V''(ε'') for any δ_m≪ε''≪ 1. The maps assemble into the chain

T ⟷ Φ^-1(T) ↪ V'' → N_V'' ⟷ Δ(S) ⟷ (S) ↪ Φ^-1(S) ⟷ S,

where T⟷Φ^-1(T) is the homeomorphism Φ^-1_T; the inclusion Φ^-1(T)↪ V'' is ζ (Lemma <ref>); V''→ N_V'' is the homotopy equivalence ψ_V'' of Theorem <ref>(ii), for which no equivariant map in the reverse direction has been demonstrated; N_V''⟷Δ(S) is the homeomorphism ξ'' (Lemma <ref>); Δ(S)⟷(S) is the homeomorphism of Proposition <ref>; the inclusion (S)↪Φ^-1(S) is the homotopy equivalence of Lemma <ref>; and Φ^-1(S)⟷ S is the homeomorphism Φ_S. The map ζ induces epimorphisms of homotopy and homology groups for k≤ m.

Separable/Constructible Case. Let S_r=S∩B(0,r) for some sufficiently large r, V=V(ε_0,δ_0,…,ε_m,δ_m), and, for a sequence 0<ε_0'≪⋯≪ε_i'≪ε_i≪δ_i≪δ_i'≪⋯≪δ_m'≪ 1, let T'=T(ε_0',δ_0',…,ε_m',δ_m'). The maps assemble into the chain

T → T' ⟷ Φ^-1(T') ↪ V → N_V ⟷ N_(S_r) → (S_r) ↪ Φ^-1(S_r) ⟷ S_r ↪ S,

where T→ T' is the map r of Lemma <ref>, a homotopy inverse of the inclusion T'↪ T (Lemma <ref>); T'⟷Φ^-1(T') is the homeomorphism Φ^-1_T'; the inclusion Φ^-1(T')↪ V is ι (Lemma <ref>), for which an equivariant homotopy inverse has not been demonstrated; V→ N_V is ψ_V (Theorem <ref>); N_V⟷N_(S_r) is the homeomorphism ξ (Lemma <ref>); N_(S_r)→(S_r) is the homotopy equivalence of Theorem <ref>; the inclusion (S_r)↪Φ^-1(S_r) is the homotopy equivalence of Lemma <ref>; Φ^-1(S_r)⟷ S_r is the homeomorphism Φ_S_r; and the inclusion S_r↪ S admits a homotopy inverse S→ S_r (Lemma <ref>), though its equivariance is addressed only in the remark below. The map ψ_V induces isomorphisms of the homotopy and homology groups of T and S for 0≤ k≤ m-1 and epimorphisms for k=m. If m≥dim(S), ψ_V induces a homotopy equivalence.

Making use of the equivariant version of Hardt Triviality and with a bit of care, one may see that the map S→ S_r is also equivariant. It seems likely that one could construct an equivariant homotopy inverse for ψ_V'' in the definable case or for ψ_V for large enough m in the separable case. It might be interesting to investigate the existence of an equivariant homotopy inverse of ι, but that goes far beyond the requirements of our application.

§ APPLICATION TO COHOMOLOGY OF SYMMETRIC SEMIALGEBRAIC SETS

Basu and Riener in <cit.> investigate the cohomology of semialgebraic sets defined by symmetric polynomials of bounded degree.
They develop an algorithm for computing the first l Betti numbers of such a set S, with complexity polynomially bounded in the dimension, n, and the number of symmetric polynomials, s. To accomplish this, Basu and Riener develop results on the structure of the cohomology groups of P-closed semialgebraic sets, which allow for a significant reduction in the number of computations which must be performed. To extend these results to an arbitrary semialgebraic set, they apply the Gabrielov-Vorobjov construction. From the isomorphisms of homology groups, it follows that S and its approximating set have the same Betti numbers (at least up to a chosen degree). However, the structural results rely on decomposing the cohomology spaces as 𝔖_n-modules. The Gabrielov-Vorobjov construction alone does not guarantee that the cohomology spaces of S and its compact approximation have the same 𝔖_n-module structure, but our equivariant version does. Utilizing our new equivariance results, we strengthen a few results of Basu and Riener in <cit.>.

Let R[X_1,…,X_n]^𝔖_n_≤ d denote the space of symmetric polynomials over R of degree at most d (see <cit.> Notation 4). By H^k(S) we mean the kth cohomology space (with rational coefficients) of S. If S is a symmetric semialgebraic set, then the action of 𝔖_n on S induces an action of 𝔖_n on H^*(S), giving H^*(S) the structure of a finite dimensional 𝔖_n-module. As such, each H^k(S) admits a decomposition into a direct sum of irreducible 𝔖_n-modules

H^k(S) ≅_𝔖_n ⊕_λ⊢ n m_k,λ(S) S^λ,

where the sum is taken over all partitions λ of the integer n and S^λ is the particular irreducible 𝔖_n-module (the Specht module) corresponding to λ. The integer m_k,λ(S)∈Z_≥ 0 is called the multiplicity of S^λ in H^k(S). The dimension of each S^λ may be computed (via what is known as the hook length formula), and so by this isotypic decomposition, we have reduced the task of computing the kth Betti number b_k(S)=dim(H^k(S)) to that of computing the various multiplicities m_k,λ(S). Appendix 6 of <cit.> summarizes the classical results from the representation theory of finite groups pertaining to the above decomposition. For S a P-closed set, Basu and Riener in <cit.> Theorem 4 prove that the multiplicities corresponding to partitions whose lengths are too long or too short must be zero. This dramatically reduces the number of multiplicities one needs to compute. Though <cit.> defines the Betti number b_k(S) in terms of cohomology spaces rather than homology groups, in this setting it holds that H_k(S)≅_𝔖_n H^k(S), and so the distinction may be set aside (see <cit.> Remark 1).

Let S be a P-semialgebraic set for some finite P⊂R[X_1,…,X_n]^𝔖_n_≤ d, where d≥ 2. For algorithmic reasons, Basu and Riener recast the Gabrielov-Vorobjov results in terms of Puiseux series. Let R⟨ε⟩ be the real closed field of algebraic Puiseux series in ε with coefficients in R, and let

R⟨ε_m,ε_m-1,…,ε_0⟩ = R⟨ε_m⟩⟨ε_m-1⟩⋯⟨ε_0⟩

(see <cit.> Notation 15). Then in the unique ordering on R⟨ε_m,…,ε_0⟩, we have 0<ε_0≪ε_1≪…≪ε_m (where here ≪ indeed denotes `infinitesimally smaller than'). In this phrasing, the equivariant Gabrielov-Vorobjov construction (Theorem <ref> part (ii)) gives us a P'-closed and bounded semialgebraic set

S'_m ⊂ R⟨δ_m,ε_m,δ_m-1,ε_m-1,…,δ_0,ε_0⟩^n

and equivariant homomorphisms ψ_#,k:π_k(S'_m,*)→π_k(S,*') and ψ_*,k:H_k(S'_m)→ H_k(S) which are isomorphisms for 0≤ k≤ m-1 and epimorphisms for k=m. By construction (see Section <ref>) we know that

P'⊂R⟨δ_m,ε_m,…,δ_0,ε_0⟩[X_1,…,X_n]^𝔖_n_≤ d

and if P has cardinality s, P' has cardinality 4m(s+1).
See <cit.> Section 5.2 for more details on this rephrasing.

Because we have an isomorphism H^k(S_m')→ H^k(S) for any 0≤ k≤ m-1 which is equivariant relative to 𝔖_n, we know that these spaces have the same 𝔖_n-module structure and hence the same isotypic decomposition:

⊕_λ⊢ n m_k,λ(S)S^λ ≅_𝔖_n H^k(S) ≅_𝔖_n H^k(S_m') ≅_𝔖_n ⊕_λ⊢ n m_k,λ(S_m')S^λ.

In particular, for each 0≤ k≤ m-1 and λ⊢ n, m_k,λ(S)=m_k,λ(S_m'). This allows us to make the following strengthenings of two of Basu and Riener's theorems in <cit.>. Let kS={λ⊢ n | m_k,λ(S)≠ 0} (see <cit.> Notation 5).

Let d,n∈Z_>0, d≥ 2, and let S⊂R^n be a P-semialgebraic set with P⊂R[X_1,…,X_n]^𝔖_n_≤ d. Then, for all λ⊢ n,

(i) m_k,λ=0 for k≤ℓ(λ)-2d+1, or equivalently, max_λ∈ kS ℓ(λ) < k+2d-1;

(ii) m_k,λ=0 for k≥ n-ℓ(^tλ)+d+1, or equivalently, max_λ∈ kS ℓ(^tλ) < n-k+d+1.

Note that Theorem 4 of <cit.> required S to be 𝒫-closed. The proof of Theorem 4 in <cit.> already uses the techniques discussed in Subsection <ref> to replace an arbitrary P-closed semialgebraic set by a bounded one that is equivariantly homotopy equivalent to the original set. We make use of the equivariant Gabrielov-Vorobjov construction to replace S with S'_m for m=dim(S). Since then m_k,λ(S)=m_k,λ(S_m') for all k≥ 0, the result follows from applying <cit.> Theorem 4 to S_m'.

Let D be an ordered domain contained in R, and let l,d≥ 0. There exists an algorithm which takes as input a finite set 𝒫⊂ D[X_1,…,X_n]^𝔖_n_≤ d and a 𝒫-formula ℱ, and computes the multiplicities m_k,λ(S) for each 0≤ k≤ l and λ⊢ n, as well as the Betti numbers b_k(S) for 0≤ k≤ l, where S is the realization of ℱ in R^n. The complexity of this algorithm, measured by the number of arithmetic operations in D, is bounded by (snd)^2^O(d+l). If D=Z and the bit-sizes of the coefficients of the input are bounded by τ, then the bit-complexity of the algorithm is bounded by (τ s n d)^2^O(l+d).

Note that Theorem 3 of <cit.> only guaranteed the existence of an algorithm computing the first l+1 Betti numbers of S. The algorithm in question appears as Algorithm 3 in <cit.>. Since we now have that m_k,λ(S)=m_k,λ(S'), the multiplicities computed in lines 15 and 19 of Algorithm 3 in <cit.> are in fact the multiplicities for our original semialgebraic set. We need only assign a value of 0 to those multiplicities m_k,λ with ℓ(λ)>k+2d-1 and then output the multiplicities as well as the Betti numbers.
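The computational content of the two statements above is easy to illustrate. In the Python sketch below (the helper names are ours, not from <cit.>), dim S^λ is evaluated with the hook length formula, and the two vanishing conditions are used to count how many partitions of n can actually carry a nonzero multiplicity; only for these does the algorithm need to do any work.

from math import factorial
from functools import lru_cache

def hook_dim(lam):
    """Dimension of the Specht module S^lam via the hook length formula."""
    n = sum(lam)
    conj = [sum(1 for r in lam if r > j) for j in range(lam[0])]  # transpose partition
    prod = 1
    for i, row in enumerate(lam):
        for j in range(row):
            prod *= (row - j) + (conj[j] - i) - 1                 # hook length of cell (i, j)
    return factorial(n) // prod

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        return [()]
    out = []
    for first in range(min(n, max_part), 0, -1):
        out += [(first,) + rest for rest in partitions(n - first, first)]
    return out

def admissible(n, k, d):
    """Partitions that may carry a nonzero multiplicity m_{k,lam}(S)."""
    keep = []
    for lam in partitions(n):
        ell, ell_t = len(lam), lam[0]       # lengths of lam and of its transpose
        if ell < k + 2 * d - 1 and ell_t < n - k + d + 1:
            keep.append(lam)
    return keep

n, k, d = 10, 2, 2
adm = admissible(n, k, d)
print(len(partitions(n)), "partitions of", n, "->", len(adm), "admissible")
# b_k(S) is then the sum of m_{k,lam}(S) * hook_dim(lam) over the admissible lam.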
A comprehensive study on the accuracy and generalization of deep learning-generated chemical ODE integrators

Han Li^a,b,1, Ruixin Yang^a,b,1, Yangchen Xu^a, Min Zhang^a,b, Runze Mao^a,b, Zhi X. Chen^a,b,*

^a State Key Laboratory of Turbulence and Complex Systems, Aeronautics and Astronautics, College of Engineering, Peking University, Beijing, 100871, China
^b AI for Science Institute (AISI), Beijing, 100080, China

===================================================================================================================================================

The application of deep neural networks (DNNs) holds considerable promise as a substitute for the direct integration of chemical source terms in combustion simulations. However, challenges persist in ensuring high precision and generalisation across various fuels and flow conditions. In this study, we propose and validate a consistent DNN approach for chemistry integration in a range of fuels and premixed flame configurations. This approach generates thermochemical base states from a set of low-dimensional laminar flames, followed by an effective perturbation strategy to enhance the coverage of the composition space for higher generalisation ability. A constraint criterion based on the heat release rate is then employed to remove the nonphysical perturbed states for improved accuracy. Without specific tuning, three DNNs are consistently trained for three representative fuels, i.e., hydrogen, ethylene and Jet-A. Comprehensive validations are conducted using 1-D laminar flames and two typical turbulent premixed flames. The DNN model predictions on various physical characteristics, including laminar and turbulent flame speeds, dynamic flame structures influenced by turbulence-chemistry interactions, and conditional scalar profiles, all exhibit good agreement with the results obtained from direct integration. This demonstrates the exceptional accuracy and generalisation ability of the proposed DNN approach. Furthermore, when the DNN is used in the simulation, a significant speed-up for the chemistry integration is achieved: approximately 50-fold for the ethylene/air flame and 90-fold for the Jet-A/air flame.

Keywords: Machine learning; Chemistry integration; Turbulent combustion modelling; Deep Neural Network

* Corresponding author. E-mail address: [email protected] (Zhi X. Chen).

§ INTRODUCTION

The utilization of finite rate chemistry (FRC) in combustion modeling typically yields a more comprehensive and accurate depiction of reaction processes and flame dynamics. However, detailed FRC modeling entails a substantial computational cost driven by the direct integration (DI) of stiff ordinary differential equations (ODEs) <cit.>.
To improve computational efficiency, various methods for chemical mechanism reduction <cit.> have been introduced. However, even with a reduced mechanism, which typically retains several tens of chemical species exhibiting markedly distinct chemical time-scales, DI of the associated ODEs remains an expensive task, constituting over 80% of the total computational cost in practical combustion simulations. To tackle the challenge of balancing accuracy and efficiency, recent strides in artificial intelligence for scientific applications, notably in machine learning (ML), have opened up innovative perspectives <cit.>. In the present work, we focus on employing ML models to replace DI with a similar level of accuracy. As a pioneering work, Christo et al. <cit.> developed a neural network (NN) to represent a three-step mechanism in the joint PDF simulation of turbulent jet flames, showcasing the substantial potential of ML in combustion modelling. Blasco et al. <cit.> proposed a self-organizing map (SOM) approach with enhanced precision in predicting the temporal evolution of reactive species. However, the aforementioned works relied on training the NN on the specific problem of interest, with a limited range of applicability. To address this issue, Sen et al. <cit.> obtained thermochemical states from direct numerical simulation (DNS) and Linear Eddy Mixing (LEM) model calculations, and proved successful in simulating syngas/air flames. Chatzopoulos et al. <cit.> collected samples from non-premixed laminar flames and used the SOM technique to train NN models. These models were subsequently applied to Reynolds Averaged Navier-Stokes (RANS)-PDF simulations of DLR jet flames. In a follow-up work, Franke et al. <cit.> integrated extinguishing flamelets into the training dataset. Recently, Wan et al. <cit.> generated training samples from a non-premixed micro-mixing canonical problem and then achieved good agreement with the DI approach in a syngas turbulent oxy-flame. Ding et al. <cit.> and Readshaw et al. <cit.> collected samples from numerous 1-D laminar flames and randomised these data for training using multiple NNs (three for each species). The resulting NNs were tested to be effective on methane/air flames, including a one-dimensional laminar flame and the Sandia turbulent jet flames. In contrast to sampling from low-dimensional flames, Zhang et al. <cit.> introduced a multi-scale sampling method to collect data from the full composition space. Owing to the powerful fitting capability of deep neural networks (DNNs), they trained a rather large model using a dataset comprising over 5 million samples, covering a broad composition space for hydrogen/air flames. With over 1.6 million model parameters, this DNN showed a good generalisation capability across a range of laminar and turbulent flames under various conditions. The usefulness of ML models essentially relies on achieving both high accuracy and generalisation ability. Previous studies employing a multi-layer perceptron (MLP) architecture, such as the SOM-MLP approach <cit.> and MMLP <cit.>, have demonstrated improved accuracy. Effective generalisation has also been achieved by collecting training data from simple canonical problems <cit.>. However, the validation of these ML models was predominantly limited to canonical problems and a specific multi-dimensional turbulent flame, leaving the generalization to other turbulent flame configurations unexplored.
Furthermore, prior research has primarily focused on a single and simple chemical system with a small number of chemical species, such as hydrogen, methane, and syngas. Limited attention has been given to the generalization ability across different and complex fuels, like large hydrocarbons, whose reduced mechanisms comprise dozens of chemical species. With this motivation, the primary objective of the present work is to develop a consistent and robust methodology for generating generic samples, training high-precision DNNs, and comprehensively assessing their validity across a spectrum of fuels ranging from simple hydrogen to complex kerosene and in different turbulent premixed flame configurations. Three DNNs are trained for reactive mixtures of hydrogen/air with a 9-species mechanism <cit.>, ethylene/air with a 24-species mechanism <cit.> and Jet-A/air with a 41-species mechanism <cit.>, respectively. The model deployment and a posteriori assessment are then performed using our recently developed open-source code DeepFlame <cit.>, which interfaces the OpenFOAM, Cantera and PyTorch libraries. Two typical turbulent premixed flame cases are considered: a temporally evolving jet flame and a propagating flame kernel in homogeneous isotropic turbulence (HIT). All DNN models, CFD codes and test cases presented in this study are made available for community data sharing and reproducibility [available at: https://github.com/deepmodeling/deepflame-dev].

The remainder of this paper is organised as follows. Section 2 discusses the methodology for generic DNN training. In Section 3, we present the turbulent case setups for model validation. In Section 4, the results for different fuels and chemical mechanisms are discussed. Conclusions are summarised in Section 5.

§ DEEP LEARNING METHODOLOGY

This section describes the step-by-step procedures for the proposed DNN approach, including training data generation and sampling, network design and learning, and model prediction testing. These are all kept consistent for the range of fuels and turbulent flame configurations considered in this study.

§.§ Thermochemical base state generation

Due to the highly non-linear and stiff nature of chemical ODE systems, the error tolerance allowed for the reaction rate integrator is extremely stringent. Thus, it is imperative to generate representative training data that encompasses a proper distribution of thermochemical states and thoroughly covers the relevant composition space. Instead of directly exploring the high-dimensional thermochemical sampling space, we first locate a low-dimensional manifold region as the base states. This does not necessarily imply a flamelet assumption but provides a good starting point for the sampling process. In this study, the thermochemical base states are collected from simulations of a set of canonical laminar premixed flames to ensure generalisation and also to minimise the complexity of data generation. This approach can be easily extended to diffusion flames and other canonical configurations. The computational domain of these 1-D laminar flames is initialised with premixed fuel/air mixture in one half, and equilibrium states are set in the other half. The initial conditions (i.e., temperature, pressure and equivalence ratio) are set according to the global parameters of the target turbulent flames. Simulations are conducted until steady states are reached.
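As a concrete illustration of this collection step, the short sketch below gathers (T, P, Y) states along steady Cantera free flames; this is a simplified stand-in for the unsteady 1-D runs described above, and the mechanism file and conditions are placeholders rather than the authors' setup.

import cantera as ct

def base_states(mech="gri30.yaml", fuel="C2H4", phi=1.0, T_u=500.0, P=101325.0):
    """Collect thermochemical states (T, P, Y) along a 1-D premixed flame."""
    gas = ct.Solution(mech)
    gas.set_equivalence_ratio(phi, fuel, "O2:1.0, N2:3.76")
    gas.TP = T_u, P
    flame = ct.FreeFlame(gas, width=0.02)            # 2 cm domain
    flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)
    flame.solve(loglevel=0, auto=True)
    # One state per grid point of the converged solution.
    return [(flame.T[i], P, flame.Y[:, i]) for i in range(len(flame.grid))]

states = []
for phi in (0.8, 1.0, 1.2):      # a set of equivalence ratios around stoichiometry
    states += base_states(phi=phi)
print(len(states), "base states collected")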
The simulated time of each 1-D flame is estimated to be around ten times the respective chemical time scale τ_chem=δ_L/S_L, and the temporal thermochemical states are sampled every 100 simulation time steps.

§.§ Data perturbation and augmentation

The thermochemical states obtained from laminar flames might not comprehensively cover the relevant composition space in a posteriori applications. More critically, these particular states follow an exact path in the sample space (essentially flamelet manifolds), and hence the trained model is susceptible to perturbations, i.e., deviations from the manifold in the thermochemical state. To address this issue and enhance model robustness, a data augmentation strategy is applied to perturb the collected states, mimicking multi-dimensional transport and turbulence perturbations. At each sample point, temperature, pressure and inert species are randomly perturbed using <cit.>

x_R = x + αβ(x_max - x_min),

where x_R and x represent the temperature, pressure or inert species mass fraction of the perturbed sample and the original sample, respectively. The perturbation amplitude α is user-specified and β is a uniformly randomised number within the range (-1,1). Given the significant changes in mass fraction magnitudes for reactive chemical species, a different exponential randomisation strategy is implemented:

y_R = y^(1+αβ),

where y_R and y represent the species mass fractions (excluding inert species) of the perturbed sample and the original sample, respectively. The resulting randomly generated mass fractions of these species are normalised to 1-x_R^N2 to ensure mass conservation. The perturbation amplitude α and the number of randomisations N_R can be adjusted according to the amount of collected states and the turbulence intensity of the flow field. In this work, we use α∈[0.1,0.15] and N_R=10 perturbations for all three fuels considered. This practice substantially enhances the coverage of the composition space. However, it may generate numerous nonphysical states if left unconstrained, which effectively lowers the model accuracy in the regions of interest. To address this, a threshold criterion, based on the heat release rate change between the original collected state and the perturbed state, is used to remove the nonphysical perturbed states for improved accuracy and generalization ability. The sample space distribution before and after the data augmentation is illustrated in Fig. <ref>, where the manifold-like orange symbols represent states collected from 1-D laminar flames, and the scattered blue symbols depict the physics-constrained random perturbation states.

§.§ Deep neural network

The DNN input layer includes temperature, pressure and species mass fractions, represented as x(t)={T(t), P(t), ℱ[Y(t)]}. The output layer consists of the change of species mass fractions over a given time step size, denoted as u^*[x(t);Δt]=ℱ[Y(t+Δt)] - ℱ[Y(t)], where ℱ is the Box-Cox transformation (BCT) <cit.> employed for the multi-scale species mass fractions. This transformation provides a more uniform distribution of the small-scale clustered sample data, thereby enhancing the performance of the neural network in predicting the species mass fraction changes. The training dataset D={x_i,u^*_i}^N_i=1 undergoes Z-score normalisation, involving the subtraction of the mean and division by the standard deviation, for both the input x_i and output u^*_i. Here, N represents the sample size. A minimal sketch of the perturbation rules and this input construction is given below.
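In the following Python sketch, the variable ranges standing in for (x_max - x_min), the Box-Cox exponent, and all helper names are our own illustrative choices, not the authors' implementation.

import numpy as np

def perturb_state(T, P, Y, i_n2, alpha=0.12, rng=None,
                  T_range=2000.0, P_range=1.0e5, Y_range=0.1):
    """Return one perturbed copy of a thermochemical state (T, P, Y)."""
    rng = np.random.default_rng() if rng is None else rng
    # Linear rule for temperature, pressure and the inert species (N2):
    T_r = T + alpha * rng.uniform(-1, 1) * T_range
    P_r = P + alpha * rng.uniform(-1, 1) * P_range
    Y_r = Y.copy()
    Y_r[i_n2] = np.clip(Y[i_n2] + alpha * rng.uniform(-1, 1) * Y_range, 0.0, 1.0)
    # Exponential rule for the reactive species: y_r = y**(1 + alpha*beta).
    react = np.arange(Y.size) != i_n2
    beta = rng.uniform(-1, 1, react.sum())
    Y_r[react] = Y[react] ** (1.0 + alpha * beta)
    # Renormalise the reactive mass fractions to 1 - Y_r(N2).
    Y_r[react] *= (1.0 - Y_r[i_n2]) / Y_r[react].sum()
    return T_r, P_r, Y_r

def box_cox(y, lam=0.1, tiny=1.0e-30):
    """Box-Cox transform F[.]; the exponent lam is an assumed value."""
    return ((y + tiny) ** lam - 1.0) / lam

# Network inputs x = {T, P, F[Y]}; inputs and targets are then Z-score
# normalised with training-set means and standard deviations.

In practice, each perturbed state would additionally be screened by the heat release rate criterion described above before entering the training set.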
To ensure accuracy in predictions across the output, each species mass fraction is trained and predicted individually; however, the DNNs are assembled into one integrated model after training. Each DNN consists of three hidden layers with 1600, 800 and 400 perceptrons. The activation function used for the network is the Gaussian Error Linear Unit (GELU), and hyper-parameter optimisation is performed using the Adam algorithm. It is important to note that the DNN predictions yield only the mass fractions. The temperature and density are subsequently computed based on enthalpy and the mass conservation laws. A common loss function for the DNN output constraint is given by

ℒ = (1/N)∑_i=1^N |u^*_i - u_i|,

where u_i is the DNN output. In this study, we incorporate three novel additional principles into the loss function, including mass fraction unity conservation, energy conservation, and considerations related to the heat release rate, which substantially improves the model accuracy. Generally, the training L1 loss on ℱ[Y(t)] is of the order of 10^-4, and more details can be found in the shared code included in the Supplementary Material.

§.§ A priori assessment

As a standard procedure in deep learning, once the DNN model is trained, an a priori test is performed to evaluate the prediction errors. This step is crucial before subsequent validation using reacting flow cases. As an example, here we consider the DNN model for the ethylene/air mixture; the results are similar for the other fuels. Figure <ref> shows the predictions for the fuel species C2H4, an important radical OH, and a product species CO2. It can be seen that the predicted values are in excellent agreement with the randomly chosen label values. The root mean square errors (RMSE) are of the order of 10^-7, which is expected to satisfy the chemical ODE requirement in various laminar and turbulent flames.

§ TEST CASE SETUP

To validate the trained DNN models in combustion simulations, two 2-D premixed turbulent flame cases are designed and described in this section.

§.§ Temporally evolving turbulent jet flame

Turbulent planar jets are prototypical free shear flows, which are widely used to study turbulence-chemistry interaction <cit.>. Here, we present a two-dimensional temporally evolving planar turbulent jet flame, considering the mixing and reaction processes of scalars in turbulent shear flows. A similar configuration has also been utilized by Saito et al. <cit.> to validate their DNN model for ammonia combustion. As shown in Fig. <ref>, a square computational domain of L=16 mm is considered, initially filled with stoichiometric fuel/air mixture in the central region and an equilibrium-state gas elsewhere. To initialise the turbulent shear flow, the internal velocity field is generated from a precursor non-reactive jet flow simulation using the synthetic eddy turbulence inflow generator and then superimposed onto the unburnt gas region. Periodic boundary conditions are applied on the left and right sides, while outlet conditions are set for the top and bottom boundaries. The domain is discretised with 800×550 grids, with a minimum grid size of 20 μm to ensure proper resolution of the flame front. The grid is uniform in the x-direction and stretched at both ends in the y-direction.

§.§ Ignition in homogeneous isotropic turbulence

The second test case involves a flame kernel ignition of premixed mixture in two-dimensional homogeneous isotropic turbulence (HIT).
This simulation setup features an ignition-to-propagation transition process, highlighting the turbulence effect on the flame evolution <cit.>. Therefore, it serves as a challenging validation case for the DNN models. In the simulation, a square computational domain of L×L=10π×10π mm^2 is used, initialised with premixed stoichiometric fuel/air mixture at a given temperature and pressure. To ignite the mixture, a circular hot spot with a radius of L/10 filled with equilibrium gases is placed in the center of the domain. The HIT generation approach in <cit.> is adopted, and the fully evolved velocity field is then mapped to the computational domain as the initial flow field. Boundary conditions are set to zero gradient for temperature and species mass fractions, and a non-reflective wave transmissive condition is used for pressure and velocity. The domain is uniformly discretised with 1024×1024 grids to ensure good resolution for both the flame and the turbulence. The simulations are continued until the full domain is ignited.

§ RESULTS AND DISCUSSION

This section presents extensive validations of the proposed methodology and DNN models in various 1-D laminar flames and 2-D turbulent flames coupling the effects of convection, stretching and turbulence. All cases are scale-resolved using detailed numerical simulation with detailed chemistry and mixture-averaged transport <cit.>. The predictions using DNN models are validated against the results obtained through DI using the Cantera CVODE solver. For conciseness, results for the relatively simple hydrogen/air combustion are provided in the Supplementary Material.

For ethylene/air combustion, thermochemical states are collected from three laminar flame cases with an unburnt gas temperature T_u of 500 K and a pressure of 1 atm. The equivalence ratios for the three cases are 0.8, 1.0 and 1.2, respectively. Subsequently, approximately 200,000 state points are collected, perturbed and augmented to generate a training set of around 800,000 samples. A time-step size of 10^-7 s is used for DNN model prediction. The training process typically evolves for 2000 epochs with an initial learning rate of 0.001, which is reduced by a factor of 10 every 200 epochs. For Jet-A combustion, the unburnt gas temperature is specified as 800 K, while keeping the other conditions consistent with the ethylene case. Next, the resulting DNN models are tested comprehensively in 1-D laminar flames and 2-D turbulent flames.

§.§ 1-D premixed laminar flame

In Fig. <ref>, the spatial distributions of temperature and major species mass fractions are presented for the ethylene/air laminar flames at T_u=300 K. The flame front position and flame structure predicted by the DNN model align closely with the results obtained through direct integration using CVODE. This demonstrates the high precision of the trained DNN models in predicting chemical kinetics in premixed laminar flames. Furthermore, Fig. <ref> presents the laminar flame speeds for the ethylene/air mixture at various equivalence ratios ranging from 0.6 to 1.3, extending beyond the sampling range (0.8 to 1.2). The predictions by the DNN model closely match those from CVODE, demonstrating the excellent robustness of the data perturbation and augmentation approach, which significantly enhances the generalization and even extrapolation capability of the DNN model.

§.§ 2-D evolving jet flame

Figure <ref> depicts the contours of heat release rate in the evolving jet flame for stoichiometric ethylene/air mixture.
Iso-lines of the key intermediate CH2O at a mass fraction of 5×10^-4 are also shown to assess the prediction of fine flame structure. Again, an excellent agreement is observed between the DNN and DI results for both time instants considered. For a more quantitative comparison, the conditional heat release rate and mass fractions in progress variable space are plotted in Fig. <ref>. The progress variable is defined using the combined mass fractions of H2O and CO2 normalised by their burnt values. It can be seen that all the conditional profiles given by the DNN agree well with the DI results, suggesting a high accuracy under turbulent flame conditions. In addition, the turbulent burning velocity is calculated using

S_T = (1/A) ∫_V ω̇_T / [c_p(T_b - T_u)] dV,

where ω̇_T is the heat release rate, c_p is the specific heat capacity at constant pressure, T_b and T_u are the burnt and unburnt gas temperatures, respectively, and A is the equivalent flame front area. The temporal evolution of S_T in Fig. <ref> shows no observable difference between the predictions of the DNN model and CVODE, further confirming the exceptional precision of the DNN model.

§.§ 2-D HIT flame

The DNN model for Jet-A/air combustion is examined using the configuration of the HIT flame ignition case. Figure <ref> depicts the contours of the temporal temperature distribution and the vorticity of the flow field for the stoichiometric mixture, to qualitatively compare the predictions of flame front structures and the effect of turbulence-chemistry interactions. As seen, the flame front propagation and flame wrinkling behaviour predicted using the DNN model show close agreement with those using CVODE. Next, a quantitative comparison is performed on the turbulent burning velocity and presented in Fig. <ref>. It can be observed that the turbulent burning rates still exhibit satisfactory agreement, with a maximum prediction error of 7% for this more complex chemistry.

§.§ Computational acceleration analysis

Following the above accuracy validations, the efficiency gain of the DNN approach is discussed. The overall computational time for a representative time period (t = 0.3 to 0.4 ms) in the propagating HIT flames is considered as an example. For the comparison shown here, the simulations with DI are run with 16 CPUs (AMD Zen1), while the simulations with DNN models use one GPU (NVIDIA RTX 4090) for inference. As seen in Table <ref>, the DNN models achieve a speed-up factor of approximately 50 on chemistry calculations and 11 on overall calculations using one GPU for the ethylene/air case. Additionally, a higher speed-up factor of 90 on chemistry calculations and 11.6 overall is observed for the larger Jet-A/air mechanism. The percentage of chemistry in the overall time cost is significantly reduced from 87% to 12% when the DNN is employed. This highlights a superior computational acceleration with the DNN models and suggests an increasing speed-up effect with the complexity of the chemical mechanism.

§ CONCLUSIONS

This work proposed a consistent and robust ML methodology designed for developing DNN models across diverse fuels and turbulent flames, and comprehensive validations were conducted to evaluate the model precision and generalisation capabilities. The methodology involves collecting thermochemical base states from a small set of canonical laminar flames.
To enhance the coverage of these base states in composition space for high-dimensional flames, an effective strategy involving random perturbation and data augmentation is employed, along with an essential constraint on heat release rate changes to eliminate nonphysical perturbed states. The resulting high-quality training samples serve as the input for the DNN training, wherein special considerations of physical principles, including mass fraction unity conservation, enthalpy conservation, and error constraints on the heat release rate, are integrated into the hyperparameter optimisation process. A thorough validation process was conducted, encompassing a range of laminar and turbulent flame configurations, across different chemical systems from simple hydrogen to complex jet fuel. The results obtained from the DNN models generally exhibit excellent agreement with those from direct integration, showcasing the high accuracy and generalisation of the proposed deep learning approach as a versatile tool for chemical reaction rate integration. Furthermore, computational acceleration using this approach provides a promising speed-up factor of 50 to 90 for chemistry integration and approximately 10 for the overall simulation. This observation indicates that stiff chemistry calculations will no longer be the computational bottleneck for turbulent flame simulations with high-precision DNN models.

While the approach and results presented in this work are quite encouraging, there remains room to further improve the DNN model accuracy, particularly for complex fuels. Future work will also be directed towards model compression and optimisation to further increase the computational efficiency. Additionally, ongoing efforts are being dedicated to incorporating model inference into the solving procedure of the partial differential equations (PDEs), aiming to achieve a full computation on the GPU. With the memory copy overhead eliminated, a further one to two orders of magnitude speed-up can be achieved.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Supplementary material

A supplementary file is attached.
[email protected] Instituto de Geociências e Ciências Exatas, Unesp - Univ Estadual Paulista, Departamento de Estatística, Matemática Aplicada e Computação. Av. 24-A, Rio Claro/SP/Brazil, CEP 13506-900.[cor1]Corresponding author In this work, we study the dynamics of rotation of the small satellites Methone and Aegaeon and revisit previous works on the rotation of Prometheus, Metis, and Amalthea. In all cases, the surfaces of section computed with the standard spin-orbit model reveal that the synchronous regime shares another large domain in the rotation phase space. We reproduce and apply the hamiltonian theory given in Wisdom (2004) to analytically characterize the detected structure as being a secondary resonance where the period of oscillations around the synchronism is similar to the orbital period. Being that the current rotational states of this sort of satellite should be synchronous (Thomas and Helfenstein 2020), our results can be taken into account in evolutionary studies of their rotation.Aegaeon; Amalthea; Metis; Methone; Prometheus; Saturnian and Jupiter small satellites; Spin-Orbit resonance. § INTRODUCTION Consider the rotation of a rigid and homogeneous satellite orbiting a punctual planet in the case the satellite rotates around a single axis corresponding to its largest moment of inertia. The mutual perturbations between the bodies belonging to multiple systems or due to the non-sphericity of the secondary are neglected in this work, such that the motion of the system is governed by the laws of the two-body problem (Goldreich and Peale 1966).A hamiltonian for the sort of problem is given by: H(θ,p,t)=p^2/2C-ϵ^2n^2C/4[/r(t)]^3 cos2[θ-f(t)],where θ is an angle of rotation of the satellite measured from an inertial line; p=Cθ̇ is the angular momentum of rotation around the z-axis assumed to be perpendicular to the orbital plane, and θ̇ is the angular velocity of rotation. A<B<C are the moments of inertia around the principal axis of the satellite around the x, y, z, respectively; , e, f(t), r(t)=(1-e^2)/1+ecos f(t), n are the semi-major axis, orbital eccentricity, true anomaly, planet-satellite distance, and the mean-motion of the satellite, respectively.It is well known that the dynamics of rotation of mid-sized satellites of the outer planets and the Moon are mainly characterized by synchronous orbit-rotation resonances (see Peale 1977, Melnikov and Shevchenko 2022). The exact synchronism occurs when the angular velocity of rotation of the satellite equals its mean motion, n. In practice, the synchronous resonance of the satellites is characterized by the physical libration of the angle Ψ(t)=θ(t)-f(t).Ψ contains all main components of the spin-orbit perturbations, namely, the optical, free, and forced librations (see Callegari and Ribeiro 2015, and references therein). In the case of a small amplitude of libration of Ψ and small orbital eccentricity of the satellite, the physics of the synchronous resonance can be interpreted by an analog of the forced harmonic oscillator (see Murray and Dermott 1999). Suppose also to model the shape of the satellite with a Roche ellipsoid with semi-axes a, b, and c in the x, y, z directions, respectively; in this case, a>b>c (see Callegari and Rodríguez 2013, and references therein). Denote the free frequency by ω (which corresponds to the frequency of the oscillator in the harmonic approximation). The average linear theory[W.r.t. mean anomaly and valid for small eccentricity.] 
of synchronous resonance predicts that the ratio of the free and orbital frequencies, ω/n, is given by:

ϵ ≡ ω/n = [3(B-A)/C]^1/2 = [3(a^2-b^2)/(a^2+b^2)]^1/2.

Thus, when ϵ is a rational number, an inner resonance can arise due to the commensurability between n and ω. We have therefore a definition for a secondary resonance within the synchronous regime (in the case of a synchronously rotating satellite). The approximation of a simple oscillator is no longer enough to describe the rotational dynamics in this situation, and additional analyses (e.g., perturbation theory) are necessary to describe the dynamics of rotation in more detail.

The case of the Saturnian satellite Enceladus studied in Wisdom (2004) is a good example to illustrate the dynamics of secondary resonances. Under the hypothesis of a homogeneous body, and based on Voyager spacecraft data, the dimensions of Enceladus are a=256.3±0.3 km, b=247.3±0.3 km, c=244.6±0.3 km (Dermott and Thomas 1994). From Equation (<ref>) we obtain ϵ∼0.327, a value close to 1/3, so that Wisdom conjectured whether Enceladus would be currently rotating close to the ω/n=1/3 secondary resonance[After Cassini images, the amplitude of the physical libration of Enceladus has been detected (∼0.12 degree; Thomas et al. 2016). This value is too high and therefore is not consistent with a homogeneous satellite. The best fitting given in Thomas et al. (2016) corresponds to a multi-layered satellite including a global subsurface ocean.].

While it is certain that the current rotational states of all mid-sized satellites of the giant planets are characterized by the synchronous resonance, the same is not necessarily true for all other smaller secondary companions having mean radii of the order of a few dozen kilometers. Several of them are irregularly shaped, and due to this, ϵ is large. Let us consider the case of Amalthea (JV), a member of the inner group of Jupiter satellites composed of Metis, Adrastea, Amalthea, and Thebe. Amalthea is significantly out of round, with dimensions a×b×c of the order of ∼125 × ∼73 × ∼64 km, such that ϵ∼1.214; these values were obtained after analyses of the Galileo tour in the Jupiter system (Thomas et al. 1998; see also Pashkevich et al. 2021, and references therein). The phase space of rotation of Amalthea shows the existence of two resonant modes within the synchronism. Melnikov and Shevchenko denote these modes the "α-resonance" and "β-resonance", and named this property of the synchronous regime the "Amalthea effect". The existence of such a kind of "bifurcation" inside the synchronous domain leads to questions and discussions on topics related to the evolution of rotation and the true current equilibrium configuration of the satellite (see discussions in Melnikov and Shevchenko 2002, and references therein). Another body with a similar rotational phase space is Prometheus (Melnikov and Shevchenko 2008), a close-in Saturnian satellite with a mean radius of 42.8 km (see Table 1).

The objects we study in this project belong to another class of even smaller inner satellites of the Jovian planets, having mean diameters of a few kilometers or even less. We consider in this work the Saturnian satellites Methone (S/2004 S 1) and Aegaeon (S/2008 S 1), discovered by the Cassini spacecraft in 2004 (Porco 2004) and 2008 (Porco 2009), respectively. Similarly to Amalthea and Prometheus, their phase spaces also display the co-existence of the α and β-"resonances".
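Since everything depends on the shape only through ϵ, these values are immediate to check; a minimal sketch with the dimensions quoted above:

import numpy as np

def eps_from_shape(a, b):
    """omega/n from the ellipsoid semi-axes a > b (the c-axis does not enter)."""
    return np.sqrt(3.0 * (a**2 - b**2) / (a**2 + b**2))

print(f"Enceladus: eps = {eps_from_shape(256.3, 247.3):.3f}")   # ~0.327, close to 1/3
print(f"Amalthea : eps = {eps_from_shape(125.0, 73.0):.3f}")    # ~1.214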
Methone, jointly with Aegaeon and Pallene, is classified as an ellipsoidal-like satellite[The other denominations are irregular (e.g. Prometheus, Pandora, Epimetheus, Janus, Telesto, Calypso, and Helene) and irregular with equatorial ridges (e.g. Atlas, Pan, Daphnis) (Thomas and Helfenstein 2020).]. We will focus next, and throughout the paper, on the case of Methone; at the end of the paper, similar results will be shown and discussed for Aegaeon and Prometheus. The cases of Amalthea and Metis will also be revisited given the theory developed here.

To investigate the rotation of Methone, we apply the standard model of spin-orbit resonance. We utilize the Everhart (1985) algorithm to solve numerically the rotation differential equations derived from hamiltonian (<ref>). The full equation is non-autonomous, so at first we compute the surfaces of section (hereafter denoted by SOS), a practical procedure often applied in studies of rotation after Wisdom et al. (1984). In this technique, the pair of generalized variables (θ,θ̇/n) is plotted every time the satellite passes through the pericenter of its orbit. Figure <ref>a) shows the SOS for Methone in the vicinity of the synchronous domain. The fixed points of the α and β-"resonances" are indicated by vertical arrows. Note that the regimes are separated by a new separatrix (red curve in Fig. <ref>a)), while they are encompassed by a thin chaotic layer associated with the synchronous domain.

The main goal of this work is to investigate the nature of the α and β-regimes in the case of the satellite Methone. Both share the synchronous domain, so it is worth giving a kinematic description of the rotation of clones of Methone departing from initial conditions very close to their fixed points in order to identify their differences. Figs. <ref>(b,c) show the time variations of θ̇/n and Ψ=θ-f corresponding to the β and α-regimes, respectively. The initial conditions are θ_0=0 in both cases, and θ̇(0)/n=1, θ̇(0)/n=0.45 in b) and c), respectively.

In the case of the β-regime (Fig. <ref>b)), Ψ oscillates around zero with an amplitude of ∼0.04 radian (or ∼2 degrees), and θ̇/n oscillates with an amplitude of ∼0.036. In both variables there is a "long-term" mode resulting from the close values of the fundamental frequencies n and ω. In fact, from Equation (<ref>) we have ω=nϵ and ϵ∼1.077. Denoting by T, P, T_ω the period of the beating, the orbital period, and the period of the free libration, respectively, we obtain T∼13.2 days from the relation 1/T=1/P-1/T_ω, considering P∼1.014 days and T_ω=P/ϵ.

In the case of the α-regime (Fig. <ref>c)), Ψ oscillates around zero with an amplitude of ∼0.5 radian (or ∼29 degrees). θ̇/n oscillates around unity with an amplitude of ∼0.5, such that at the minima (indicated by black points in Fig. <ref>(c)), the satellite is always located at the pericenter of its orbit (open circles in Fig. <ref>(c)). Thus, due to this relatively large amplitude of oscillation of θ̇/n, the SOS displays θ̇/n∼0.45, the same as the initial value.

Since Methone's phase space of rotation displays additional structures within the synchronous regime, all the discussion given above leads us to investigate the existence of secondary resonances. According to the most recent analyses of data taken from Cassini measurements, Thomas and Helfenstein (2020) determined that the dimensions of Methone are a=1.94±0.02 km, b=1.29±0.04 km, c=1.21±0.02 km, so that ϵ∼1.077.
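For reference, the SOS construction described above can be reproduced in a few lines. In the sketch below, scipy's DOP853 integrator stands in for the Everhart (1985) code, the units are scaled so that n=1 and C=1, and the eccentricity value is illustrative since the adopted orbital elements are not quoted in this section.

import numpy as np
from scipy.integrate import solve_ivp

eps, e, n = 1.077, 0.005, 1.0      # shape parameter; e is illustrative; time in 1/n

def true_anomaly(t):
    """f(t): solve Kepler's equation M = E - e*sin(E) by fixed-point iteration."""
    M = n * t
    E = M
    for _ in range(25):
        E = M + e * np.sin(E)
    return 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(0.5 * E),
                            np.sqrt(1.0 - e) * np.cos(0.5 * E))

def rhs(t, state):
    """theta' = p/C and p' = -(eps^2 n^2 C/2)(a/r)^3 sin 2(theta - f), with C = 1."""
    theta, p = state
    f = true_anomaly(t)
    a_over_r = (1.0 + e * np.cos(f)) / (1.0 - e**2)
    return [p, -0.5 * eps**2 * n**2 * a_over_r**3 * np.sin(2.0 * (theta - f))]

P_orb = 2.0 * np.pi / n
for th0, v0 in [(0.0, 1.0), (0.0, 0.45)]:   # seeds near the beta and alpha fixed points
    state, section = np.array([th0, v0 * n]), []
    for k in range(500):                     # one section point per pericenter passage
        sol = solve_ivp(rhs, (k * P_orb, (k + 1) * P_orb), state,
                        method="DOP853", rtol=1e-10, atol=1e-12)
        state = sol.y[:, -1]
        section.append((state[0] % (2.0 * np.pi), state[1] / n))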
This value of ϵ (∼1.077) is close to unity, so we conjecture that a 1:1 secondary resonance of the type ω/n∼1/1 may explain the existence of the α and β structures within the synchronous domain. To prove the conjecture we return to Wisdom's (2004) paper. He developed a simple analytical model for the spin-orbit problem in hamiltonian formalism, aiming to explain the phase space of Enceladus in the close vicinity of the synchronous regime. Having in hand an expanded version of the hamiltonian of Equation (<ref>), he selected the correct terms and explained analytically the properties of the secondary 1/3 resonance. We follow his steps and reproduce his main results, and we choose those terms in the hamiltonian which are proportional to the arguments containing the 1/1 secondary resonance. By adopting a methodology similar to that given in Wisdom (2004), we successfully explain the co-existence of the α and β-regimes in the synchronous resonance by comparing the SOS with level curves of the constructed hamiltonian[It is worth noting that the mapping of the 1:1 secondary resonance within the synchronous regime has been done in many works where very sophisticated models have been adopted (see Gkolias et al. 2016, Gkolias et al. 2019, Lei 2023).].

The presentation of the results of this paper is divided as follows. In Section 2, we deduce the expanded hamiltonian developed in Wisdom (2004) in detail, showing the terms of the hamiltonian explicitly (those related to equation 20 of the referred paper); some extensions of Wisdom's model are also given. In Sections 3.1 and 3.2, we show exactly how the perturbed hamiltonian can give rise to the secondary resonances and write the hamiltonian for the 1/1 secondary resonance. The level curves applied in the case of Methone are shown in Section 3.3. Analytical estimates of the bifurcations of the level curves are also provided. In Section 4 we analyze the cases of Aegaeon and Prometheus, and Section 5 is devoted to conclusions and a more general discussion involving other satellites. Since the phase spaces of rotation of Methone, Aegaeon, Prometheus, Amalthea, Metis, and others share similar structures and properties within the synchronous regime, we are led to question their current states of rotation.

§ EXPANSION OF THE HAMILTONIAN (<REF>)

§.§ An integrable hamiltonian

The hamiltonian (<ref>) is analogous to that of a simple non-autonomous pendulum. An expanded version of the hamiltonian can be obtained, resulting in a sum of autonomous-like pendulums. This is easily achieved after substituting in hamiltonian (<ref>) the developed forms of [a/r(t)]^3, cos f(t), sin f(t):

[a/r(t)]^3 ≈ 1 + 3e cos nt + (3/2)e^2(1+3cos 2nt),
cos f(t) ≈ cos nt + e(cos 2nt - 1) + (9/8)e^2(cos 3nt - cos nt),
sin f(t) ≈ sin nt + e sin 2nt + (1/8)e^2(9 sin 3nt - 7 sin nt).

We have:

H(θ,p,t) = p^2/2C - (ϵ^2n^2C/4)(17e^2/2)cos(2θ-4nt) - (ϵ^2n^2C/4)(7e/2)cos(2θ-3nt) - (ϵ^2n^2C/4)(1-5e^2/2)cos2(θ-nt) - (ϵ^2n^2C/4)(-e/2)cos(2θ-nt) + … .

Each term on the right side of (<ref>) proportional to ϵ^2 can be considered a distinct disturbing resonance associated with the commensurabilities θ̇=2n, θ̇=(3/2)n, θ̇=n and θ̇=n/2, defining therefore the 2:1, 3:2, 1:1 (synchronous), and 1:2 spin-orbit resonant states. We can isolate a single resonance after applying the averaging principle (see footnote 1), and we note that, whatever the resonance, the resulting hamiltonian is always analogous to a simple non-autonomous pendulum. Canonical transformations can make each of the resonances given in Equation (<ref>) locally integrable.
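The eccentricity coefficients in the expanded hamiltonian above can be verified symbolically. The sympy sketch below expands (a/r)^3 cos2f in the mean anomaly, using the standard elliptic expansions of f and r/a to O(e^2), and recovers the four coefficients -e/2, 1-5e^2/2, 7e/2 and 17e^2/2.

import sympy as sp

e, M = sp.symbols('e M', real=True)
f = M + 2*e*sp.sin(M) + sp.Rational(5, 4)*e**2*sp.sin(2*M)   # true anomaly, O(e^2)
r_over_a = 1 - e*sp.cos(M) + (e**2/2)*(1 - sp.cos(2*M))      # r/a, O(e^2)

expr = sp.series((1/r_over_a)**3 * sp.cos(2*f), e, 0, 3).removeO()
for k in range(1, 5):
    c_k = sp.integrate(sp.expand_trig(expr) * sp.cos(k*M), (M, 0, 2*sp.pi)) / sp.pi
    print(k, sp.simplify(sp.expand(c_k)))
# expected: k=1 -> -e/2, k=2 -> 1 - 5*e**2/2, k=3 -> 7*e/2, k=4 -> 17*e**2/2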
For instance, consider the synchronous resonance and define H_0 by:

H_0(θ,p,t) = p^2/2C - (ϵ^2n^2C/4)(1-5e^2/2)cos2(θ-nt).

Given the canonical transformation

ϕ = θ - nt,
Φ = p - nC,

the new H'_0 is given by:

H'_0(ϕ,Φ,-) = -nΦ + (Φ+nC)^2/2C - (ϵ^2n^2C/4)cos2ϕ = Φ^2/2C - (ϵ^2n^2C/4)cos2ϕ,

where the term -nΦ comes from the phase space extension (see Sussman and Wisdom 2001, Ferraz-Mello 2007). We have also neglected the term in e^2, since the orbital eccentricity of Methone is of the order of 10^-3. Moreover, an irrelevant additive constant (n^2C/2) is not included in (<ref>).

Thus, we have an integrable hamiltonian for the synchronous spin-orbit resonance. Since the hamiltonian (<ref>) is analogous to the simple autonomous pendulum, we can use the equations of transformation to the action-angle variables developed for the simple pendulum, which are given in several books. In this work, we consider only the libratory regime in the case of small oscillations.

The action variable J is defined as the area on the plane (ϕ,Φ) divided by 2π. The conjugate angle ψ varies linearly in time. The new hamiltonian depends only on the action and can be written as follows:

H''_0(-,J̃) = (ϵ^2n^2C/4)(-1 + J̃ - (1/16)J̃^2 - (1/256)J̃^3 + …),

where

J̃ = 4J/(ϵnC),

such that

H''_0(-,J) = -ϵ^2n^2C/4 + ϵnJ - J^2/(4C) - J^3/(16ϵnC^2) + ….

The algebraic expression relating the new variables (ψ,J) to the old ones (ϕ,Φ) is:

sin2ϕ ≈ 2ϕ = F_1 sinψ + F_3 sin3ψ + …,

where

F_1 = (2J̃ + (1/4)J̃^2 + (27/512)J̃^3 + …)^1/2 ≈ √(2J̃) + (1/32)(√(2J̃))^3,
F_3 = (1/192)[(2J̃)^3(1 + (9/16)J̃ + …)]^1/2 ≈ (1/192)(√(2J̃))^3 + ….

Equations (<ref>)-(<ref>) have been taken from Wisdom (2004). A proof of them can be obtained after applying the pendulum model to the hamiltonian (<ref>) in the small oscillation approximation (see, for instance, appendix B.3 in Ferraz-Mello 2007).

§.§ Perturbation of the integrable hamiltonian

Let us consider now the time-dependent perturbations on the integrable part of the synchronous resonance of the hamiltonian, such that

H'(ϕ,Φ,t) = H'_0 + H'_1,

where H'_0 is given in (<ref>) and H'_1 is the perturbation. We will include in H'_1 the two neighboring resonances of the synchronous resonance, namely, the 3:2 and the 1:2 (see Equation (<ref>)). In terms of the old variables (ϕ,Φ), we can obtain from (<ref>):

H'_1(ϕ,Φ,t) = -(ϵ^2n^2C/4)[(7e/2)cos(2ϕ-nt) - (e/2)cos(2ϕ+nt)].

To write H'_1 in terms of the action-angle variables we must consider the transformation equation (<ref>). The algebraic manipulations are much simpler in the limit of small oscillations (Wisdom 2004). Thus, first write (<ref>) as follows:

H'_1(ϕ,Φ,t) = -(ϵ^2n^2C/4)[(7e/2)(C_2 cos nt + S_2 sin nt) - (e/2)(C_2 cos nt - S_2 sin nt)],

C_2 = cos2ϕ ≈ 1 - (1/2)(2ϕ)^2, S_2 = sin2ϕ ≈ 2ϕ - (1/6)(2ϕ)^3.
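Before carrying out the substitution, note that the cubic correction in F_1 can be checked symbolically; a minimal sympy sketch supporting the 1/32 coefficient quoted above:

import sympy as sp

Jt = sp.symbols('Jt', positive=True)          # Jt stands for J-tilde
F1 = sp.sqrt(2*Jt + Jt**2/4 + sp.Rational(27, 512)*Jt**3)
ratio = sp.series(F1 / sp.sqrt(2*Jt), Jt, 0, 2).removeO()
print(sp.expand(ratio))                        # -> 1 + Jt/16
# Since sqrt(2*Jt)*(Jt/16) == (sqrt(2*Jt))**3/32, we recover
# F1 ~ sqrt(2*Jt) + (1/32)*(sqrt(2*Jt))**3.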
After substituting the expression (<ref>) in (<ref>) and the result in (<ref>), we can obtain the perturbation H''_1(ψ,J,t) as follows:

H''_1(ψ,J,t) = ϵ^2n^2Ce [
 (3/16)F_1^2 cos nt + (3/16)F_3^2 cos nt - (3/4)cos nt
 + ( -(1/2)F_1 + (1/16)F_1^3 - (1/16)F_1^2F_3 + (1/8)F_1F_3^2 )cos(nt-ψ)
 + ( -(3/32)F_1^2 + (3/16)F_1F_3 )cos(nt-2ψ)
 + ( (1/16)F_3^3 - (1/48)F_1^3 + (1/8)F_1^2F_3 - (1/2)F_3 )cos(nt-3ψ)
 - (3/16)F_1F_3 cos(nt-4ψ)
 + ( (1/16)F_1F_3^2 - (1/16)F_1^2F_3 )cos(nt-5ψ)
 - (3/32)F_3^2 cos(nt-6ψ)
 - (1/16)F_1F_3^2 cos(nt-7ψ)
 - (1/48)F_3^3 cos(nt-9ψ)
 + ( (1/2)F_1 - (1/16)F_1^3 + (1/16)F_1^2F_3 - (1/8)F_1F_3^2 )cos(nt+ψ)
 + ( (3/16)F_1F_3 - (3/32)F_1^2 )cos(nt+2ψ)
 + ( -(1/16)F_3^3 + (1/48)F_1^3 + (1/2)F_3 - (1/8)F_1^2F_3 )cos(nt+3ψ)
 - (3/16)F_1F_3 cos(nt+4ψ)
 + ( (1/16)F_1^2F_3 - (1/16)F_1F_3^2 )cos(nt+5ψ)
 - (3/32)F_3^2 cos(nt+6ψ)
 + (1/16)F_1F_3^2 cos(nt+7ψ)
 + (1/48)F_3^3 cos(nt+9ψ) ].

Note that at this step we are not yet utilizing the expanded versions of F_1 and F_3 given on the right side of Equations (<ref>) and (<ref>).

§ PERTURBED HAMILTONIAN AND THE RISE OF SECONDARY RESONANCES

§.§ The 1:1 secondary resonance

The transformation variable given in Equation (<ref>), ϕ=θ-nt, is an approximation of the physical libration Ψ valid for small eccentricity. For a small amplitude of oscillation, the frequency of the angle variable ψ is the frequency of ϕ. Thus, Equation (<ref>) can be utilized to study several secondary resonances. Inspection of (<ref>) shows terms whose cosine arguments are of the form nt-kψ, where k=1,2,…. Therefore, whenever the frequency of ψ is close to n/k, we should have a secondary resonance.

For instance, after collecting the terms proportional to cos(nt-3ψ),

H''_1(ψ,J,t) = -(ϵ^2n^2Ce/4)( (1/12)F_1^3 + 2F_3 - (1/4)F_3^3 - (1/2)F_1^2F_3 )cos(3ψ-nt),

we obtain the disturbing part of the hamiltonian associated with the 3:1 secondary resonance utilized in the theory developed by Wisdom (2004) to study the rotation of Enceladus (see Section 1).

The problem we are considering in this work is related to the nature of the α and β regimes of motion in the phase space of rotation of Methone and, as pointed out in Section 1, we conjectured that the coexistence of the regimes is explained by a 1:1 secondary resonance. We will prove our conjecture in the next two subsections after considering the terms factored by cos(ψ-nt) in (<ref>), so that we have the following disturbing part of the hamiltonian:

H''_1(ψ,J,t) = -(ϵ^2n^2Ce/4)( 2F_1 - (1/4)F_1^3 + (1/4)F_1^2F_3 - (1/2)F_1F_3^2 )cos(ψ-nt).

§.§ The second extension of the phase space and the non-singular variables

Consider the canonical transformation

ψ' = ψ - nt,
J' = J,

and define

Ĵ = J'/(nC),
δ = ϵ - 1,

where Ĵ is a dimensionless quantity (Wisdom 2004), and δ can be interpreted as the distance from the exact 1:1 secondary resonance.

After collecting the terms in (<ref>) up to order J'^2, neglecting the constant terms, and factoring out the parameter n^2C, we have the final form in angle-action variables of the hamiltonian (<ref>) for the 1:1 secondary resonance:

H'''_1:1(ψ',Ĵ) = n^2C [ δĴ - Ĵ^2/4 + ( (3/8)e√(ϵ)(√(2Ĵ))^3 - e√(ϵ^3)√(2Ĵ) )cosψ' ].

Define

H ≡ H'''_1:1/(n^2C).

It is straightforward to prove that H is dimensionless, so that H depends only on two parameters: ϵ and the orbital eccentricity e. Let us consider the canonical transformation:

x = √(2Ĵ)cosψ',
y = √(2Ĵ)sinψ'.
Thus, the hamiltonian (<ref>) becomes:

H(x,y) = (δ/2)(x^2+y^2) - (1/16)(x^2+y^2)^2 + (3/8)e√(ϵ)(x^2+y^2)x - e√(ϵ^3)x.

§.§ Atlas of the phase space

Having in hand a time-independent, one-degree-of-freedom hamiltonian for our problem, we can now explore the rotation phase space by computing the level curves of Equation (<ref>). We also aim to compare the level curves with the surfaces of section on a suitable plane of variables, which can be the same utilized in Fig. <ref>, (θ,θ̇/n). To relate them to the variables (x,y) (Equation (<ref>)), we first recall that in the linear approximation of the harmonic oscillator the relation between the variables (ϕ,Φ) (Equation (<ref>)) and the angle-action pair (ψ,J) is given by the canonical transformation

ϕ = (2J/α)^1/2 sinψ,
Φ = (2Jα)^1/2 cosψ,
α = ϵnC.

By fixing an instant of time (t=0, for instance), manipulating Equations (<ref>), (<ref>), (<ref>), and noting that at t=0 we have ψ'=ψ (Equation (<ref>)), we can show that

θ = y/√(ϵ),
θ̇/n = x√(ϵ) + 1.

Fig. <ref> shows the level curves of the hamiltonian (<ref>) for six values of ϵ. Let ϵ_c be a critical value of ϵ. For ϵ<ϵ_c, only the equilibrium point associated with the α-regime exists. The rise of the β-regime occurs at values of ϵ slightly larger than ϵ_c, and then three equilibria share the y-axis: two of them are stable points, namely those associated with the α and β-regimes, and the third one is an unstable equilibrium point associated with the separatrix of the β-regime. The plot in the bottom-middle panel of Fig. <ref> reproduces with good agreement the surface of section given in Fig. <ref>. Next we will study in detail the bifurcation of the regimes in the phase space and determine ϵ_c numerically.

§.§ The forcing of the β-regime and the quasi-synchronous regime

The loci of the equilibria centers (fixed points) of the α and β-regimes in the phase space can be obtained analytically from the equations of motion:

dx/dt = +∂H/∂y = -(1/4)(x^2+y^2)y + δy + 2γ_1xy,
dy/dt = -∂H/∂x = +(1/4)(x^2+y^2)x - δx - 3γ_1x^2 - γ_1y^2 + γ_2,

where

γ_1 = (3/8)e√(ϵ), γ_2 = e√(ϵ^3).

The equilibrium conditions (dx/dt,dy/dt)=(0,0) lead to a system of non-linear algebraic equations, and equilibrium solutions on the x-axis, where y=0, must satisfy the equation:

x^3/4 - 3γ_1x^2 - δx + γ_2 = 0.

The three solutions of Equation (<ref>) are all real only for values of ϵ larger than ϵ_c. For ϵ<ϵ_c, only one root is real and there is only one fixed point in the phase space, namely, the one related to the α-regime. At ϵ=ϵ_c the β-regime and the unstable point arise. Considering the current ϵ=1.077 of Methone, the solutions of Equation (<ref>) are (x_1,x_2,x_3)=(-0.5616685, 0.02034028, 0.5478663). Substituting these roots into Equation (<ref>) we obtain θ̇/n=0.4171 (x_1: α-regime), θ̇/n=1.0211 (x_2: β-regime), and θ̇/n=1.5686 (x_3: separatrix). The three values agree with those obtained after inspection of the level curves given in Fig. <ref>. Fig. <ref> shows the real solutions calculated in this way in the range 0.96<ϵ<1.08. We can see that the bifurcation occurs at ϵ_c∼1.0149. Note also the quasi-symmetry of the roots x_1 and x_3.
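The roots quoted above are easily reproduced numerically. In the sketch below the orbital eccentricity is set to e=0.0014, an illustrative value (the adopted eccentricity is not quoted in this section) that reproduces the forced offset discussed next.

import numpy as np

eps, e = 1.077, 0.0014                 # e is an illustrative value (see text)
delta = eps - 1.0
g1 = (3.0 / 8.0) * e * np.sqrt(eps)    # gamma_1
g2 = e * eps**1.5                      # gamma_2

# Fixed points on the x-axis: x^3/4 - 3*g1*x^2 - delta*x + g2 = 0
roots = np.sort(np.roots([0.25, -3.0 * g1, -delta, g2]).real)
labels = ("alpha regime", "beta regime", "separatrix")
for x, lab in zip(roots, labels):
    print(f"{lab:12s}: x = {x:+.5f}  ->  theta_dot/n = {1.0 + x * np.sqrt(eps):.4f}")

A = 2.0 * e / (1.0 - eps**-2)          # linear forced amplitude (cf. next paragraphs)
print(f"linear estimate: theta_dot/n = {1.0 + A:.4f}")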
More generally, note that in the case of non-circular orbits, x=0 is never a solution of Equation (<ref>), so that from Equation (<ref>) the forcing is never null in this case (recall the definition of γ_2). The forced component in the SOS can also be explained by the classical linear theory of the average solution of the spin-orbit dynamics within the synchronous resonance. The forced harmonic oscillator-analogue model gives θ̇/n = 1+A, where the amplitude of the forced component A is given by: A = 2ω^2e/(ω^2-n^2) = 2e/(1-ϵ^-2) (Callegari and Rodríguez 2013; see also equation 5.123 in Murray and Dermott 1999). From Equation (<ref>) we obtain θ̇/n=1.0203, showing good agreement with the value obtained above. § ADDITIONAL APPLICATION: AEGAEON AND PROMETHEUS. Fig. (<ref>) shows the rotational phase space around the synchronous regime in the cases of Aegaeon (ϵ∼1.5236) and Prometheus (ϵ∼1.1727). As pointed out in Section 1, both rotational phase spaces display the α and β-regimes. Inspection of the left and middle panels shows that the positions of the three fixed points of the α and β-regimes obtained from the level curves (middle) agree with the surfaces of section in the case of Prometheus, but the same is not true in the case of Aegaeon[Note that our model does not allow us to obtain the loci of the separatrix of the synchronous regime since it is valid, by construction, only in the interior of the resonance.]. To improve the model we include terms of order higher than J^2 in the actions, so that the hamiltonian (<ref>) becomes: H(x,y) = (δ/2)(x^2+y^2) - (1/16)(x^2+y^2)^2 + (3/8)e√(ϵ)(x^2+y^2)x - e√(ϵ^3)x - (1/(128ϵ))(x^2+y^2)^3 + (17/96)√(ϵ)e(x^2+y^2)^5/2, where terms up to J^3 have been included (corresponding to the last two terms in Equation (<ref>)). A significant improvement can be seen, mainly in the case of Aegaeon. As discussed in Lei (2023), and first shown in Gkolias et al. (2019), in the specific case of the 1:1 secondary resonance, the analytical and numerical mappings of the associated fixed points diverge for values of ϵ>1.2.§ CONCLUSIONS This work shows the results of the dynamics of rotation of the out-of-round, close-in small Saturnian satellites Methone and Aegaeon discovered by the Cassini spacecraft. Their shapes and physical parameters have been updated recently by Thomas and Helfenstein (2020). As pointed out in Section 1, the main problem we are considering in this work is related to the nature of the α and β-regimes of motion in the phase space of rotation of these satellites, already detected for Amalthea, Metis, and Prometheus (and probably others - see Melnikov and Shevchenko 2022, and references therein). We first conduct our investigation with the well-known model of rotation developed by Goldreich and Peale (1966) and applied by many authors to distinct sorts of problems through the decades (e.g. Wisdom et al. 1984, Wisdom 2004, Callegari and Rodríguez 2013). The exact equations of the 1.5 degree-of-freedom model are solved numerically with the Everhart (1985) code, giving very accurate rotational trajectories from which the main properties of the phase space can be identified with the surface of section technique. The β-regime is located close to the origin and corresponds to the "classical" fixed point associated with the synchronous rotation motion, where the amplitude of oscillation of the physical angle Ψ=θ-f is null at the equilibrium. In the case of Methone, its center is slightly forced by an amount ∼0.02 such that θ̇/n∼1.02, a value which can be confirmed by linear theory (Wisdom et al. 1984; Callegari and Rodríguez 2013).
The α-regime engulfs the β-regime and is located far from the origin, at θ̇/n∼0.45, such that θ̇/n oscillates around 1. Also, interestingly, the amplitude of variation of Ψ is not null at the equilibrium point of the α-regime, keeping an amplitude of the order of 30 degrees. To understand the complexity of such a rotation phase space of Methone and Aegaeon, we apply the model developed by Wisdom (2004) for the rotation of Enceladus. Thus, we follow exactly his steps and reproduce his results; having completed this task, we could apply the methodology to our case. Our hamiltonian (<ref>) generalizes Wisdom's theory, showing all terms at that order of expansion (second order in the action). Hamiltonian (<ref>) also includes terms up to J^3 in the actions. The terms in the expansion which are proportional to the 1:1 resonance between the frequency of the physical libration Ψ and the mean motion n are responsible for the coexistence of the α and β regimes in the current rotation phase spaces of Methone and Aegaeon. Therefore, we have shown the existence of a secondary resonance within the synchronism, completing our initial goals. Figures (<ref>) and (<ref>) show the dependence of the domains of the α and β regimes on the shape parameter of Methone. The β-regime appears at ϵ=ϵ_c∼1.0149, such that for ϵ<ϵ_c only the α-regime exists. While there is good agreement between numerical and analytical calculations in the case of Methone, the same is not true for Aegaeon, where the model is useful only when higher-order terms are included in the hamiltonian, since ϵ>1.2, an upper limit for the convergence of the analytical estimates (Gkolias et al. 2019). We revisit the previous works on the rotation of Prometheus, Amalthea, and Metis in light of the current theory. The case of Metis is similar to that of Methone since ϵ∼1.0742 (see Table 1). The case of Amalthea is similar to that of Prometheus since ϵ∼1.214 (see Table 1). As we pointed out above, trajectories within this regime of motion are forced and suffer "long-term" oscillations even in the case of initial conditions located on the fixed point. Since the current rotational states of Saturnian close-in small satellites are probably synchronous (Thomas and Helfenstein 2020), our results can be taken into account in evolutionary studies of the rotation of these bodies. Tidal models (e.g. Ferraz-Mello et al. 2008, Ferraz-Mello 2013) can be applied to estimate the final destiny of the rotation of such small bodies and also the role of the secondary resonances on their thermal emission (Wisdom 2004). ACKNOWLEDGEMENTS. This work was supported by the São Paulo Research Foundation (FAPESP), process 2020/06807-7. The author thanks Prof. Adrián Rodríguez (Valongo Observatory, UFRJ/Brazil). Part of this work has been presented at the 'XXI Brazilian Colloquium on Orbital Dynamics' (INPE, December 12 to 16, 2022) and at 'New Frontiers of Celestial Mechanics: theory and applications' (Department of Mathematics Tullio Levi-Civita, University of Padua, February 15 to 17, 2023). CallRodr Callegari Jr., N., Rodríguez, A. Dynamics of rotation of super-Earths. Celest. Mech. Dyn. Astr. 116, 389-416 (2013). CallRib Callegari Jr., N., Ribeiro, F. B. Computational and Applied Mathematics 34, 423-435 (2015). Call2021 Callegari Jr., N., Rodríguez, A., Ceccatto, D. T. The current orbit of Methone (S/2004 S 1). Celest. Mech. Dyn. Astr. 133:49 (2021). Call2023 Callegari Jr., N., Rodríguez, A. The orbit of Aegaeon and the 7:6 Mimas-Aegaeon resonance. Celest. Mech. Dyn. Astr. 135:21 (2023).
Cec2021 Ceccatto, D. T., Callegari Jr., N., Rodríguez, A. (2021). The current orbit of Atlas (SXV). Proceedings of the International Astronomical Union, IAU Symposium, 15, 120-127. DT Dermott, S. F., Thomas, P. C. The Determination of the Mass and Mean Density of Enceladus from its Observed Shape. Icarus 109, 241- (1994). Everhart 1985 Everhart, E. An efficient integrator that uses Gauss-Radau spacings. In: IAU Colloquium 83, 185-202 (1985). Ferraz-Mello 2007 Ferraz-Mello, S. Canonical Perturbation Theories - Degenerate Systems and Resonance. Series: Astrophysics and Space Science Library, Vol. 345. Springer US (2007). Ferraz-Mello et al. 2008 Ferraz-Mello, S., Rodríguez, A., Hussmann, H. Tidal friction in close-in satellites and exoplanets: The Darwin theory re-visited. Celest. Mech. Dyn. Astr. 101, 171-201 (2008). Ferraz-Mello 2013 Ferraz-Mello, S. Tidal synchronization of close-in satellites and exoplanets. A rheophysical approach. Celest. Mech. Dyn. Astr. 116, 109-140 (2013). GK 2016 Gkolias, I., Celletti, A., Efthymiopoulos, C., Pucacco, G. The theory of secondary resonances in the spin-orbit problem. MNRAS 459, 1327 (2016). GK 2019 Gkolias, I., Efthymiopoulos, C., Celletti, A., Pucacco, G. Accurate modelling of the low-order resonances in the spin-orbit problem. Communications in Nonlinear Science and Numerical Simulation 77, 181 (2019). GP Goldreich, P., Peale, S. Spin-orbit coupling in the solar system. The Astronomical Journal 71, 425-437 (1966). Lei Lei, H. Dynamical structures associated with high-order and secondary resonances in the spin-orbit problem. https://arxiv.org/pdf/2312.14413.pdf (2023). Melnikov Shevchenko 2008 Melnikov, A. V., Shevchenko, I. I. On the rotational dynamics of Prometheus and Pandora. Celest. Mech. Dyn. Astr. 101, 31-47 (2008). Melnikov Shevchenko 2022 Melnikov, A. V., Shevchenko, I. I. Rotational Dynamics and Evolution of Planetary Satellites in the Solar and Exoplanetary Systems. Solar System Research 56, No. 1, pp. 1-22 (2022). Murray and Dermott 1999 Murray, C. D., Dermott, S. F. Solar System Dynamics, Cambridge University Press (1999). Pashkevich et al. 2021 Pashkevich, V. V., Vershkov, A. N., Mel'nikov, A. V. Rotational Dynamics of the Inner Satellites of Jupiter. Solar System Research 55, Issue 1, p. 47-60 (2021). Peale 1986 Peale, S. J. Rotation Histories of the Natural Satellites. In: Planetary Satellites. Proceedings of IAU Colloq. 28, held in Ithaca, NY, August 1974. Edited by J. A. Burns. University of Arizona Press, 1977, p. 87. Porco 2004 Porco, C. C. S/2004 S 1 and S/2004 S 2. IAU Circ. 8401 (2004 August 16) (2004). Porco 2009 Porco, C. C. S/2008 S 1. IAU Circ. 9023 (2009 March 3) (2009). Press et al. 1996 Press, W. H., Teukolsky, S. A., Vetterling, W. T., Flannery, B. P.: 'Numerical Recipes in Fortran 77'. Cambridge University Press (1996). Sussman and Wisdom 2001 Sussman, G. J., Wisdom, J., with Mayer, M. E.: Structure and Interpretation of Classical Mechanics. The MIT Press, Cambridge, Massachusetts (2001). Thomas1998 Thomas, P. C., Burns, J. A., Rossier, L., Simonelli, D., Veverka, J., Chapman, C. R., Klaasen, K., Johnson, T. V., Belton, M. J. S., Galileo Solid State Imaging Team. The Small Inner Satellites of Jupiter. Icarus 135, 360-371 (1998). Thomas2020 Thomas, P. C., Helfenstein, P. The small inner satellites of Saturn: Shapes, structures and some implications. Icarus 344, 113355 (2020). Wisdom1984 Wisdom, J., Peale, S. J., Mignard, F. The chaotic rotation of Hyperion. Icarus 58, 137-152 (1984). Wisdom2004 Wisdom, J.
Spin-Orbit Secondary Resonance Dynamics of Enceladus. The Astronomical Journal 128, 484-491 (2004).
http://arxiv.org/abs/2312.16137v2
{ "authors": [ "Nelson Callegari Jr" ], "categories": [ "astro-ph.EP", "math-ph", "math.MP", "nlin.CD" ], "primary_category": "astro-ph.EP", "published": "20231226175116", "title": "A Hamiltonian for 1:1 Rotational Secondary Resonances, and Application to Small Satellites of Saturn and Jupiter" }
The multi-channel Kondo effect was first discussed by Nozières and Blandin <cit.> to take orbital degrees of freedom into account in the Kondo effect. The effect of multiple channels may lead to a non-Fermi liquid (NFL) ground state, in contrast to the local Fermi liquid (LFL) state <cit.> of ordinary Kondo systems. Since then, many studies have been performed on the multi-channel Kondo effect, and its singular behavior has been theoretically elucidated using various approaches such as the numerical renormalization group (NRG) <cit.>, conformal field theory <cit.>, and the Bethe ansatz <cit.>. However, it is not easy to find model materials realizing a two-channel Kondo effect, hindered by the instability of the NFL state against channel asymmetries <cit.>. Thus, the design of physical systems realizing such states is an important challenge in strongly correlated systems, as is the experimental verification of the NFL state. For realizing the two-channel Kondo effect, D. L.
Cox proposed the use of a quadrupole moment of an ion with an f^2-electron configuration such as U^4+ or Pr^3+, and this is called the quadrupolar Kondo effect. The Kondo coupling works in the orbital degrees of freedom, while the conduction electron spins σ ∈ {↑,↓} correspond to the channel degrees of freedom. The twofold degeneracy of the ionic ground state is a nonmagnetic one of non-Kramers type and is guaranteed when the crystal field has a high symmetry <cit.>. Recent experiments reported a few behaviors suggesting the quadrupolar Kondo effect in the 1-2-20 compounds PrV_2Al_20 <cit.> and PrIr_2Zn_20 <cit.>. However, large intersite interactions modify their physical properties away from the behaviors of the two-channel impurity model. They are candidate materials realizing the quadrupolar Kondo lattice (QKL), and several of their properties are consistent with the theoretical predictions for QKL's <cit.>. Furthermore, a recent experiment reported the impurity quadrupole Kondo effect in the diluted Pr system Y_1-xPr_xIr_2Zn_20 <cit.>. One should note one important point about the orbital symmetry of an impurity quadrupole in a lattice. We consider here the case of a cubic lattice. Despite the discrete symmetry of the crystal field, the quadrupole has a continuous O(2) orbital symmetry if the Γ_3 doublet alone is considered. Taking into consideration the singlet excited state, this elevated symmetry is reduced to Z_3, which is equivalent to the lattice symmetry. As for quadrupole lattices, it was found that this Z_3 orbital anisotropy plays an important role in stabilizing various types of quadrupole orders <cit.>. However, this effect has been neglected in most of the theoretical studies on the quadrupole Kondo effect, except for a few studies which have investigated its effects on the NFL state for a quadrupole model in either a tetragonal or hexagonal crystal field. They used the NRG method <cit.> or a variational approach <cit.> and found a non-vanishing part of the parameter space where the NFL state is stable. This contrasts with the instability of the NFL fixed point against an infinitesimal channel anisotropy. In this paper, we study the quadrupole Kondo effect with special attention to the effect of the orbital Z_3 anisotropy. To this end, we consider an impurity with the excited Γ_1 singlet and the ground-state Γ_3 doublet, {|Γ_3u⟩, |Γ_3v⟩, |Γ_1⟩}. Our model differs from those used in the previous studies. Koga and Shiba used a realistic model considering the total angular momentum j = 5/2 of conduction electrons <cit.>. Their model is quite complicated and also has a lower symmetry, since they considered a hexagonal or tetragonal crystal field. We will study an extended quadrupole Kondo model (EQKM) which includes the singlet excited state Γ_1. Its NRG Hamiltonian reads H_N = Λ^1/2 H_N-1 + ∑_ασ ( f^†_N-1,ασ f_N,ασ + H.c. ), Λ^1/2 H_0 = J ∑_σ [ ( f^†_0uσ f_0vσ + f^†_0vσ f_0uσ ) Q_x + ( n_0uσ - n_0vσ ) Q_z ] + Δ |Γ_1⟩⟨Γ_1|, where Δ is the excitation energy of the Γ_1 state. The chemical potential is set to 0, corresponding to the half-filled conduction band, and H_N has the particle-hole symmetry. Here, f^†_nασ is the creation operator of an electron in orbital α ∈ {u,v} with spin σ ∈ {↑,↓} at the n-th site of the Wilson chain <cit.>, and n_0ασ = f^†_0ασ f_0ασ. Λ = 3 is the scaling factor at each RG step. The impurity quadrupole operators are defined as Q_z = |Γ_3u⟩⟨Γ_3u| - |Γ_3v⟩⟨Γ_3v| + ( a|Γ_3u⟩⟨Γ_1| + H.c. ) and Q_x = ( a|Γ_1⟩ - |Γ_3u⟩ )⟨Γ_3v| + H.c., with a = √(35)/2, and they constitute a basis set of the Γ_3-irreducible representation (irrep) of the cubic point group. We now discuss the symmetry of H_N.
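As a concrete illustration of the quadrupole operators just defined, the sketch below builds Q_z, Q_x, and the impurity part of the O(2) generator Q_y as 3x3 matrices in the basis (|Γ_3u⟩, |Γ_3v⟩, |Γ_1⟩), and checks numerically that the doublet-sector relation [Q_y, Q_z] = 2iQ_x is violated once the a-terms coupling to Γ_1 are present. The matrix elements follow the reconstruction above and are an assumption made for illustration, not necessarily the paper's exact convention.

```python
import numpy as np

def impurity_operators(a):
    # Basis ordering: (|G3u>, |G3v>, |G1>); matrix elements are an assumption.
    Qz = np.array([[1.0, 0.0, a],
                   [0.0, -1.0, 0.0],
                   [a, 0.0, 0.0]], dtype=complex)
    # Q_x = (a|G1> - |G3u>)<G3v| + H.c.
    Qx = np.zeros((3, 3), dtype=complex)
    Qx[2, 1] = a
    Qx[0, 1] = -1.0
    Qx = Qx + Qx.conj().T
    # Impurity part of the O(2) generator: i|G3u><G3v| + H.c.
    Qy = np.zeros((3, 3), dtype=complex)
    Qy[0, 1] = 1j
    Qy = Qy + Qy.conj().T
    return Qz, Qx, Qy

for a in (0.0, np.sqrt(35.0) / 2.0):
    Qz, Qx, Qy = impurity_operators(a)
    residual = np.linalg.norm(Qy @ Qz - Qz @ Qy - 2j * Qx)
    print(f"a = {a:.4f}:  ||[Qy, Qz] - 2i Qx|| = {residual:.4f}")
# The residual vanishes for a = 0 (O(2) intact within the doublet) and is
# nonzero for a = sqrt(35)/2, i.e., the Gamma_1 admixture breaks O(2).
```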
At each NRG step, we decompose the Hilbert space into subspaces using conserved quantities and diagonalize H_N in each subspace. Once H_0 includes the matrix elements of Q_z and Q_x involving Γ_1, H_N loses the O(2) symmetry in the Q_x-Q_z space, and thus its generator, Q_y^tot = i( |Γ_3u⟩⟨Γ_3v| + ∑_nσ f^†_nuσ f_nvσ ) + H.c., becomes non-conserved, while the SU(2) spin rotation symmetry is unaffected. Thus, for the subspace decomposition, we use the pair of quantum numbers { C } = (C_↑, C_↓), where C_σ = ∑_0≤n≤N ∑_α=u,v ( f^†_nασ f_nασ - 1/2 ) is the electron number for the spin direction σ minus (N+1). In our NRG calculations, we retained about 1620 total states at each iteration N. We have also used a special trick to reduce numerical errors. The SU(2) spin symmetry guarantees, for each multiplet with S^tot > 0, the degeneracy of the eigenenergies in the subspaces with -S^tot ≤ S_z = (1/2)(C_↑ - C_↓) ≤ S^tot. Thus, after diagonalization at each iteration, we reset the corresponding eigenenergies to their average value. This drastically improved the stability of the NRG procedure. Without this trick, the system sometimes flows towards a false fixed point. Identification of the phase requires the analysis of the RG fixed point. For each (J,Δ)-point, we start from H_0 and repeat NRG iterations until the low-energy spectrum converges. Figure <ref>(a) plots the RG energy flows for (J,Δ) = (1.0,1.0), and this is the behavior of the LFL fixed point. The ground state is a singlet with { C } = (0,0). The spectrum can be described by quasi-particle excitations, and thus the levels are equally separated by the single-particle energy η_1 = 0.80041. For even N, the first excited multiplet consists of the states with { C } = (±1,0) or (0,±1), and each { C }-state is doubly degenerate. The second (third) excited multiplet consists of the states with double- (triple-)particle excitations and has a 28- (56-)fold degeneracy. For odd N, the multiplets specified by { C } now have different energies. All of these results perfectly agree with the fixed-point spectrum of the spin-1 two-channel Kondo model <cit.>, and the ground state is identified as the LFL state. Figure <ref>(b) plots the energy flows for (J,Δ) = (0.1,10.0), and the results show the behavior of the NFL fixed point for N > 38. In this case, each multiplet does not change { C } with N, and the ground state is now a doublet with { C } = (0,0). The first excited multiplet has 4 states with { C } = (±1,0) or (0,±1), while the second excited multiplet has 10 states with { C } = (0,0), (±1,0), or (0,±1). This agrees with the spectrum at the NFL fixed point <cit.>. Thus, when the Γ_1 excitation energy becomes large, Δ ≫ J, the Γ_1 state is effectively decoupled and the system is reduced to the ordinary quadrupole Kondo model with two symmetric spin channels <cit.>. The J-Δ phase diagram is determined following the above procedure, and the result is shown in Fig. <ref>. The LFL and NFL phases are separated by a smooth boundary J_c(Δ) that becomes straight at large Δ. This can be understood by considering the strong coupling limit, which is described by the local term H_0. We can easily diagonalize it and find a singlet ground state for J/Δ > 2/27 = R_c and a doublet ground state for J/Δ < R_c. The degeneracy is elevated to a triplet at the level crossing point.
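The quoted LFL degeneracies follow from simple counting: with 4 spin-orbital flavors there are 8 elementary excitations of energy η_1 (a particle or a hole in each flavor), and the n-th multiplet collects all ways of combining n distinct ones. A minimal sketch of this check (the free-quasiparticle picture is from the text; the enumeration itself is an illustration):

```python
from itertools import combinations

# 4 spin-orbital flavors (u/v orbital x up/down spin); each flavor supports
# one particle-type (+1) and one hole-type (-1) excitation of energy eta_1.
flavors = [(orb, spin) for orb in "uv" for spin in "ud"]
elementary = [(f, q) for f in flavors for q in (+1, -1)]  # 8 excitations

for n in (1, 2, 3):
    multiplet = list(combinations(elementary, n))
    print(f"energy {n}*eta_1: {len(multiplet)} states")
# Prints 8, 28, 56, matching the first, second, and third excited
# multiplets quoted above (8 = four {C}-sectors, each doubly degenerate).
```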
It is clear that the asymptotic form in the large-Δ region is J_c ∼ R_cΔ. The small coupling part (small Δ and J) is the most interesting, but it is not easy to precisely determine J_c(Δ) due to the slow convergence of the NRG iterations, which is enhanced by a small energy scale. The boundary shows a singular Δ-dependence near the origin. If this dependence is a simple power, J_c ∼ Δ^α, the exponent may be determined by analyzing the three data points for the smallest Δ values, and the result is 0.15 < α < 1.4, although this is a rough estimate due to large error bars. For fitting the boundary in the whole range of 0 < Δ < 10, we have tried several functions, and the result is quite good with J_c = ( a_2Δ^2 + a_3Δ^3 + a_4Δ^4 )^1/4, where a_2 = 0.0013(1), a_3 = 0.58(9) × 10^-4, and a_4 = 0.27(12) × 10^-4. Its small-Δ asymptotic form is a power with the exponent α = 1/2, which falls in the window determined above. Finally, the J=0 part of the phase diagram is the free electron phase, where the local impurity is completely decoupled from the conduction electrons. The EQKM has two phases in the parameter space even though the system does not change its channel symmetry, and this differs from the canonical behavior of the two-channel spin Kondo model. In order to explore other novel behaviors, we investigate in detail the scaling of the Kondo temperature T_K(J) in the LFL phase near the phase boundary. To define T_K, we follow Wilson's original idea of crossover and keep track of the NRG flow of some energy levels <cit.>. We here choose the first excited state in the subspace of {C} = (0,0) and record its energy flow E_1(N) with the NRG step N. We then define the crossover step N^∗ by the relation E_1(N^∗) = (1 ± 1/5) E_1^∞, where E_1^∞ is the value at the strong coupling fixed point. In practice, we fitted E_1(N) by the function E_1^∞ + b_1 Λ^-N/2 + b_2 Λ^-N, obtained a fractional value for N^∗, and defined the Kondo temperature by T_K = tΛ^-N^∗/2. The energy unit is set to t = 1 as explained before. Figure 3 shows the energy flow of E_1(N) for 101 points in the region 0.010 < J-J_c(Δ) < 0.111. All the plots fall on the universal curve within small numerical errors, and this confirms that the low-energy physics is governed by the same fixed point. The scaling form of the Kondo temperature is T_K ∼ A'J^b'exp(-c'/J) with b' = 1/2 for the simplest case, i.e., the spin-1/2 single-channel Kondo model (SCKM) <cit.>. Let us first examine whether T_K in the EQKM follows this scaling. Note that J should be replaced by the distance from the phase boundary, δJ. Figure <ref>(a) shows the scaled form f(δJ) = δJ log[ T_K(J)/(δJ)^b' ] for the fixed value Δ = 1. If the above scaling works, f(δJ) should show a straight line, but the result for b' = 1/2 is very nonlinear, and this indicates a different scaling form. To check this point, we repeated the same procedure for the SCKM to determine its T_K and plotted the result in the same panel. This time, the result shows a nice linear behavior, confirming the expected scaling form for the SCKM. We have tried many scaling functions for fitting T_K(J) of the EQKM. It turned out that the most promising one was a stretched exponential, T_K ∝ F(δJ)exp(-c/√(δJ)), with the prefactor F(δJ) = A(δJ)^b. The parameters are optimized by minimizing the fitting error for log T_K, and the results are listed in Table <ref> for three Δ values. Using these optimized parameters, Fig. <ref>(b) shows Y ≡ -1/log[ T_K/F(δJ) ], which is expected to be close to √(δJ)/c.
If this is the case, the double-logarithmic plot of Y(δJ) should show a straight line with slope 1/2, and one can see that this works nicely in panel (b). Another promising scaling function is the ordinary Kondo form with a generalized exponent, shown in Fig. <ref>(a). The fitting error is minimized to 0.463 by using the exponent b' = 2.25(1) together with log A' = 2.59(4), c' = 0.0244(2), and J_c = 0.210(1), but this form is unacceptable. Recall that 1/c' corresponds to the effective conduction electron density of states, and its value seems unphysically large. Another reason is that the fitting error is more than 3 times larger than in the case of the stretched exponential form. These facts strongly support the proposed stretched exponential as the best scaling function. In order to consider the origin of the stretched exponential scaling, it is instructive to recall that this type of scaling also appears at the Kosterlitz-Thouless transition <cit.>, which is governed by two parameters (i.e., the stiffness δK = K-K_c and the vortex fugacity g). Their RG equations have β-functions which start from the second-order terms, and this is common to those of the SCKM with spin anisotropy <cit.>. As the bare couplings change with temperature T and cross the separatrix line at T_KT, the scaling function shows the factor exp(±const./√(T-T_KT)) in the disordered phase. Thus, we may expect that the RG equations of multiple coupling constants explain the observed scaling behavior if their β-functions have a proper form. It is natural to consider three effective coupling constants for the present EQKM. One is the excitation energy Δ and the other two are the quadrupole exchange constants J_33 and J_31. The impurity quadrupole operators in H_0 consist of two parts: Q_μ = Q_μ^(33) + a Q_μ^(31) (μ = z, x), where a = √(35)/2 is the amplitude of the Γ_3-Γ_1 transition. At each RG operation, the exchange couplings J_33 and J_31 are renormalized, and the renormalization is generically different between them. The construction and analysis of their RG equations is an important step towards a better understanding of the unconventional scaling in the EQKM. However, it is beyond the scope of the present study and we leave it for future study. Following the same procedure, we also analyzed the Kondo temperature in the NFL phase at Δ = 2.0. We once again used the first excited energy E_1(N) in the subspace { C } = (0,0) to determine T_K, using the same definition <cit.>. The energy flow is shown in Fig. 3 for 40 points in the region 0.012 < δJ < 0.051. All the plots once again fall on the universal curve. This time, however, some plots move away for large N-N^∗(δJ), but this is due to numerical errors. In the NFL phase, the ground state is a doublet, and the two states appear in the same { C } subspace. Numerical errors lift this degeneracy and destabilize the RG flows. Figure <ref> shows the asymptotic slope -0.2473(2), different from the value -1/2 in the LFL region. This indicates the RG eigenvalue y ∼ -1/4 for the leading irrelevant operator around the NFL fixed point. Figures <ref>(a) and (b) show that the proposed scalings work quite reasonably. However, the determined values of the exponents, b' = -6.4(5) and b = -19(1), seem unphysical. This may indicate the possibility of yet another, distinct scaling behavior, but we also leave this point for future studies. We can further confirm the different T_K scaling in the LFL phase of the EQKM by analyzing an effective β-function. Suppose T_K is determined by the RG equation of one parameter δJ(N) alone, dδJ/dN = β(δJ), as in the case of the SCKM.
Then, the definition T_K = Λ^-N^∗/2 leads to the relation β = (logΛ/2)[ d(log T_K)/dJ ]^-1, evaluated at J = J_c + δJ. We numerically calculated this, and the determined β(δJ) is shown in Fig. <ref>. As for the SCKM, it is known that its β-function starts from a marginally relevant term, β(J) = ρJ^2 + c_3J^3 + ⋯ <cit.>. The determined β(δJ) apparently differs from this conventional form, as shown by the divergent behavior of β/(δJ)^2. The small-δJ part is approximated by a simple power (δJ)^ν with the fractional exponent ν = 1.43(3). This nonanalyticity of the effective β-function exhibits the failure of a single-parameter RG equation. Thus, one needs to consider coupled RG equations of multiple coupling constants, which was also anticipated from the determined scaling function of stretched exponential form. In summary, we have performed an NRG study of the quadrupole Kondo effect taking into account the crystal-field Γ_1 singlet state with excitation energy Δ. The determined phase diagram has two phases: a local Fermi liquid for J ≫ Δ and a non-Fermi liquid for J ≪ Δ. We have also found a new scaling form of the Kondo temperature in the local Fermi liquid phase, and it is a stretched exponential function. To achieve a full understanding of this behavior, we need to analyze the RG equations of multiple coupling constants and explore a related boundary conformal theory, as well as perform numerical calculations with higher precision. These are important future studies. Acknowledgements The main part of the numerical calculations was performed using the facility at the Supercomputer Center, ISSP, the University of Tokyo. Nozieres1980 P. Nozières and A. Blandin, J. Phys. (Paris) 41, 193 (1980). nozieres1974 P. Nozières, J. Low Temp. Phys. 17, 31 (1974). wilson1975 K. G. Wilson, Rev. Mod. Phys. 47, 773 (1975). Bulla2008 R. Bulla, T. A. Costi, and T. Pruschke, Rev. Mod. Phys. 80, 395 (2008). Pang1991 H. B. Pang and D. L. Cox, Phys. Rev. B 44, 9454 (1991). Affleck1990 I. Affleck, Nucl. Phys. B 336, 517 (1990). Affleck1991a I. Affleck and A. W. Ludwig, Nucl. Phys. B 352, 849 (1991). Affleck1991b I. Affleck and A. W. W. Ludwig, Phys. Rev. Lett. 67, 161 (1991). Affleck1991c I. Affleck and A. W. Ludwig, Nucl. Phys. B 360, 641 (1991). Andrei1984 N. Andrei and C. Destri, Phys. Rev. Lett. 52, 364 (1984). Tsvelick1985 A. M. Tsvelick and P. B. Wiegmann, J. Stat. Phys. 38, 125 (1985). Schlottmann1995 P. Schlottmann and P. D. Sacramento, Physica B 206, 95 (1995). Cragg1980 D. M. Cragg, P. Lloyd, and P. Nozières, J. Phys. B 13, 803 (1980). Affleck1992 I. Affleck, A. W. Ludwig, H. B. Pang, and D. L. Cox, Phys. Rev. B 45, 7918 (1992). Cox1987 D. L. Cox, Phys. Rev. Lett. 59, 1240 (1987). Cox1988 D. L. Cox, Physica C 153, 1642 (1988). Cox1988a D. L. Cox, J. Magn. Magn. Mater. 76, 53 (1988). Sakai2011 A. Sakai and S. Nakatsuji, J. Phys. Soc. Jpn. 80, 063701 (2011). Onimaru2011 T. Onimaru, K. T. Matsumoto, Y. F. Inoue, K. Umeo, T. Sakakibara, Y. Karaki, M. Kubota, and T. Takabatake, Phys. Rev. Lett. 106, 177001 (2011). Onimaru2016 T. Onimaru, K. Izawa, K. T. Matsumoto, T. Yoshida, Y. Machida, T. Ikeura, K. Wakiya, K. Umeo, S. Kittaka, K. Araki, T. Sakakibara, and T. Takabatake, Phys. Rev. B 94, 075134 (2016). Tsuruta2015 A. Tsuruta and K. Miyake, J. Phys. Soc. Jpn. 84, 114714 (2015). Yamane2018 Y. Yamane, T. Onimaru, K. Wakiya, K. T. Matsumoto, K. Umeo, and T. Takabatake, Phys. Rev. Lett. 121, 077206 (2018). Hattori2014 K. Hattori and H. Tsunetsugu, J. Phys. Soc. Jpn. 83, 034709 (2014). Koga1995 M. Koga and H. Shiba, J. Phys. Soc. Jpn. 64, 4345 (1995). Koga1996 M. Koga and H.
Shiba, J. Phys. Soc. Jpn. 65, 3007 (1996). Koga1997 M. Koga and H. Shiba, J. Phys. Soc. Jpn. 66, 1485 (1997). Kusunose1998 H. Kusunose, J. Phys. Soc. Jpn. 67, 61 (1998). kosterlitz1973 J. M. Kosterlitz and D. J. Thouless, J. Phys. C 6, 1181 (1973). Hewson1993 A. C. Hewson, The Kondo Problem to Heavy Fermions (Cambridge University Press, 1993). Cox1998 D. L. Cox and A. Zawadowski, Adv. Phys. 47, 599 (1998). Anderson1970 P. W. Anderson, J. Phys. C 3, 2436 (1970).
http://arxiv.org/abs/2312.15936v1
{ "authors": [ "Yuki Kaneko", "Hirokazu Tsunetsugu" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20231226080134", "title": "Numerical renormalization group study of Quadrupole Kondo effect with the crystal-field excited state" }
http://arxiv.org/abs/2312.16408v1
{ "authors": [ "Anjie Gao", "Hai Tao Li", "Ian Moult", "Hua Xing Zhu" ], "categories": [ "hep-ph", "nucl-th" ], "primary_category": "hep-ph", "published": "20231227043430", "title": "The Transverse Energy-Energy Correlator at Next-to-Next-to-Next-to-Leading Logarithm" }
[email protected] 0000-0002-4558-6610National University of Singapore With growing capabilities of large language models (LLMs) comes growing affordances for human-like and context-aware conversational partners. On from this, some recent work has investigated the use of LLMs to simulate multiple conversational partners, such as to assist users with problem solving or to simulate an environment populated entirely with LLMs. Beyond this, we are interested in discussing and exploring the use of LLMs to simulate multiple personas to assist and augment users in educational settings that could benefit from multiple interlocutors. We discuss prior work that uses LLMs to simulate multiple personas sharing the same environment, and discuss example scenarios where multiple conversational agent partners could be used in education.<ccs2012><concept><concept_id>10010405.10010455.10010461</concept_id><concept_desc>Applied computing Sociology</concept_desc><concept_significance>500</concept_significance></concept></ccs2012> [500]Human-centred computingThe Use of Multiple Conversational Agent Interlocutors in Learning Samuel Rhys Cox================================================================== firstpage§ INTRODUCTION With advances in large language models (LLMs), conversational agents (CAs) can simulate distinct characteristics across multiple factors, such as personality and gender <cit.>. This added affordance coupled with the growing availability and accessible nature of using LLMs could be used to create complete and believable conversational partners for users to converse with. On from this, the use of LLMs shows promise within education <cit.> (such as for medical education <cit.> and language learning <cit.>), and LLMs could be used to provide conversational assistance and rapport building with learners.Recent work has also investigated the use of LLMs to simulate multiple personas that could be used within the same session <cit.>. For example, a user could take part in a conversation that involves multiple CAs with varying personalities, social roles, and expertise.Building on this, an area of interest could be the use of multiple (two or more) CAs interacting with a user to aid in learning activities. For example, CAs could act as other students or teachers when talking to the learner (user). Multi-party conversations such as this would allow for added affordances present in group environments, such as social comparison (e.g., counterfactual comparison <cit.>: “You would have scored higher than Bob if you scored one more correct”), social support (e.g., encouragement or advice giving), and the adoption of different personas <cit.> (e.g., personalities: relaxed, strict, friendly; or social roles).Depending on the design and make-up of such a group, various learning theories could be harnessed such as cooperative learning, collaborative learning and peer learning. 
In this workshop paper, we wish to create a discussion around the potential use of such environments, where multiple CAs could interact with a user to assist in learning activities. We will first outline some prior work that used multiple conversational agents, before highlighting some potential scenarios where the use of multiple conversational partners could be incorporated in an educational setting.§ CHATTING ENVIRONMENTS WITH MULTIPLE CONVERSATIONAL AGENTS The use of LLMs to adopt diverse personas <cit.> has been harnessed both in environments that consist only of groups of LLMs communicating with each other <cit.>, and in environments where users can interact with multiple LLM-driven personas <cit.>. In environments consisting exclusively of LLM-driven agents, agents have been shown to: behave socially when communicating, cooperate to complete tasks, and adjust their own behaviour based on the actions of other agents. For example, Chirper.ai (https://chirper.ai/) is a social network populated entirely by LLM-driven social bots, where analysis by Li et al. <cit.> found that agents were able to exhibit individual personas and behave socially, as well as influence the behaviour of other agents; and Park et al. <cit.> found that a group of LLMs could simulate members of a community that completed daily activities (such as gardening and cooking), and collaborated to achieve tasks (such as planning a party for one of the agents). Additionally, prior work has harnessed multiple LLMs as a prompting framework to solve problems by dividing work into microtasks or having agents debate amongst themselves <cit.>. For example, Chan et al. <cit.> found that multiple LLM agents could use debate to provide higher quality evaluations than an agent acting alone. Environments where a user can interact with multiple CAs simultaneously have been used to simulate social environments, and to assist users in problem-solving and decision-making tasks <cit.>. For example, CommunityBots <cit.> found that (when answering survey questions) users had higher levels of enjoyment and engagement when (sequentially) talking to three domain-specific chatbots rather than talking to one chatbot. In ChoiceMates, Park et al. <cit.> compared product search tasks using web search, a single LLM agent, and multiple LLM agents, and found that using multiple agents helped users explore more breadth and depth of options compared to web search.§ POTENTIAL SCENARIOS FOR MULTIPLE CA INTERLOCUTORS IN EDUCATION Now that we have given an overview of some prior work that used multiple CAs, we will describe some potential scenarios where multiple CAs could be used in education.§.§ Differing Social Roles in Conversations Prior work has investigated the effect of varying the social role of an agent <cit.>, such as comparing a social robot in a classroom helping students as either a "co-solver" or a "knower" <cit.>. Building on this, by adopting multiple conversational agents into a chatting session with a learner, we could take advantage of the affordance for diverse social roles.
For example, in a scenario where a learner is required to explain and discuss a complex scientific concept, the learner could be conversing with multiple agents, such as: a sceptic agent who could challenge a learner's responses to ensure a robust subject understanding that withstands scrutiny; an encouraging agent that could provide positive reinforcement and highlight a learner's progress; and a mentor agent that could offer additional guidance and insights or connect concepts to real-world examples. §.§ Conversational Partners with Different Cultural Background and Knowledge Multiple conversational partners could also prove beneficial in learning fields where diverse perspectives would aid learners in becoming more well-rounded and aware of nuance and variation. For example, within language learning, the same language could have slight variations due to different cultural and linguistic backgrounds. For example, in Singapore people may speak a colloquial form of Singaporean Mandarin that combines Mandarin, English and Malay vocabulary; or they may speak Standard Mandarin, depending on their situation and interlocutor. By allowing for group conversations that incorporate multiple different linguistic variations, learners could become aware of the different contexts and vocabulary that could be used depending on the context of a given situation. §.§ Virtual Environments with LLM Interlocutors Drawing comparison to Park et al.'s work where a virtual AI Town was populated entirely by LLMs that could interact with each other and the environment <cit.>, virtual environments could be developed that are populated by embodied LLM-driven CAs to allow for an immersive learning environment. Human users could then observe interactions between agents (which could act as exemplar interactions in training exercises such as conflict resolution <cit.> or language learning), or users themselves could interact with agents (to gain experience in practical interaction skills that the user may be pursuing). However, depending on the level of LLM control and capabilities, it should be ensured that agents do not mislead or discomfort users, such as via hallucinations, contradictory information disclosure from LLMs, or inappropriate utterances that make users uncomfortable (see <cit.> for guidelines to design conversational interactions with LLM-driven computer-controlled characters).§ DISCUSSION AND CONCLUSION In conclusion, we have discussed recent work that uses LLMs to simulate multiple conversational partners that would have the potential to aid learners. While the scope of such interactions is very broad (allowing for the adoption of multiple pedagogical techniques, or interaction paradigms that rely on the presence of three or more interlocutors), we hope to raise discussion. Attention should also be drawn to well-known issues surrounding LLMs, such as the potential for hallucinations (which could prove counter-productive in an educational environment if the learner is provided with mistruths), and bias or stereotypes in LLM responses that could prove harmful to learners. The utilisation of multiple CAs (not just in education, but in any application domain) adds additional layers of complexity, whereby agents need to be designed such that the CA personas (or teaching methods) harmonise in a manner that proves helpful and appropriate to learners. Additionally, controls are needed to ensure CAs do not provide contradictory information.
Interaction dynamics between the agents themselves also need to be crafted carefully to ensure that they simulate productive and appropriate human interactions. It is also important to discuss the potential for over-reliance on such systems. Should systems be developed that promote independent thought, and avoid environments where learners become passive? On from this, additional ethical concerns should be considered, such as the handling of personal data, the potential for bias in agent responses (which could either prove uncomfortable and inappropriate to learners, or could introduce biased utterances as exemplar interactions to learners, depending on the learning environment), and the impact of environments and interactions on learners' well-being.
http://arxiv.org/abs/2312.16534v1
{ "authors": [ "Samuel Rhys Cox" ], "categories": [ "cs.HC" ], "primary_category": "cs.HC", "published": "20231227113033", "title": "The Use of Multiple Conversational Agent Interlocutors in Learning" }
Assigning Stationary Distributions to Sparse Stochastic Matrices Nicolas Gillis, Department of Mathematics and Operational Research, Université de Mons, Rue de Houdain 9, 7000 Mons, Belgium. Email: [email protected]. NG acknowledges the support by the European Union (ERC consolidator, eLinoR, no 101085607), by the Fonds de la Recherche Scientifique - FNRS and the Fonds Wetenschappelijk Onderzoek - Vlaanderen (FWO) under EOS Project no O005318F-RG47, and by the Francqui Foundation. Paul Van Dooren, UCLouvain. Email: [email protected]. January 14, 2024 ============================================================ The target stationary distribution problem (TSDP) is the following: given an irreducible stochastic matrix G and a target stationary distribution μ̂, construct a minimum norm perturbation, Δ, such that Ĝ = G+Δ is also stochastic and has the prescribed target stationary distribution, μ̂. In this paper, we revisit the TSDP under a constraint on the support of Δ, that is, on the set of non-zero entries of Δ. This is particularly meaningful in practice since one cannot typically modify all entries of G. We first show how to construct a feasible solution Ĝ that has essentially the same support as the matrix G. Then we show how to compute globally optimal and sparse solutions using the component-wise ℓ_1 norm and linear optimization. We propose an efficient implementation that relies on a column-generation approach which allows us to solve sparse problems of size up to 10^5 × 10^5 in a few minutes. We illustrate the proposed algorithms with several numerical experiments. Keywords: stochastic matrix, stationary distribution, support, sparsity, linear optimization. AMS subject classifications: 60J10, 93C73, 65F15, 90C05 § INTRODUCTION The target stationary distribution problem (TSDP) was introduced recently in <cit.>, and is defined as follows. We are given * G, an n× n irreducible stochastic matrix with positive stationary distribution μ > 0, therefore satisfying G ≥ 0, G𝟏_n = 𝟏_n, μ^⊤ G = μ^⊤, μ^⊤𝟏_n = 1, where 𝟏_n is the n-dimensional vector of all 1's. * μ̂ > 0, a positive target distribution such that μ̂^⊤𝟏_n = 1. The TSDP requires finding a minimum norm correction, Δ, such that Ĝ := G+Δ is still stochastic and has the target μ̂ as its stationary distribution. The set of admissible candidate matrices, 𝒟, is thus described by three conditions: 𝒟 := { Δ ∈ ℝ^n× n | μ̂^⊤(G+Δ) = μ̂^⊤, Δ𝟏_n = 𝟎_n, G+Δ ≥ 0 }, where 𝟎_n is the n-dimensional vector of all 0's; see <cit.> for more details on this feasible set. The set 𝒟 is convex and the TSDP requires solving the following convex optimization problem: min_Δ∈𝒟 ‖Δ‖, (TSDP) for a given norm ‖·‖. Stochastic matrices G are used to model hyperlink networks, social networks, or queuing networks, to name a few possible applications. Their stationary distributions contain important information on the nodes in the network, such as their centrality or other types of rankings. The target stationary distribution then captures some desired properties of the system.
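As a quick illustration of the three conditions defining 𝒟, the following sketch checks whether a candidate perturbation Δ is admissible for a given pair (G, μ̂). It is a direct transcription of the constraints above; `in_feasible_set` is an illustrative helper, not a routine from the paper.

```python
import numpy as np

def in_feasible_set(G, Delta, mu_hat, tol=1e-12):
    """Check the three conditions defining the set D of the TSDP."""
    stationarity = np.allclose(mu_hat @ (G + Delta), mu_hat, atol=tol)
    zero_row_sums = np.allclose(Delta @ np.ones(len(G)), 0.0, atol=tol)
    nonnegativity = np.all(G + Delta >= -tol)
    return stationarity and zero_row_sums and nonnegativity
```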
In practice, one is interested in reaching the target distribution with minimum effort, that is, in finding minimum norm solutions Δ yielding a perturbed model Ĝ = G+Δ with the prescribed distribution μ̂. This line of research is quite different from the body of literature on the sensitivity of the stationary distribution of a stochastic matrix G with respect to a perturbation Δ that preserves the stochasticity of G+Δ. This is a field of active research and various sensitivity bounds have been proposed in the literature <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Such bounds are of interest in a wide range of application areas, such as mathematical physics, climate modeling, Bayesian statistics, and bio-informatics. But the aim of the present paper is to assign a target distribution, which is an inverse problem rather than a sensitivity problem. Solving the TSDP for large-scale problems directly using commercial solvers might be intractable for large n, with O(n^2) variables and constraints; see the discussion in section <ref>. This motivated the introduction of a heuristic approach to find an approximate minimizer using a set of rank-1 corrections <cit.>. An algorithm was given in <cit.> for constructing a minimum norm rank-1 solution when such a solution exists. However, for sparse matrices, there might not exist feasible rank-1 perturbations. Moreover, rank-one perturbations are typically dense, which is not desirable: in practice, it is typically not possible to modify most of the links in a network, but rather one would like to modify only a few. For example, in a road network <cit.>, it is not possible to add links between distant roads, while one would like to reduce the congestion by modifying as few existing links as possible. In this paper, we therefore consider the TSDP with support constraints while trying to promote sparse solutions. More precisely, we consider the following variant: min_Δ∈𝒟 ‖Δ‖_1 such that supp(Δ) ⊂ Ω, where * ‖Δ‖_1 = ∑_i,j |Δ_i,j| is the component-wise ℓ_1 norm, which promotes sparsity as it is the convex envelope (that is, the tightest convex lower bound) of the ℓ_0 norm on the ℓ_∞ ball; see, e.g., <cit.> and the references therein. In fact, we will prove in Theorem <ref> that there is an optimal solution of (<ref>) with less than min(|Ω|, nnz(G)+2n) non-zero entries. * supp(Δ) = { (i,j) | Δ_i,j ≠ 0 } is the support of Δ, that is, the set of indices corresponding to its non-zero entries. * Ω is a given support that provides the links that can be modified. For example, one can only modify existing links, that is, Ω = supp(G), which is referred to as a non-structural perturbation <cit.>. The above problem is related to the feasible set { G | G ≥ 0, G𝟏_n = 𝟏_n, supp(G) ⊂ Ω }, which was studied in <cit.>, <cit.>, and where the authors tried to describe all possible stationary vectors of irreducible matrices in this set, for a given Ω. Contribution and outline of the paper In section <ref>, we first show that the TSDP with support constraint (<ref>) always admits a feasible solution for Ω = supp(G+I), and provide a feasible solution that is an optimal diagonal scaling of I-G. We then discuss and analyze this feasible solution, in particular its distance to optimality, when it is rank-one, and how the ordering of μ̂ affects the solution. In section <ref>, we propose an efficient linear optimization formulation of (<ref>), with 2n equalities and O(|Ω|) variables, that allows us to solve (<ref>) in O(n^3) operations, in the worst case.
When Ω is sparse, the run time is empirically observed to be comparable to the time of constructing a sparse 2n × O(nnz(G)) matrix with 2 nonzeros per column, where nnz(G) is the number of non-zero entries in G, allowing us to solve sparse problems of size 10^4 × 10^4 within seconds, and of size 10^5 × 10^5 within minutes. When Ω is dense and G is sparse, we propose a column-generation approach that allows us to solve problems of similar size in a comparable computational time, by initializing the solution with the feasible case Ω = supp(G+I). We report numerical experiments in section <ref>. Notation We use uppercase for matrices, boldface lowercase for vectors and normal lowercase for scalars. We use 𝟎_n and 𝟏_n to denote the n-vectors of all 0's and 1's, respectively, and 𝐞_i to denote the i-th basis vector in ℝ^n. The n× n matrix of all 1's is denoted by 𝟏_n,n. We drop the index n when the dimension is clear from the context. The elements of a matrix M will be denoted by M_i,j and those of a vector 𝐯 by v_i; M_i,: and M_:,j denote the ith row and jth column of M, respectively. By supp(M), we mean the set of indices for which the matrix M is nonzero, and by nnz(M) = |supp(M)| the number of nonzeros in M, where |·| denotes the cardinality of a set. By Diag(α_1,…,α_n), we mean the n× n diagonal matrix with the parameters α_i on its main diagonal, and (·)^⊤ denotes the transpose. The inequalities > and ≥ applied to vectors and matrices are elementwise inequalities. We say that an n-by-n matrix M is stochastic if M ≥ 0 and M𝟏_n = 𝟏_n. It is irreducible if it is not reducible, that is, if there do not exist two sets of indices, ℐ and 𝒥, such that ℐ∪𝒥 = {1,2,…,n}, ℐ∩𝒥 = ∅ and M(ℐ,𝒥)=0. Irreducibility of M implies that there exists a unique positive vector μ > 0 such that μ^⊤ M = μ^⊤ and μ^⊤𝟏_n = 1; this is a consequence of the Perron-Frobenius theorem. This vector μ is called the stationary distribution of M. See <cit.> for more details. § TSDP WITH SUPPORT Ω = supp(G+I_n) In this section, we analyze a special case where we allow the non-zero entries of G and the entries on its diagonal to be modified, that is, Ω = supp(G+I_n). We first show that the TSDP (<ref>) is always feasible in this case (section <ref>). In fact, we provide an explicit closed-form solution. We then discuss properties of this solution: optimality (section <ref>), sparsity and the rank-1 case (section <ref>), and how the ordering of μ̂ compared to that of μ affects the solution (section <ref>). §.§ Closed-form feasible solutions for Ω = supp(G+I_n) Let us consider the solutions to the TSDP of the form Δ(α) = D(α)(I_n-G), where D(α) := Diag(α_1,…,α_n) and 𝟎_n ≤ α ≤ 𝟏_n, so that supp(Δ) ⊆ supp(G+I_n). This leads to the perturbed matrices G(α) := G + Δ(α) = (I_n - D(α))G + D(α)I_n. The following lemma formalizes the fact that G(α) is stochastic, and irreducible if α < 𝟏_n. Lemma: Let G be an irreducible stochastic matrix. Then the family of matrices 𝒢_α := { G(α) | 𝟎_n ≤ α ≤ 𝟏_n } is a closed convex set of stochastic matrices, and the subset 𝒢_α<𝟏_n := { G(α) | 𝟎_n ≤ α < 𝟏_n } contains the irreducible stochastic matrices of 𝒢_α. Proof: Each row of the matrix G(α) can be written as 𝐞_i^⊤ G(α) = (1-α_i)𝐞_i^⊤ G + α_i 𝐞_i^⊤ I_n, with 0 ≤ α_i ≤ 1, and is thus a closed convex combination of the corresponding rows of G and I_n, which are both stochastic matrices. Therefore the set 𝒢_α is a closed convex set of stochastic matrices. The nonnegativity of G(α) follows from the nonnegativity of the factors of each term of the expression (<ref>) of G(α). The stochasticity of G(α) follows from G(α)𝟏_n = (I_n - D(α))G𝟏_n + D(α)I_n𝟏_n = (I_n - D(α))𝟏_n + D(α)𝟏_n = 𝟏_n.
When constraining the parameters to 𝟎_n ≤ α < 𝟏_n, the matrices G(α) and G (which is actually equal to G(𝟎_n)) have the same off-diagonal zero pattern, and hence G(α) is irreducible if and only if G is irreducible (see, e.g., <cit.>). Finally, in order to show that 𝒢_α<𝟏_n contains all irreducible matrices of 𝒢_α, it suffices to see that if any α_i = 1 then 𝐞_i^⊤ G(α) = 𝐞_i^⊤, which implies that G(α) is reducible. The solutions Δ(α) lead to a set of stochastic matrices, 𝒢_α. It remains to choose the free parameters α to assign μ̂ as the stationary distribution of an irreducible stochastic matrix G(α) in 𝒢_α<𝟏_n, that is, to choose α such that μ̂^⊤ G(α) = μ̂^⊤. The following theorem shows that the set 𝒢_α<𝟏_n always contains feasible solutions for arbitrary G and μ̂, and provides the minimum norm solution among these feasible solutions. Theorem: Let (G,μ) be an irreducible stochastic matrix G with stationary distribution μ > 0, μ^⊤𝟏_n = 1, and let μ̂ > 0, μ̂^⊤𝟏_n = 1, be a given target stationary distribution. Then the set of parameters α so that the matrices G(α) ∈ 𝒢_α<𝟏_n have the target μ̂ as stationary distribution, that is, μ̂^⊤ G(α) = μ̂^⊤, is given by α(c) = 𝟏_n - c(μ./μ̂), 0 < c ≤ 1/‖μ./μ̂‖_∞, where ./ denotes the element-wise division of the vectors μ and μ̂. The parameter vector α^* = α(c^*) of minimum norm is obtained for c^* = 1/‖μ./μ̂‖_∞, and it also minimizes the norm of the perturbation matrix Δ(α^*) = D(α^*)(I_n-G), for any norm ‖·‖. Proof: If the vector μ̂ is the stationary distribution of G(α), it must satisfy μ̂^⊤(I_n - G(α)) = 0. Since G(α) = (I_n - D(α))G + D(α)I_n for some α, α must satisfy μ̂^⊤(I_n - D(α))(I_n - G) = 0. But since G is irreducible, the rank of (I_n - G) is n-1 and its left null vector must be proportional to μ^⊤. Therefore, all candidate solutions for α must satisfy, for some non-zero scalar c: μ̂^⊤(I_n - D(α)) = cμ^⊤, 𝟎_n ≤ α < 𝟏_n, which can be rewritten as a system of linear equations and inequalities: μ̂_i(1-α_i) = cμ_i, 0 ≤ α_i < 1, i=1,…,n, in the unknown variables c and α_i, i=1,…,n. Using the ∘-notation for the elementwise product of vectors, this is equivalent to μ̂ ∘ (𝟏_n - α) = cμ, 𝟎_n < (𝟏_n - α) ≤ 𝟏_n. Since μ and μ̂ are positive, and the vector (𝟏_n - α) is constrained to be positive, c must be positive as well. The upper bound (𝟏_n - α) ≤ 𝟏_n implies that c must be smaller than or equal to c^* := 1/‖μ./μ̂‖_∞. The set of solutions is then the line segment α(c) = 𝟏_n - cμ./μ̂, 0 < c ≤ c^*, and the parameter vector α(c) of minimum norm is obtained for c = c^*, since all entries of α(c) decrease linearly with increasing c in the interval 0 < c ≤ c^*. The perturbation matrix Δ(α) = D(α)(I_n-G) is linear in α, and its absolute value decreases with increasing c; hence its norm also decreases with increasing c, reaching its minimum for c = c^*. This holds true for any norm, norms being nonnegative convex functions equal to zero only at the origin. The following example illustrates the method; it is also checked numerically in the sketch below. Consider the stochastic matrix G of a ring network, with stationary distribution μ: G = [[ 1/2 1/4 0 1/4; 1/4 1/2 1/4 0; 0 1/4 1/2 1/4; 1/4 0 1/4 1/2 ]], μ^⊤ = [1/4, 1/4, 1/4, 1/4]. Let the target distribution be the vector μ̂^⊤ = [1/8, 1/8, 1/4, 1/2]. We have μ^⊤./μ̂^⊤ = [2, 2, 1, 1/2], with optimal parameter c^* = 1/2, so that α^* = α(c^*) = [0, 0, 1/2, 3/4]^⊤, leading to Δ(α^*) = [[ 0 0 0 0; 0 0 0 0; 0 -1/8 1/4 -1/8; -3/16 0 -3/16 3/8 ]] and G(α^*) = [[ 1/2 1/4 0 1/4; 1/4 1/2 1/4 0; 0 1/8 3/4 1/8; 1/16 0 1/16 7/8 ]]. In the following sections, we discuss several properties of the solutions G(α). Before doing so, let us introduce some additional notation.
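A short numerical check of the example above follows: the closed-form solution of the theorem is evaluated directly and, for comparison, the component-wise ℓ_1-optimal solution over all of 𝒟 is computed with an off-the-shelf LP solver. The LP below is a small dense formulation written only for illustration (splitting Δ = P - N with P, N ≥ 0 and N ≤ G elementwise); the paper's efficient O(|Ω|)-variable formulation of section 3 is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

G = np.array([[1/2, 1/4, 0, 1/4],
              [1/4, 1/2, 1/4, 0],
              [0, 1/4, 1/2, 1/4],
              [1/4, 0, 1/4, 1/2]])
mu = np.full(4, 1/4)                     # stationary distribution of G
mu_hat = np.array([1/8, 1/8, 1/4, 1/2])  # target distribution
n = 4

# Closed-form feasible solution from the theorem above.
c_star = 1.0 / np.max(mu / mu_hat)
alpha = 1.0 - c_star * (mu / mu_hat)
Delta = np.diag(alpha) @ (np.eye(n) - G)
print("alpha*        :", alpha)          # [0, 0, 1/2, 3/4]
print("stationary ok :", np.allclose(mu_hat @ (G + Delta), mu_hat))
print("||Delta||_1   :", np.abs(Delta).sum())

# Globally ell_1-optimal Delta over D, with Delta = P - N, P, N >= 0.
cost = np.ones(2 * n * n)                # minimizes sum(P) + sum(N)
A_eq, b_eq = [], []
rhs = mu_hat @ (np.eye(n) - G)
for j in range(n):                       # mu_hat^T Delta[:, j] = rhs[j]
    row = np.zeros(2 * n * n)
    for i in range(n):
        row[i * n + j] = mu_hat[i]
        row[n * n + i * n + j] = -mu_hat[i]
    A_eq.append(row); b_eq.append(rhs[j])
for i in range(n):                       # zero row sums of Delta
    row = np.zeros(2 * n * n)
    row[i * n:(i + 1) * n] = 1.0
    row[n * n + i * n:n * n + (i + 1) * n] = -1.0
    A_eq.append(row); b_eq.append(0.0)
# N <= G elementwise guarantees G + Delta >= 0.
bounds = [(0, None)] * (n * n) + [(0, gij) for gij in G.flatten()]
res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds)
print("LP optimum    :", res.fun)        # at most the closed-form value
```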
For given stationary distributions μ and μ̂, we define the ratio 𝐫 := μ./μ̂, whose entries belong to the interval 0 < r_* ≤ r_i ≤ r^*, where r_* := min_i μ_i/μ̂_i > 0 and r^* := max_i μ_i/μ̂_i > 0. Note that the maximal admissible value for c is c^* = 1/r^*. The elements of α(c^*) are bounded by 0 ≤ α_i = 1 − r_i/r^* ≤ 1 − r_*/r^* = (r^*−r_*)/r^*, and |Δ(α)| = D(α)|I−G| ≤ (1 − r_*/r^*)|I−G| = ((r^*−r_*)/r^*)|I−G|. Hence the norms ‖α(c^*)‖ and ‖Δ(α^*)‖ decrease with a decreasing gap r^*−r_*, and become zero if and only if r^* = r_*, that is, if and only if μ = μ̂.

§.§ How good is 𝒢_α?

In this subsection we compare the optimal solution in 𝒢_α with the lower bound for the optimal solution in the larger set 𝒟 given in (<ref>). The following lemma gives such a lower bound.

Every feasible solution Δ ∈ 𝒟 of the TSDP satisfies ‖Δ‖ ≥ ‖μ̂^⊤(I−G)‖/‖μ̂^⊤‖ = ‖(μ̂−μ)^⊤(I−G)‖/‖μ̂^⊤‖ for any induced norm; the bound also holds for the component-wise ℓ_1 norm.

The inequality ‖𝐱^⊤‖·‖A‖ ≥ ‖𝐱^⊤A‖ for the vector-matrix product 𝐱^⊤A holds for any induced norm, and also for the component-wise ℓ_1 norm. Applying this to μ̂^⊤Δ = μ̂^⊤(I−G) then yields the required inequality for ‖Δ‖. The equality then follows trivially from μ^⊤(I−G) = 0.

For the perturbation Δ(α) = D(α)(I_n−G), the bound ‖Δ(α)‖ ≤ ((r^*−r_*)/r^*)‖I−G‖ was obtained earlier; see (<ref>). Putting this bound together with Lemma <ref>, we obtain the following interval for the optimal Δ(α^*) in 𝒢_α: ‖(μ̂−μ)^⊤(I_n−G)‖/‖μ̂^⊤‖ ≤ ‖Δ(α^*)‖ ≤ ((r^*−r_*)/r^*)‖I_n−G‖. This interval shrinks to zero when 𝐝 := μ̂−μ goes to zero, which indicates that the optimal solution constrained to 𝒢_α approaches a globally optimal solution. This will be illustrated on numerical examples in section <ref>. In the next section, we focus on the case of a rank-1 perturbation.

§.§ Support of Δ(α^*) and optimal rank-1 solutions

Using Theorem <ref>, it is straightforward to describe the support of the feasible solutions of the TSDP of the form Δ(α) = D(α)(I_n−G).

Let (G,μ) be an irreducible stochastic matrix G, and let us use the same notation as in Theorem <ref>. For all α = α(c) such that c < c^*, we have supp(Δ(α)) = supp(G+I_n). For α^* = α(c^*), the i-th row of supp(Δ(α^*)) equals supp((G+I_n)_{i,:}) for μ_i/μ̂_i < r^* = 1/c^*, and is empty for μ_i/μ̂_i = r^* = 1/c^*.

We have Δ(α)_{i,:} = α_i(𝐞_i^⊤ − G_{i,:}). Since G is irreducible, G_{i,:} ≠ 𝐞_i^⊤, and hence the supports of the i-th rows of Δ(α) and I−G coincide unless α_i = 0. For α = α(c), this happens only when c = c^* and μ_i/μ̂_i = r^* = 1/c^*. Lemma <ref> implies that the solution Δ(α^*) has the same support as G+I_n, except for the rows that correspond to the maximum entries of μ./μ̂, which are equal to zero.

Rank-1 case. Lemma <ref> also implies that Δ(α^*) has rank one if the vector μ./μ̂ takes only two distinct values, namely r_* and r^*, and if, moreover, r^* is repeated n−1 times, in which case Δ(α^*) has a single non-zero row with index j such that μ_j = r_*μ̂_j and μ_i = r^*μ̂_i for all i ≠ j. Since μ^⊤𝟏_n = μ̂^⊤𝟏_n = 1, we have r^* + μ̂_j(r_*−r^*) = 1, hence μ̂_j = (r^*−1)/(r^*−r_*) < 1. Finally, we obtain r_* < 1 < r^*, α_j = 1 − r_*/r^*, α_i = 0 for i ≠ j, and Δ = α_j𝐞_j𝐞_j^⊤(I_n−G). Note that α_j𝐞_j𝐞_j^⊤(I_n−G) is the matrix whose j-th row is α_j(I−G)_{j,:} and whose other rows are all equal to zero.

What μ̂ can we choose to make Δ(α^*) rank-1? We can take μ̂ = (μ + λ𝐞_j)/((μ + λ𝐞_j)^⊤𝟏_n) = (μ + λ𝐞_j)/(1+λ) for any j and λ > 0. When λ → ∞, the state j tends to become absorbing, that is, G(α^*)_{j,:} → 𝐞_j^⊤, since α_j → 1. Intuitively, we increase the probability of the state to return to itself, making it absorbing in the limit. In other words, by modifying the links in one row of G, increasing the weight to oneself while reducing proportionally the weights to the others, we increase μ̂_j, while decreasing the μ̂_i for i ≠ j by the same ratio, 1/(1+λ) = 1/r^*. A small sketch of this rank-1 construction is given below.
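The sketch reuses closed_form_perturbation and the ring-network G from the previous snippet; the choice j = 0, λ = 1 is arbitrary and only for illustration:

```python
import numpy as np

def rank1_target(mu, j, lam):
    """Target distribution mu_hat = (mu + lam * e_j) / (1 + lam),
    for which Delta(alpha*) has a single non-zero row j (rank one)."""
    mu_hat = mu.copy()
    mu_hat[j] += lam
    return mu_hat / (1.0 + lam)

mu = np.full(4, 1/4)
mu_hat = rank1_target(mu, j=0, lam=1.0)    # mu_hat = [5/8, 1/8, 1/8, 1/8]
alpha, Delta = closed_form_perturbation(G, mu, mu_hat)
print(np.linalg.matrix_rank(Delta))        # 1: only row j is non-zero
```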
Consider Example <ref>: G = [[1/2 1/4 0 1/4; 1/4 1/2 1/4 0; 0 1/4 1/2 1/4; 1/4 0 1/4 1/2]], μ^⊤ = [1/4, 1/4, 1/4, 1/4]. Choosing μ̂^⊤ = [2/5, 1/5, 1/5, 1/5], we have μ^⊤./μ̂^⊤ = [5/8, 5/4, 5/4, 5/4], with optimal parameter c^* = 4/5, so that α^* = α(c^*) = [1/2, 0, 0, 0]^⊤, leading to Δ(α^*) = [[1/4 −1/8 0 −1/8; 0 0 0 0; 0 0 0 0; 0 0 0 0]] and G(α^*) = [[3/4 1/8 0 1/8; 1/4 1/2 1/4 0; 0 1/4 1/2 1/4; 1/4 0 1/4 1/2]]. This turns out to be a globally optimal solution of the TSDP that minimizes the component-wise ℓ_1 norm, as proved in the next paragraph.

Optimality of the rank-1 solution Δ(α^*). As observed in Example <ref>, the solution Δ(α^*) turns out to be optimal for the TSDP in the rank-1 case, under the condition that j ∈ argmax_k μ̂_k, as stated in the following theorem.

Let (G,μ) be an irreducible stochastic matrix G, and let us use the same notation as in Theorem <ref>. Assume μ̂ = (μ + λ𝐞_j)/(1+λ) for some λ > 0 and some j. Assume also that λ ≥ μ_i − μ_j for all i ≠ j, that is, j ∈ argmax_k μ̂_k. Then Δ(α^*), where α^* = α(c^*), is an optimal solution of the TSDP for the component-wise and matrix ℓ_1 norms.

By <cit.>, any feasible rank-one solution of the TSDP without the nonnegativity constraints has the form Δ(𝐱) = 𝐱μ̂^⊤(I−G)/(μ̂^⊤𝐱) for 𝐱 ∈ ℝ^n, μ̂^⊤𝐱 ≠ 0. In fact, if Δ = 𝐱𝐲^⊤ for some vectors 𝐱 and 𝐲, the constraint μ̂^⊤Δ = μ̂^⊤(I−G) leads to μ̂^⊤𝐱𝐲^⊤ = μ̂^⊤(I−G) ≠ 0, hence 𝐲^⊤ = μ̂^⊤(I−G)/(μ̂^⊤𝐱), where μ̂^⊤𝐱 ≠ 0. Moreover, <cit.> proved that there always exists an optimal rank-one solution to the TSDP without the nonnegativity constraints; hence solving min_𝐱 ‖Δ(𝐱)‖ solves the TSDP without the nonnegativity constraints, because Δ(𝐱) also satisfies the other constraint: Δ(𝐱)𝟏_n = 𝐱(𝐲^⊤𝟏_n) = 𝟎. Let us develop the above formula when μ̂ = (μ + λ𝐞_j)/(1+λ): Δ(𝐱) = 𝐱(μ + λ𝐞_j)^⊤(I−G)/((μ + λ𝐞_j)^⊤𝐱) = λ𝐱𝐞_j^⊤(I−G)/(μ^⊤𝐱 + λx_j) = (λ/(μ^⊤𝐱 + λx_j))·𝐱(I−G)_{j,:}, which leads to ‖Δ(𝐱)‖ = (λ/|μ^⊤𝐱 + λx_j|)·‖𝐱(I−G)_{j,:}‖. Let us start with the matrix ℓ_1 norm, that is, ‖A‖_{ℓ_1} = max_𝐳 ‖A𝐳‖_1/‖𝐳‖_1. We have ‖𝐱(I−G)_{j,:}‖_{ℓ_1} = max_𝐳 ‖𝐱(I−G)_{j,:}𝐳‖_1/‖𝐳‖_1 = ‖𝐱‖_1 max_𝐳 |(I−G)_{j,:}𝐳|/‖𝐳‖_1 = ‖𝐱‖_1‖(I−G)_{j,:}‖_∞. For the component-wise ℓ_1 norm, we have ‖𝐱(I−G)_{j,:}‖_1 = ‖𝐱‖_1‖(I−G)_{j,:}‖_1. Since Δ(𝐱) is independent of the scaling of 𝐱, we can assume w.l.o.g. that ‖𝐱‖_1 = 1. Hence min_𝐱 ‖Δ(𝐱)‖_*, where * ∈ {1, ℓ_1}, is equivalent to solving min_{𝐱, ‖𝐱‖_1=1} 1/|μ^⊤𝐱 + λx_j| = max_{𝐱, ‖𝐱‖_1=1} |μ^⊤𝐱 + λx_j|, which is attained at 𝐱^* = 𝐞_i for i ∈ argmax_k μ̂_k. (Note that this was proved in <cit.> for the matrix ℓ_1 norm.) This means that the row of G corresponding to the largest entry in μ̂ is perturbed in the direction (I−G)_{j,:}. Intuitively, the most influential node in G increases its weight towards the j-th node while reducing its weights to the others, proportionally to the j-th node's weights. This makes sense since we are trying to increase the value of μ_j to μ̂_j: we use the most influential node to do that. Note that this solution is nonnegative only if supp((I−G)_{j,:}) ⊂ supp((I−G)_{i,:}), which will typically not be the case when G is sparse and i ≠ j. However, if λ ≥ μ_i − μ_j for all i, then μ̂_j is the largest entry in μ̂, so that 𝐱^* = 𝐞_j and hence Δ(𝐱^*) = (1/(μ_j+λ))𝐞_j(μ + λ𝐞_j)^⊤(I−G) = (λ/(μ_j+λ))𝐞_j𝐞_j^⊤(I−G). This implies that G + Δ(𝐱^*) ≥ 0, which is therefore an optimal solution of the TSDP for the component-wise and matrix ℓ_1 norms[This case is analyzed in <cit.>; however, the authors do not mention the condition λ ≥ max_i μ_i − μ_j needed to obtain <cit.>.].
Recall that Δ(α^*) = α_j𝐞_j𝐞_j^⊤(I−G), see (<ref>), and α_j was minimized to obtain the best possible solution of the TSDP of this form. Since the above rank-one solution is optimal, this means that Δ(𝐱^*) = Δ(α^*). In fact, noting that r^* = 1+λ and r_* = μ_j/μ̂_j = (1+λ)μ_j/(μ_j+λ), we have α_j = 1 − r_*/r^* = 1 − μ_j/(μ_j+λ) = λ/(μ_j+λ), which concludes the proof.

Let us take the final matrix from Example <ref>: G = [[3/4 1/8 0 1/8; 1/4 1/2 1/4 0; 0 1/4 1/2 1/4; 1/4 0 1/4 1/2]] with μ = [2/5, 1/5, 1/5, 1/5]. Let μ̂ = (μ + λ𝐞_2)/(1+λ) = [4/11, 3/11, 2/11, 2/11] for λ = 1/10, so that μ̂_1 > μ̂_2. We have Δ(𝐱^*) = [[−11/160 11/80 −11/160 0; 0 0 0 0; 0 0 0 0; 0 0 0 0]] ≠ Δ(α^*) = [[0 0 0 0; −1/12 1/6 −1/12 0; 0 0 0 0; 0 0 0 0]], while Δ^* = argmin_{Δ∈𝒟} ‖Δ‖_1 = [[−1/16 1/16 0 0; 0 1/12 −1/12 0; 0 0 0 0; 0 0 0 0]], where ‖Δ(𝐱^*)‖_1 = 11/40 < ‖Δ^*‖_1 = 7/24 < ‖Δ(α^*)‖_1 = 1/3, but Δ(𝐱^*) is not admissible, as G + Δ(𝐱^*) ≱ 0. Note that Δ^* was computed using linear optimization; see section <ref>.

§.§ Ordering of μ̂

In this section we show how to reduce the difference r^*−r_* by introducing a permutation on the vector μ̂. We indicate that this can be achieved by a permutation on the rows of the matrix G, but this is not a similarity transformation. We first derive a result on optimal orderings of the stationary distributions μ and μ̂.

Let P and P̂ be n×n permutation matrices that reorder the elements of μ and μ̂, respectively, meaning that the elements of the permuted vectors Pμ and P̂μ̂ each form a non-decreasing sequence. Then [min_i (Pμ)_i/(P̂μ̂)_i, max_i (Pμ)_i/(P̂μ̂)_i] ⊂ [min_i μ_i/μ̂_i, max_i μ_i/μ̂_i]. See Appendix A.

This theorem shows that when the vectors Pμ and P̂μ̂ have "coherent" orderings, the corresponding interval [r_*(Pμ,P̂μ̂), r^*(Pμ,P̂μ̂)] is minimal, and hence the norm ‖α(c^*)‖ is also minimal. But we also have [r_*(Pμ,P̂μ̂), r^*(Pμ,P̂μ̂)] = [r_*(P̃μ,μ̂), r^*(P̃μ,μ̂)], since the common permutation P̂^⊤ of the vectors Pμ and P̂μ̂ only affects the order of the ratios, not their extremal values. We now show how to achieve this reordering in the context of our target distribution. With P̃ := P̂^⊤P and G̃ := P̃GP̃^⊤, we can define the set 𝒢̃_α := {G̃(α), 𝟎_n ≤ α ≤ 𝟏_n} using G̃(α) = (I_n−D(α))G̃ + D(α)I_n, D(α) := Diag(α_1,…,α_n). Clearly, the matrix G̃ = P̃GP̃^⊤ is still stochastic, but its stationary distribution is now given by P̃μ, since (P̃μ)^⊤P̃GP̃^⊤ = (P̃μ)^⊤. If we then choose again μ̂ as the target stationary distribution, and construct the permutations P and P̂ such that the vectors Pμ and P̂μ̂ both form a non-decreasing sequence, then the set of ratios (Pμ)./(P̂μ̂) is equal to the set of ratios (P̃μ)./μ̂, up to the permutation P̂^⊤, and the corresponding difference |r^*−r_*| is minimal. We point out that the transformation involved here is not a similarity transformation, since we are assigning the same target distribution μ̂ to two different systems. But it indicates that the required perturbation Δ will be smaller when μ is coherently ordered with μ̂.
§ FORMULATION WITH LINEAR OPTIMIZATION

In the previous section, we focused on solutions of the TSDP of the form Δ(α) = Diag(α)(I_n−G) with support in supp(G+I_n). In this section, we propose an efficient linear optimization formulation of the TSDP with a support constraint and for the component-wise ℓ_1 norm. This allows us to solve large problems efficiently, especially when Ω or G is sparse.

§.§ Linear optimization formulation

The TSDP with support constraint supp(Δ) ⊂ Ω and for the component-wise ℓ_1 norm can be formulated as follows: min_{Δ∈ℝ^{n×n}} ‖Δ‖_1 such that Δ𝟏_n = 𝟎_n, μ̂^⊤Δ = μ̂^⊤(I−G), Δ + G ≥ 0, and Δ_{i,j} = 0 for (i,j) ∉ Ω. We refer to this problem as the ℓ_1-TSDP with support constraint. Since the entries of Δ are non-zero only in the support set Ω, (<ref>) has only |Ω| variables (the zero entries of Δ are not variables). In the dense case, that is, when Ω = {(i,j) | 1 ≤ i,j ≤ n}, a naive way to formulate (<ref>) is to introduce n² variables, say Z ∈ ℝ^{n×n}_+, one for each entry of Δ, impose Z ≥ Δ and Z ≥ −Δ, and minimize Σ_{i,j} Z_{i,j}; this is the standard way to linearize the ℓ_1 norm. However, on top of introducing n² variables, it introduces 2n² inequalities, which is highly ineffective, since solving linear optimization problems requires, roughly speaking, a cubic number of operations in the number of variables or in the number of constraints[More precisely, it requires Õ(n^ω log(n/δ)) time, where O(n^ω) is the time to multiply two n-by-n matrices, δ is the relative accuracy, and Õ ignores polylog(n) factors.] (solving the dual); see <cit.> and the references therein. For example, the problem was formulated in this way in <cit.>, and the authors' formulation using a commercial solver, Gurobi, could not solve problems larger than n = 500 within 10 minutes. The reason is that <cit.> did not consider the component-wise ℓ_1 norm, but matrix norms, for which it is not possible to avoid the introduction of O(|Ω|) inequalities to model the problem via linear optimization.

Let us reformulate the problem in a more cost-effective way. First, it is interesting to note that whenever G_{i,j} = 0, we have Δ_{i,j} ≥ 0 because of the constraint G + Δ ≥ 0, and hence there is no need to introduce an additional variable for Δ_{i,j} when G_{i,j} = 0 in order to minimize |Δ_{i,j}|. Let us denote the support of G as supp(G) and its complement as supp(G)^c (that is, the set of indices corresponding to zero entries of G), and let us decompose Δ using three nonnegative terms: Δ = Δ^0 + (Δ^+ − Δ^−), where supp(Δ^0) ⊂ Ω^0 := Ω ∩ supp(G)^c, and supp(Δ^+), supp(Δ^−) ⊂ Ω^± := Ω ∩ supp(G). Note that Ω^0 and Ω^± form a partition of Ω: Ω^± ∪ Ω^0 = Ω and Ω^± ∩ Ω^0 = ∅. Hence Δ^0_{i,j} ≥ 0 can be non-zero when (i,j) ∈ Ω and G_{i,j} = 0, while Δ^+_{i,j}, Δ^−_{i,j} ≥ 0 can be non-zero when (i,j) ∈ Ω and G_{i,j} > 0. We denote the number of non-zero entries of Δ^0 by p^0 = |Ω ∩ supp(G)^c| ≤ |Ω|, and the number of non-zero entries in Δ^+ and Δ^− by p^± = |Ω ∩ supp(G)|. With these new variables, the TSDP (<ref>) can be reformulated as follows: min over Δ^0, Δ^+, Δ^− ∈ ℝ^{n×n}_+ of Σ_{i,j} (Δ^0_{i,j} + Δ^+_{i,j} + Δ^−_{i,j}) such that (Δ^0 + Δ^+ − Δ^−)𝟏_n = 𝟎_n, μ̂^⊤(Δ^0 + Δ^+ − Δ^−) = 𝐳 := μ̂^⊤(I−G), Δ^0 ≤ 𝟏_{n,n}, Δ^− ≤ G, Δ^+ ≤ 𝟏_{n,n}−G, supp(Δ^0) ⊂ Ω^0, supp(Δ^+), supp(Δ^−) ⊂ Ω^±. The constraint Δ^− ≤ G ensures that G + Δ is nonnegative. Since G + Δ is also stochastic, as Δ𝟏_n = 𝟎, the constraints Δ^0 ≤ 𝟏_{n,n} and Δ^+ ≤ 𝟏_{n,n}−G are redundant. We have not observed much impact on the solver from removing or keeping these upper bound constraints.
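To make the reformulation concrete, here is a minimal sketch using SciPy's HiGHS-based linprog rather than the Gurobi/MATLAB implementation used in the paper; the function name solve_l1_tsdp and the encoding of Ω as a list of index pairs are our own illustrative choices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.optimize import linprog

def solve_l1_tsdp(G, mu_hat, Omega):
    """l1-TSDP with support constraint, via the split Delta = D0 + D+ - D-.
    Omega: list of (i, j) index pairs allowed to change."""
    n = G.shape[0]
    idx0 = [(i, j) for (i, j) in Omega if G[i, j] == 0]   # Omega^0
    idxpm = [(i, j) for (i, j) in Omega if G[i, j] > 0]   # Omega^{+-}
    p0, ppm = len(idx0), len(idxpm)
    p = p0 + 2 * ppm
    # 2n equalities: n row sums (Delta 1 = 0) and n weighted column
    # sums (mu_hat^T Delta = z); two non-zeros per column of A.
    rows, cols, vals = [], [], []
    for k, (i, j) in enumerate(idx0 + idxpm + idxpm):
        sign = -1.0 if k >= p0 + ppm else 1.0             # D- enters negatively
        rows += [i, n + j]
        cols += [k, k]
        vals += [sign, sign * mu_hat[i]]
    A = sp.csc_matrix((vals, (rows, cols)), shape=(2 * n, p))
    z = mu_hat @ (np.eye(n) - G)
    b = np.concatenate([np.zeros(n), z])
    # D0, D+ >= 0 (upper bounds are redundant); 0 <= D- <= G on Omega^{+-}
    bounds = [(0, None)] * (p0 + ppm) + [(0, G[i, j]) for (i, j) in idxpm]
    res = linprog(np.ones(p), A_eq=A, b_eq=b, bounds=bounds, method="highs")
    Delta = np.zeros((n, n))
    for k, (i, j) in enumerate(idx0):
        Delta[i, j] += res.x[k]
    for k, (i, j) in enumerate(idxpm):
        Delta[i, j] += res.x[p0 + k] - res.x[p0 + ppm + k]
    return Delta, res
```

For instance, the always-feasible choice Ω = supp(G+I) corresponds to Omega = [(i, j) for i in range(n) for j in range(n) if G[i, j] > 0 or i == j].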
Note that we are minimizing the sum of the entries of Δ^+ and Δ^−, and hence at optimality the supports of Δ^+ and Δ^− are always disjoint. In any case, (<ref>) has only 2n equalities and fewer than 2|Ω| bounded variables. The fact that we have only 2n equalities allows us to solve this problem in roughly O(n³) operations via the dual. This is a significant improvement. Moreover, each inequality has at most n variables, and hence this is a very sparse linear optimization problem, even sparser when the support Ω is sparse. This allows commercial solvers, such as Gurobi, to solve such problems much faster than the worst case of O(n³) operations; for example, we will be able to solve problems with n = 10^5, with Ω having a few non-zero entries per row, in a few minutes; see section <ref> for experiments.

§.§ Sparsity of ℓ_1 norm minimization

The optimal solution of (<ref>) has some degree of sparsity. In this section, we make this statement precise by providing an explicit upper bound on the number of non-zero entries in an optimal solution; see Theorem <ref> below. Let us rewrite (<ref>) in a more standard form, by vectorizing the non-zero entries of Δ^0 in 𝐱^0 ∈ ℝ^{p^0}, of Δ^+ in 𝐱^+ ∈ ℝ^{p^±}, and of Δ^− in 𝐱^− ∈ ℝ^{p^±}, and letting 𝐱 = (𝐱^0, 𝐱^+, 𝐱^−) ∈ ℝ^p: min_𝐱 𝟏_p^⊤𝐱 such that A𝐱 = 𝐛, 𝐱 ≥ 0, 𝐱^− ≤ 𝐠, where p = p^0 + 2p^±, A ∈ ℝ^{2n×p} contains the coefficients of the 2n linear equalities and has two non-zero entries per column (since each variable appears in exactly two constraints: Δ_{i,j} appears in Δ_{i,:}𝟏_n = 0 and in μ̂^⊤Δ_{:,j} = z_j), 𝐛 = [𝟎_n; 𝐳] is the right-hand side, and 𝐠 ∈ ℝ^{p^±} contains the non-zero entries of G in the support Ω.

If (<ref>) is feasible, there exists an optimal solution 𝐱^* of (<ref>) such that |supp(𝐱^*)| ≤ min(|Ω|, |supp(G)∩Ω| + 2n). Equivalently, if the ℓ_1-TSDP with support constraint (<ref>) is feasible, there exists an optimal solution Δ^* of (<ref>) such that |supp(Δ^*)| ≤ min(|Ω|, |supp(G)∩Ω| + 2n).

The feasible set of (<ref>), 𝒫 = {𝐱 = (𝐱^0,𝐱^+,𝐱^−) | A𝐱 = 𝐛, 𝐱 ≥ 0, 𝐱^− ≤ 𝐠}, is bounded, and hence is a polytope. Therefore, by the fundamental theorem of linear optimization, if 𝒫 is non-empty, that is, (<ref>) is feasible, there exists an optimal solution which is a vertex of 𝒫, say 𝐱^*. Let us first show that |supp(𝐱^*)| ≤ |Ω|. For any optimal solution 𝐱 = (𝐱^0,𝐱^+,𝐱^−), we must have 𝐱^+_i = 0 or 𝐱^−_i = 0 for all i, otherwise we could strictly decrease the objective. This is because the entries of 𝐱^+ − 𝐱^− are equal to the entries of Δ in supp(G)∩Ω. This means that |supp([𝐱^+, 𝐱^−])| ≤ p^± = |supp(G)∩Ω|, and hence |supp(𝐱)| ≤ |Ω|. It remains to prove that |supp(𝐱^*)| ≤ |supp(G)∩Ω| + 2n. Recall that any vertex of a p-dimensional polytope must have p linearly independent active constraints. Since there are (only) 2n equalities in (<ref>), there must be p−2n of the bound constraints active at any vertex of 𝒫. Let 𝐱 = (𝐱^0,𝐱^+,𝐱^−) be a vertex of 𝒫. Each entry in 𝐱^+ ≥ 0 and 0 ≤ 𝐱^− ≤ 𝐠 can touch at most one bound constraint (the entries in 𝐱^− cannot touch two of them, since by definition 𝐠 > 0). This is possible if x^+_i = 0 = x^−_i, or if x^+_i = 0 and x^−_i = g_i > 0. This means that 𝐱^0 must touch at least p − 2n − 2p^± = p^0 − 2n of the lower bound constraints 𝐱^0 ≥ 0, that is, 𝐱^0 must have at least p^0 − 2n entries equal to zero. This implies that |supp(𝐱^0)| ≤ 2n and gives the result, since |supp([𝐱^+, 𝐱^−])| ≤ |supp(G)∩Ω| at optimality; see above. Theorem <ref> implies that if G or Ω is sparse, then there is an optimal solution of (<ref>) which is sparse.
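As a quick empirical check of the bound, reusing solve_l1_tsdp from the sketch above (the 1e-9 threshold for counting nonzeros is an arbitrary choice):

```python
# Sparsity check for the LP solution (Theorem <ref>):
# |supp(Delta*)| <= min(|Omega|, |supp(G) ∩ Omega| + 2n)
n = G.shape[0]
Omega = [(i, j) for i in range(n) for j in range(n)
         if G[i, j] > 0 or i == j]              # Omega = supp(G + I)
Delta, res = solve_l1_tsdp(G, mu_hat, Omega)
nnz_Delta = np.count_nonzero(np.abs(Delta) > 1e-9)
bound = min(len(Omega), sum(G[i, j] > 0 for (i, j) in Omega) + 2 * n)
print(nnz_Delta, "<=", bound)
```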
To attain the upper bound |supp(G)∩Ω| + 2n, Δ must be negative in all entries of supp(G), as it requires Δ^− = G. This is unlikely to happen at optimality, since we are minimizing the sum of these variables. In fact, in all cases we have tested, optimal solutions are significantly sparser; see the numerical experiments in section <ref>.

§.§ Efficient algorithms via a column-generation approach

The number of columns of A in (<ref>) is O(|Ω|). If Ω is sparse, we can formulate the problem directly and solve it very efficiently via commercial solvers; see section <ref> for numerical experiments. However, if Ω is dense, for example Ω = {(i,j) | 1 ≤ i,j ≤ n}, constructing the O(n²) columns of the constraint matrix A is time consuming, even if A is sparse (2 non-zeros per column). This is particularly wasteful if G is sparse, since we know that only a few entries of the optimal solution Δ will be non-zero, namely at most nnz(G) + 2n (Theorem <ref>). This calls for a column-generation approach: first solve the problem for a well-chosen sparse Ω, then progressively add entries to Ω that reduce the objective function.

Choice of initial Ω. To the best of our knowledge, it is an open and non-trivial problem to decide for which supports Ω the TSDP (<ref>) admits a feasible solution, that is, for which supports Ω one can change the stationary distribution from μ to μ̂; this question is related to the feasible set studied in <cit.>. Luckily, we know that Ω = supp(G+I) is always feasible (Theorem <ref>), and hence we initialize Ω with the support of G+I. For G dense, it is unclear how to initialize Ω with a sparse matrix that makes the problem feasible, and this is probably not as useful, since the optimal solution is then not necessarily very sparse.

Choice of entries to add to Ω. To add indices to Ω that reduce the objective, we resort to the well-known column-generation approach in linear optimization; see <cit.> and the references therein. Let us recall the basic idea of column generation. We are trying to solve a linear optimization problem of the form min_{𝐱=(𝐱_1,𝐱_2)∈ℝ^{p_1×p_2}} 𝐜_1^⊤𝐱_1 + 𝐜_2^⊤𝐱_2 such that [A_1, A_2][𝐱_1; 𝐱_2] = 𝐛, 𝐱 ≥ 0, where p = p_1+p_2, A_i ∈ ℝ^{m×p_i}, 𝐛 ∈ ℝ^m, and 𝐜_i ∈ ℝ^{p_i} for i = 1,2. The dual is given by max_{𝐲∈ℝ^m} 𝐛^⊤𝐲 such that A_1^⊤𝐲 ≤ 𝐜_1 and A_2^⊤𝐲 ≤ 𝐜_2. Assume we solve the primal without the variables 𝐱_2, so that the constraint A_2^⊤𝐲 ≤ 𝐜_2 disappears from the dual, and obtain (𝐱_1^*, 𝐲^*) as an optimal primal-dual solution. If A_2^⊤𝐲^* ≤ 𝐜_2, that is, the dual solution 𝐲^* remains feasible when the additional variables 𝐱_2 are taken into account, then (𝐱_1^*, 0) is an optimal solution to the original problem, by strong duality. If this is not the case, that is, A_2(:,i)^⊤𝐲^* > c_2(i) for some i, then allowing the variable x_2(i) to enter the solution allows us to reduce the objective (this is the rationale of a pivoting step in the simplex algorithm). The quantities 𝐜_2 − A_2^⊤𝐲^* are the so-called reduced costs of the variables 𝐱_2, which need to be nonnegative at optimality.

Let us apply this idea to our problem. Let 𝐲^0 ∈ ℝ^n be the dual variables associated with the constraints Δ𝟏_n = 𝟎, and 𝐲^μ ∈ ℝ^n those associated with the constraints Δ^⊤μ̂ = 𝐳. Since, at the first step, Ω = supp(G+I), we add entries with index (i,j) ∉ Ω such that G_{i,j} = 0. Adding (i,j) to Ω, that is, adding a variable in the primal corresponding to Δ^0_{i,j}, adds a constraint in the dual, namely y^0_i + μ̂_i y^μ_j ≤ 1. In fact, in the primal, the coefficient of the variable Δ^0_{i,j} is non-zero only in two constraints: Δ_{i,:}𝟏_n = 0 and Δ_{:,j}^⊤μ̂ = z_j. If this dual constraint is violated, that is, y^0_i + μ̂_i y^μ_j > 1, the current primal solution is no longer optimal once Δ^0_{i,j} is allowed to be nonzero (the reduced costs are not nonnegative for the current basis). Hence adding variables (i,j) to Ω for which y^0_i + μ̂_i y^μ_j > 1 reduces the objective function value.
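Assuming the LP is solved with SciPy's HiGHS backend as in the earlier sketch, the reduced-cost test can be written as follows; note that the attribute res.eqlin.marginals and its sign convention should be verified against the solver's documentation, so this is only an illustration:

```python
# Reduced-cost check for column generation (a sketch; n, G, mu_hat, res
# come from the earlier snippets).
y = res.eqlin.marginals                    # duals of the 2n equalities
y0, ymu = y[:n], y[n:]                     # y^0 (row sums), y^mu (columns)
T = y0[:, None] + np.outer(mu_hat, ymu)    # T_ij = y0_i + mu_hat_i * ymu_j
cand = np.argwhere((T > 1 + 1e-9) & (G == 0))   # violated dual constraints
# adding these (i, j) to Omega and re-solving decreases the objective
```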
However, it is not possible for a generic solver to perform such a task without explicitly generating all constraints in the dual (and this takes time when Ω is dense); this is because the solver cannot guess the structure of our problem. To take advantage of these observations, we adopt the following strategy. We initialize Ω^0 = supp(G+I). We solve (<ref>), and then we add to Ω the |Ω^0| largest entries of the rank-two matrix 𝐲^0𝟏_n^⊤ + μ̂(𝐲^μ)^⊤ that are larger than 1. We provide the commercial solver, Gurobi, with the previous basis, for a clever initialization. We stop adding new variables when the decrease in the relative error between two iterations is smaller than some prescribed accuracy δ, that is, (‖Δ^{old}‖_1 − ‖Δ^{new}‖_1)/‖G‖_1 < δ, or when the column generation has converged, that is, 𝐲^0𝟏_n^⊤ + μ̂(𝐲^μ)^⊤ ≤ 𝟏_{n,n}. We use δ = 1% and δ = 0.01% in the numerical experiments. Algorithm <ref> summarizes our strategy, which is a batch column-generation approach. It is more effective in MATLAB, because adding only a single variable at each iteration would lead to many calls to the Gurobi solver and would be ineffective. Ideally, one should implement the strategy within Gurobi, in C, but this is out of the scope of this paper.

Computational cost and key improvement in the selection strategy. Solving (<ref>) requires solving linear optimization problems with 2n constraints. This requires roughly O(n³) operations in the worst case; see section <ref>. However, we will see that it is empirically much faster, as this linear program is very sparse, especially when G is; see section <ref> for numerical experiments. In terms of memory, the constraint matrix A ∈ ℝ^{2n×O(|Ω|)} actually requires only O(|Ω|) memory, as there are two non-zeros per column of A. Empirically, we will see that a few iterations of Algorithm <ref> (in most cases, fewer than 10) are enough to obtain good solutions. Hence |Ω| = O(nnz(G)), and the memory requirement is proportional to that of storing G.

For large n, we noticed that explicitly constructing the rank-two matrix T = 𝐲^0𝟏_n^⊤ + μ̂(𝐲^μ)^⊤ to extract the indices corresponding to the largest entries was a bottleneck of the algorithm (step <ref>): it requires O(n²) memory and O(n² log(n²)) operations (to sort the entries of T). This rank-two matrix has positive factors in each rank-one term, namely 𝟏_n and μ̂, and hence the largest entries of T depend directly on the largest entries of 𝐲^0, 𝐲^μ and μ̂. We have designed a simple heuristic, for large n, to tackle this problem: for a parameter m ≤ n/2, extract the m largest indices in 𝐲^0 and the m largest indices in μ̂ and put them in ℐ (possibly removing duplicates), and extract the 2m largest indices in 𝐲^μ and put them in 𝒥. Then perform an exhaustive search on the submatrix T(ℐ,𝒥), which requires O(m² log(m²)) operations. We used m = 100 in our implementation, and we use this heuristic as soon as n > 200. This means that, when n is large, we add at most m² = 40,000 entries to Ω at each step. Note that more sophisticated heuristics exist, e.g., <cit.> and <cit.>[We actually tried the heuristic from <cit.>, but it does not scale well when extracting a large number of indices, as it uses a dense n-by-p matrix, where p is the number of indices to extract; in our case, p is of the order of n, making this heuristic require O(n²) memory and operations, which is what we need to avoid.].
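A sketch of this top-m selection, with np.argpartition playing the role of a partial sort (the function name is ours):

```python
def topm_candidates(y0, ymu, mu_hat, m=100):
    """Heuristic selection of large entries of T = y0 1^T + mu_hat (y^mu)^T
    without forming T: restrict rows to the top-m indices of y0 and of
    mu_hat, and columns to the top-2m indices of y^mu, then search that
    submatrix exhaustively."""
    rows = np.union1d(np.argpartition(-y0, m)[:m],
                      np.argpartition(-mu_hat, m)[:m])   # index set I
    cols = np.argpartition(-ymu, 2 * m)[:2 * m]          # index set J
    T_sub = y0[rows][:, None] + np.outer(mu_hat[rows], ymu[cols])
    I, J = np.nonzero(T_sub > 1.0)                       # violated entries
    return [(rows[i], cols[j]) for i, j in zip(I, J)]
```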
However, our current heuristic performs extremely well on all the tested cases. For example, we tested it against the exhaustive search on the real data set moreno (with n = 2155; see section <ref>): Algorithm <ref> with δ = 0 (that is, when an optimal solution is sought) took 41 seconds with an exhaustive search in step <ref>, and only 5.6 seconds with our heuristic, to obtain the same optimal objective function value.

§ NUMERICAL EXPERIMENTS

We compare the following solutions:

* D: the closed-form solution Δ(α^*) described in Theorem <ref>. This solution is computed extremely fast, requiring O(nnz(G)) operations.
* S: the optimal solution of (<ref>) with Ω = supp(G+I). S stands for sparse.
* GS: the optimal solution of (<ref>) with Ω = {(i,j) | 1 ≤ i,j ≤ n}, computed by solving (<ref>) directly. GS stands for global solution.
* CG(δ): the solution of (<ref>) with Ω = {(i,j) | 1 ≤ i,j ≤ n}, computed with Algorithm <ref> with stopping criterion δ; we use δ ∈ {10^-2, 10^-4, 0}. CG stands for column generation. The case δ = 0 should generate a solution with the same objective as GS, but hopefully significantly faster when G is sparse. Note that choosing δ sufficiently large generates the solution S, since Algorithm <ref> is initialized with Ω = supp(G+I).

By construction, we know in advance that, in terms of objective function values, ‖D‖_1 ≥ ‖S‖_1 ≥ ‖CG(δ)‖_1 ≥ ‖CG(0)‖_1 = ‖GS‖_1. In fact, S is the optimal solution for Ω = supp(G+I), D is a feasible solution for that support, and CG(δ) is initialized with Ω = supp(G+I). It will be interesting to compare the gaps between these objectives, and the computational times. We do not compare with the algorithms in <cit.> because they are not competitive. For example, for the 3 real data sets they consider, their fastest approach able to compute a feasible solution in all cases (namely, the rank-1 steps heuristic, R1SH) takes 228, 50 and 75 seconds, respectively; see <cit.>. The solution we generate with CG(0) is computed faster (less than 6 seconds in all cases), is sparse, and is globally optimal; see Tables <ref>, <ref> and <ref>. Moreover, the solutions in <cit.> do not minimize the component-wise ℓ_1 norm, but matrix norms, and do not produce sparse solutions; in fact, they produce dense low-rank solutions. To cite their own words: "The applicability of our heuristics for sparse G is hindered. Being able to perturb only one row in a stochastic matrix, it is not hard to imagine that the number of reachable stochastic matrices is limited. In other words, finding a rank-1 perturbation towards a specific stationary distribution (the main focus of this paper) is often infeasible."

Quality measures. We report the following three quantities for each solution Δ:

* obj: the relative objective value ‖Δ‖_1/‖G‖_1,
* spars: the relative sparsity |supp(Δ)|/|supp(G+I)|, and
* time: the computational time in seconds.

Software and machine. All experiments are performed on a laptop with a 12th Gen Intel(R) Core(TM) i9-12900H 2.50 GHz processor and 32 GB of RAM, using MATLAB R2019b. The code is available from <https://gitlab.com/ngillis/TSDP>. To solve the linear optimization problems, we use the commercial solver Gurobi[Although it is a closed-source commercial software, see <https://www.gurobi.com/>, free use is possible for academic users.], version 10.00.
§.§ Synthetic data sets

In this section, we mostly focus on the run time of the proposed algorithms depending on the size and sparsity of the input matrix, and on the gap in the objective function between the different approaches. This allows us to estimate, empirically, their computational cost and distance to optimality. Taking inspiration from queuing matrices, where each node is connected only to its two neighbors, one on the left and one on the right, we generate irreducible matrices as follows: given a parameter k, each node is connected to its k neighbors on the right and to its k neighbors on the left. Each non-zero entry is randomly generated using the uniform distribution on [0,1], and G is then normalized to make it stochastic. Hence each row of G has 2k non-zero entries (except for the first k and the last k rows, which do not have enough left and right neighbors, respectively). Here is an example of such a matrix generated randomly with parameters n = 5 and k = 2 (with two digits of accuracy): G = [[0 0.32 0.68 0 0; 0.18 0 0.39 0.43 0; 0.26 0.32 0 0.30 0.12; 0 0.33 0.32 0 0.35; 0 0 0.28 0.72 0]]. We set the vector μ̂ = G^⊤𝟏_n/n [We also made experiments with μ̂ = 𝟏_n/n. This produces somewhat similar results, but the differences between the solutions CG(δ) were slightly less pronounced, and hence we chose to report the results for μ̂ = G^⊤𝟏_n/n.], that is, we initialize μ̂ with equal probability for all nodes and then apply one step of the power iteration. This makes μ̂ somewhat closer to μ. A minimal sketch of this generator is given below.
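The sketch is a NumPy re-implementation, not the MATLAB code used for the experiments; the function name and seed handling are ours:

```python
import numpy as np

def queuing_like_matrix(n, k, seed=0):
    """Random irreducible stochastic matrix where node i is connected to
    its k left and k right neighbors, with uniform weights, row-normalized."""
    rng = np.random.default_rng(seed)
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - k), min(n, i + k + 1)):
            if j != i:
                G[i, j] = rng.uniform()
        G[i, :] /= G[i, :].sum()       # make row i sum to one
    return G

n = 1000
G = queuing_like_matrix(n, k=2)
mu_hat = G.T @ np.full(n, 1.0 / n)     # mu_hat = G^T 1_n / n
```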
§.§.§ Comparison of the different solutions

We let n = 1000 and k ∈ {1,2,3,4,5,10,50,100,500}, and Table <ref> reports the relative objective values (obj), the average relative sparsity (spars) and the computational time for 10 randomly generated matrices. We observe the following.

* In terms of relative objective values, D performs rather poorly for small values of k. In particular, for k ≤ 10, its relative error is larger than 100%, while that of the globally optimal solution, GS, is below 11% in all cases. For k = 500, when G is dense, D actually provides a good solution, as predicted by the theory, because ‖μ̂−μ‖_2 is small (<ref>) (the optimal Δ has a very small norm, 0.03% of that of G). The solution S provides rather good solutions, especially when k ≥ 5. This shows that restricting the solution to the support of G+I allows us to recover reasonable solutions, compared to the global one. CG(10^-2) performs similarly to S, while CG(10^-4) provides solutions of similar quality to CG(0), which provides globally optimal solutions, as GS does.

* In terms of sparsity, the support of D coincides with that of G+I except for one row, as expected (see Lemma <ref>), explaining the 99.9% relative sparsity in all cases (since n = 1000). All the other solutions are significantly sparser than G+I, especially as k grows. This shows that minimizing the component-wise ℓ_1 norm promotes sparsity; see Theorem <ref>. For example, when G is dense (k = 500), the optimal solution has only 0.11% of non-zero entries. Even for k = 1, where G+I has 3 non-zeros per row (except the first and last rows, which have only 2), the optimal solution has less than 50% of these non-zeros, meaning 1.5 non-zeros per row on average for the optimal solution (which could potentially use all n² entries of Δ).

* In terms of computational time, computing D is extremely fast, as expected (the reason why the cases k = 1,2 are slightly slower is explained in Remark <ref>). Computing S is fast for sparse G, and becomes more expensive as k grows; see section <ref> for an extensive numerical experiment on this case. The computational cost of CG(δ) grows as δ decreases, as expected, but remains close to that of S (recall that S is computed at the first step of Algorithm <ref>, that is, S = CG(δ) for any δ sufficiently large). The reason is that the computational cost of each iteration of the column-generation approach (Algorithm <ref>) is proportional to that of computing S: the number of columns added is roughly the same at each step. Finally, CG(0) is significantly faster than GS, as expected, since GS does not leverage column generation and solves the full problem at once, which requires forming a large constraint matrix. Note that, when G is dense (k = 500), S, CG(δ) and GS coincide, since they all solve the full problem on supp(G+I), which is time consuming.

To compute Δ(α^*), the stationary distribution μ of G is needed. For large n and small k, the condition number involved in computing μ is very high; see <cit.> for a discussion of this issue. For example, for n = 1000 and k = 1, the eigs function of MATLAB runs into numerical problems and cannot return a feasible solution (it returns NaN for the largest eigenvalue, and μ with negative entries). In this case, we resort to 100 iterations of the power method, initialized with μ = 𝟏_n/n, for simplicity.

§.§.§ Scalability to compute S

Empirically, we have observed that the number of iterations of Algorithm <ref> is relatively small, especially when δ is large; see Figure <ref> for an experiment on real data sets. Moreover, we have observed that the computational cost of each iteration is similar, because we add roughly the same number of indices to the support Ω at each step. Hence, the cost of computing CG(δ) is the cost of computing S times the number of iterations, which is typically small; for example, for δ = 10^-2, it never exceeded 3 iterations. It therefore makes sense to analyze the computational cost of computing S, to get an idea of how Algorithm <ref> scales with n and k. Figure <ref> reports the total time to compute S for various values of k ∈ {1, 2, 4, 8, 12, 16, 32, 64, 128} and n ∈ {1000, 1669, 2783, 4642, 7743, 12916, 21545, 35939, 59949, 10^5, 2·10^5} with log-spaced values[We used logspace in MATLAB.]. To limit the computational time, we combine values of n and k only when nnz(G) ≈ 2nk ≤ 450,000. Above 450,000 non-zeros, the computational time exceeds 200 seconds. There is a dependence between the computational time and n that is somewhat linear in a log-log plot. The average slope for computational times larger than 1 second is 2.2, meaning that the time depends roughly quadratically on n, for k fixed. Similarly, the average slope in a log-log plot between the computational time and k is 1.98, hence the time also depends quadratically on k, for n fixed. This allows us to solve large problems relatively fast, e.g., n = 2·10^5 and k = 1 in about 150 seconds.
It turns out that, quite surprisingly, the bottleneck of our algorithm is not solving the large (sparse) linear optimization problems, but forming the constraint matrix A, of size[Recall that the number of variables in the linear programs is 2·nnz(G) for Δ^+ and Δ^−, plus n for Δ^0, since the diagonal entries of our synthetic data sets G are zero.] 2n × (2·nnz(G)+n), with two non-zeros per column of A. Constructing such a large sparse matrix does not take time linear in the number of non-zeros (there are about 4·nnz(G)) but longer, due to memory accesses, among other things. Figure <ref> reports the time to solve the linear optimization problems to obtain S. Compared to Figure <ref>, we see that the time to solve the linear optimization problems is an order of magnitude smaller than the time to formulate the problem. Possibly the construction of the sparse constraint matrix could be accelerated by using a language other than MATLAB, e.g., C or C++. In summary, the proposed approach to solve the ℓ_1-TSDP with support constraint requires O(nnz(G)) memory and, empirically, about O(nnz(G)²) time. Figure <ref> shows the time required to solve (<ref>) with Ω = supp(G+I) as a function of nnz(G) ≈ 2nk for the synthetic data sets (these are the same values as in Figure <ref> but presented differently), which follows a quadratic trend, as noted above, namely time ≈ ζ·nnz(G)² with ζ = 9·10^-10.

§.§ Real data sets

In this section, we use the same three sparse data sets as in <cit.>; see Table <ref>. They represent the following:

* email: the email-conversation network of university employees (University of Rovira i Virgili).
* road: the road network between the largest cities in Europe.
* moreno: a high-school network of student relationships, from a 1994/1995 survey.

See <cit.> and the references therein for more details. We use μ̂ = (1−ϵ)μ + ϵ𝟏_n/n for ϵ ∈ {0.01, 0.1, 0.5}. Table <ref> reports the results for ϵ = 0.01, Table <ref> for ϵ = 0.1, and Table <ref> for ϵ = 0.5. We observe the following:

* In terms of objective function values, D is significantly worse than GS, especially when ϵ is large: the objective function value of D gets smaller as ϵ gets smaller, that is, as ‖μ̂−μ‖_2 gets smaller, as expected; see (<ref>). S provides a solution with objective relatively close to that of GS, except for the moreno data set, where it is significantly worse (e.g., for ϵ = 0.1, the obj of S is 18.23% while it is 5.60% for GS). All CG(δ) variants perform similarly, with slight improvements as δ decreases. As expected, the objectives of CG(0) and GS coincide.

* In terms of computational cost, CG(0) outperforms GS, as for the synthetic data sets, since G is sparse. However, CG(0) sometimes requires more time than CG(10^-2) and CG(10^-4) for a negligible improvement in the objective (e.g., for the moreno data set with ϵ = 0.01, from 3.98 seconds to 6.41 seconds to reduce the relative objective from 0.79% to 0.78%). This is because CG(0) can only stop when it has found an optimal solution. The solution S is computed extremely fast (less than 0.25 seconds in all cases) but sometimes produces high objective function values. Hence, in practice, using CG with a value of δ ∈ [10^-4, 10^-2] seems to be a good compromise between the quality of the solution and the run time. Figure <ref> shows the evolution of the objective function values across the iterations of Algorithm <ref> (which computes CG). This confirms that a few iterations of Algorithm <ref> provide good solutions.
* In terms of sparsity, D has essentially the same sparsity as G+I, except for one row; this is expected, see Lemma <ref>. The solution S is significantly sparser than G+I, by a factor between 3 and 10. The global solutions (GS and CG(0)) are surprisingly sparse, slightly sparser than S. In quite a few cases, the globally optimal solutions generated by GS and CG(0) do not have the same sparsity, meaning that the solution of (<ref>) is not unique.

§ CONCLUDING REMARKS

In this paper, we proposed several algorithms for assigning a target stationary distribution μ̂ to a perturbed stochastic matrix Ĝ = G+Δ, with a constraint on the support of Δ. We first analyzed the special case where Δ := Diag(α)(I_n−G), whose support is restricted to the union of the supports of G and the identity, which implies that it does not destroy the sparsity of the original matrix G. We proved several properties of this solution: its optimality for that class of perturbations, its sparsity, when it is of rank one, and when its norm is minimized depending on the ordering of μ compared to μ̂. Unfortunately, numerical experiments show that this solution is in most cases quite far from globally optimal, because the feasible set of perturbations is too constrained, while not being very sparse, as its support essentially coincides with that of G+I. We then proposed an effective linear optimization formulation of the problem when minimizing the component-wise ℓ_1 norm of Δ, which promotes sparse solutions, as proved in Theorem <ref>. To solve this linear optimization problem efficiently, we designed a dedicated column-generation approach, Algorithm <ref>, which can be stopped before global optimality to improve the trade-off between solution quality and run time. Algorithm <ref> allows us to solve large sparse problems, with and without support constraints and up to global optimality, extremely fast: for sparse matrices up to size n = 200,000 in a few minutes. This is because, empirically, the main computational cost of the method is to construct a large sparse matrix of dimension 2n × O(nnz(G)) with two non-zeros per column.

A limitation of our column-generation approach is that it relies on the support of G+I to provide a first feasible solution. If G is dense, this makes the algorithm scale less well. However, even for dense matrices, the optimal solutions are (often) sparse; see, e.g., the last rows of Table <ref>. Hence it would be interesting to address the following problem: given G and μ̂, provide a sparse set Ω such that the TSDP with support constraint (<ref>) is feasible. This would not only allow us to initialize Algorithm <ref> more efficiently, but also provide sparse solutions to the TSDP. It could be useful even when G is sparse, by initializing Ω with a support sparser than that of G+I, leading to computational gains. We conjecture that Ω generically needs at least 2n elements: one per column to satisfy μ̂^⊤Δ = μ̂^⊤(I−G), and 2 per row to satisfy Δ𝟏_n = 𝟎.

§ APPENDIX A

Let 0 < a_− ≤ a_+ and 0 < b_− ≤ b_+. Then a_−/b_+ ≤ min(a_−/b_−, a_+/b_+) ≤ max(a_−/b_−, a_+/b_+) ≤ a_+/b_−, which implies that [min(a_−/b_−, a_+/b_+), max(a_−/b_−, a_+/b_+)] ⊂ [a_−/b_+, a_+/b_−]. This inclusion is strict if and only if a_− < a_+ and b_− < b_+.

It follows from the assumptions that a_−b_− ≤ a_−b_+ and a_+b_− ≤ a_+b_+. Dividing all quantities by the positive product b_−b_+ yields a_−/b_+ ≤ a_−/b_−, a_+/b_+ ≤ a_+/b_−, from which the inclusion result follows.
It is easy to see that if either a_− = a_+ or b_− = b_+, then both intervals are equal, and that if a_− < a_+ and b_− < b_+, then the inclusion of the intervals is strict.

Let the elements of the vectors μ, μ̂ and 𝐫 := μ./μ̂ be positive, and assume those of μ̂ are ordered in a non-decreasing way, that is, 0 < μ̂_1 ≤ μ̂_2 ≤ … ≤ μ̂_n. Let the elements of the permuted vector μ̃ := P̃μ also be ordered in a non-decreasing way, that is, 0 < μ̃_1 ≤ μ̃_2 ≤ … ≤ μ̃_n. Then [min_i μ̃_i/μ̂_i, max_i μ̃_i/μ̂_i] ⊂ [min_i μ_i/μ̂_i, max_i μ_i/μ̂_i].

The permutation P̃ that reorders the elements of the vector μ to μ̃ := P̃μ can be factored into a sequence of reorderings of just two elements that are not ordered in a non-decreasing way. At each such step, the interval (<ref>) in which the new ratios lie can only decrease, because of Lemma <ref>.
RDGCL: Reaction-Diffusion Graph Contrastive Learning for Recommendation

Yonsei University, Seoul, South Korea. Both first authors contributed equally to this research; Noseong Park is the corresponding author.

================================================================================

Contrastive learning (CL) has emerged as a promising technique for improving recommender systems, addressing the challenge of data sparsity by leveraging self-supervised signals from raw data. The integration of CL with graph convolutional network (GCN)-based collaborative filtering (CF) has been explored in recommender systems. However, current CL-based recommendation models rely heavily on low-pass filters and graph augmentations. In this paper, we propose a novel CL method for recommender systems called the reaction-diffusion graph contrastive learning model (RDGCL). We design our own GCN for CF based on both the diffusion equation, i.e., a low-pass filter, and the reaction equation, i.e., a high-pass filter. Our proposed CL-based training occurs between reaction-based and diffusion-based embeddings, so there is no need for graph augmentations. Experimental evaluation on 6 benchmark datasets demonstrates that our proposed method outperforms state-of-the-art CL-based recommendation models. By enhancing recommendation accuracy and diversity, our method advances CL for recommender systems.

§ INTRODUCTION

Contrastive learning (CL) is attracting much attention and is being actively researched in the field of machine learning <cit.>. CL enhances the user/item embedding process with a representation learning principle that increases the similarity between positive pairs and maximizes the dissimilarity between negative pairs. CL has achieved many successes in a variety of domains, including computer vision <cit.>, natural language processing <cit.>, and graph data <cit.>. In the field of recommender systems, recent collaborative filtering (CF) methods are mostly based on it <cit.>.

Integrating CL with graph convolutional networks (GCNs) has great potential for solving the data sparsity problem in recommender systems <cit.>. GCN-based CF methods excel at capturing complex dependencies and interactions among entities in graph-structured data, making them suitable for modeling user-item interactions <cit.>. However, the notorious data sparsity problem hinders this approach, because most users interact with only a few items, and most items receive only a few interactions. By integrating CL into GCN-based CF, one can expose a model to more diverse training environments. While using only the sparse interactions quickly leads to overfitting, this enhanced training mechanism greatly stabilizes the overall training process.

CL methods for existing recommender systems rely heavily on graph augmentation techniques to generate positive and negative pairs. These graph augmentation techniques involve perturbing graph structures (e.g., stochastic edge/node dropouts) or adding noises to the node embeddings. As in Table <ref>, SGL <cit.> perturbs graph structures and then maximizes the representations' consistency under different views. SimGCL <cit.>, which argues that it is not necessary to augment graph structures, injects uniform noises into embeddings to augment the node representations.
It then learns node representations (embeddings) by maximizing the consistency between different graph augmentations. SimGCL has the disadvantage that one needs to manually adjust the noise magnitude. LightGCL <cit.> reconstructs graph structures through the singular value decomposition (SVD), but additional computational costs are incurred for the SVD.

In general, the existing paradigm of CL-based CF has limitations. First, graph augmentation brings noise and redundancy, which can degrade the quality of the learned node representations. Second, graph structure augmentations may not generate sufficient diversity and contrast among node representations (cf. Fig. <ref>). Third, these existing CL-based CF methods are all based on GCNs using only low-pass graph filters, e.g., LightGCN, and overlook the importance of high-pass filters <cit.>. Using only low-pass filter-based GCNs limits the learned node representations due to the notorious oversmoothing problem, i.e., node representations become similar to each other (cf. Sec. <ref>).

To address these limitations, the key design points of our proposed method are twofold: i) a new GCN-based network is proposed for CF, and ii) a new CL method for it is designed. Our method, combining the two contributions, is called reaction-diffusion graph contrastive learning (RDGCL), since those design points are greatly inspired by the reaction-diffusion equation. One can consider that the diffusion equation, in the context of GCNs, makes neighboring nodes' embeddings similar, while the reaction equation makes them dissimilar; another way of interpreting the equation is that the diffusion (resp. reaction) equation describes attractive (resp. repulsive) forces <cit.>. From the perspective of graph signal processing, the diffusion (resp. reaction) equation corresponds to the low-pass (resp. high-pass) filter. We utilize the reaction-diffusion equation in the following ways in RDGCL:

* We design our own GCN for CF based on both the diffusion, i.e., low-pass filter, and the reaction, i.e., high-pass filter, equations, whereas existing CL and GCN-based CF methods consider only the low-pass filter.
* Our proposed CL-based training occurs between our network's diffusion and reaction-based embeddings (cf. Fig. <ref>).

In addition, RDGCL differs from other CL-based CF methods in that it has a single pass. For instance, LightGCL has two GCN instances, one for the main CF task and the other for the augmented graph view, which we call two passes. Since RDGCL has both the diffusion and the reaction layers internally, we can perform the CL training between them. This is an efficient design choice of our method. This paper presents a comprehensive set of evaluations with 6 benchmark datasets and 13 baselines. Our experimental results demonstrate the superiority of RDGCL in terms of recommendation accuracy, coverage, and novelty. As shown in Fig. <ref>, RDGCL outperforms existing CL-based CF methods by large margins, by accurately recalling more diverse items for recommendation.
The main contributions of this paper are summarized as follows:

* We propose a novel approach called the reaction-diffusion graph contrastive learning (RDGCL) method for collaborative filtering, which uses both the diffusion equation, for low-pass filtering, and the reaction equation, for high-pass filtering, in its neural network design and its CL training method.
* To our knowledge, RDGCL is the first to adopt the reaction-diffusion equation for CL-based collaborative filtering.
* RDGCL outperforms existing CF methods on 6 benchmark datasets.
* We improve performance with the most balanced model in terms of accuracy and diversity metrics (e.g., coverage and novelty).
* For reproducibility, our codes and data are available in the supplementary material.

§ PRELIMINARIES & RELATED WORK

§.§ Graph Filters and GCN-based CFs

Let 𝐑 ∈ {0,1}^{|𝒰|×|𝒱|}, where 𝒰 is a set of users and 𝒱 is a set of items, be an interaction matrix. 𝐑_{u,v} is 1 iff an interaction (u,v) is observed in the data, and 0 otherwise. Let 𝐀 ∈ ℝ^{N×N} be the adjacency matrix, where N = |𝒰|+|𝒱| is the number of nodes. 𝐋 is the Laplacian matrix of the graph, defined as 𝐋 = 𝐃 − 𝐀 ∈ ℝ^{N×N}, where 𝐃 is the diagonal degree matrix. The symmetric normalized adjacency matrix is defined as 𝐀̃ = 𝐃̅^{−1/2}𝐀̅𝐃̅^{−1/2}, where 𝐀̅ = 𝐀+𝐈 and 𝐃̅ = 𝐃+𝐈. The symmetric normalized Laplacian matrix is defined in a similar fashion: 𝐋̃ = 𝐈 − 𝐀̃.

The operation of multiplying a graph signal 𝐱 by a Laplacian matrix, 𝐋̃𝐱, can be understood as a filter that modifies the magnitude of the components of 𝐱 in the frequency domain <cit.>. Each eigenvector 𝐯_i of the Laplacian matrix aligns with a group of interconnected nodes in the graph. The Laplacian filter enhances signal components aligned with basis functions associated with higher eigenvalues γ_i ∈ (1,2), while reducing those aligned with lower eigenvalues γ_i ∈ [0,1]. Specifically, for clusters of nodes strongly aligned with 𝐯_i and γ_i > 1, the projection γ_i𝐯_i𝐯_i^⊤𝐱 amplifies the signal within the cluster, intensifying the variations among nodes in that cluster. In contrast, for larger clusters aligned with 𝐯_i and γ_i < 1, the projection suppresses the signal within the cluster, leading to reduced differences among nodes in that cluster. Consequently, Laplacian matrices can be thought of as high-pass filters that emphasize differences in node features <cit.>. In contrast, the normalized adjacency matrix functions as a low-pass filter, diminishing non-smooth signal components <cit.>. This is because all eigenvalues of adjacency matrices are at most 1, i.e., γ_i ∈ (−1,1].

While most GCN-based collaborative filtering (CF) approaches are constrained to employing low-pass filters, a prominent model in the field, LightGCN <cit.>, has emerged. This model employs a linear GCN, functioning as a low-pass filter to enhance the smoothness of node representations, thereby becoming a standard choice for GCN-based CF. Some studies propose recommendation models using high-pass filters <cit.>, but no studies use them to generate views in contrastive learning for recommendation. Therefore, our key idea is to apply a high-pass graph filter in the CL framework. A minimal sketch of the two filters is given below.
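The sketch follows the standard construction above, not any particular baseline's code; the function name is ours:

```python
import numpy as np
import scipy.sparse as sp

def normalized_filters(R):
    """Low-pass A_tilde and high-pass L_tilde = I - A_tilde from a binary
    user-item interaction matrix R (|U| x |V|), via the bipartite graph."""
    n_users, n_items = R.shape
    N = n_users + n_items
    A = sp.bmat([[None, R], [R.T, None]], format="csr")  # bipartite adjacency
    A_bar = A + sp.identity(N)                           # self-loops: A + I
    d = np.asarray(A_bar.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    A_tilde = D_inv_sqrt @ A_bar @ D_inv_sqrt            # low-pass filter
    L_tilde = sp.identity(N) - A_tilde                   # high-pass filter
    return A_tilde, L_tilde
```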
§.§ Contrastive Learning for Recommendation

Deep learning-based recommender systems have shown remarkable performance in recent years. However, they suffer from the data sparsity and cold-start problems, since they heavily rely on labels, i.e., positive user-item interactions <cit.>. To address these problems, self-supervised recommendation methods have been proposed to extract useful information from unlabeled interactions <cit.>. In particular, CL-based CF methods, which augment different views and contrast them to align node representations, show promising outcomes.

SGL <cit.> first applied CL to graph-based recommendation, utilizing LightGCN <cit.> as its backbone encoder. It introduces three operators to generate augmented views: node dropouts, edge dropouts, and random walks. By contrasting these augmented views, it improves the recommendation accuracy, especially for long-tail items, and the robustness against interaction noises. For CL, InfoNCE <cit.> is defined as follows:

ℒ_CL = ∑_{i∈ℬ} −log( exp(sim(𝐞_i′, 𝐞_i″)/τ) / ∑_{j∈ℬ} exp(sim(𝐞_i′, 𝐞_j″)/τ) ),

where i and j are a user and an item in a mini-batch ℬ, respectively, sim(·,·) is the cosine similarity, τ is the temperature, and 𝐞′, 𝐞″ are augmented node representations. The CL loss increases the alignment between the node representations 𝐞_i′ and 𝐞_i″, viewing the representations of the same node i as a positive pair. Simultaneously, it minimizes the alignment between the node representations 𝐞_i′ and 𝐞_j″, viewing the representations of the different nodes i and j as negative pairs.

SimGCL <cit.> simplifies the graph augmentation process for its CL by perturbing node representations with random noises. XSimGCL <cit.> replaces the final-layer CL of SimGCL with a cross-layer CL approach; our RDGCL also follows this cross-layer CL approach. It utilizes only one GCN-based encoder and contrasts the embeddings of different layers, and this cross-layer CL reduces the computational complexity since it has only one neural network. LightGCL <cit.> proposes a singular value decomposition (SVD)-based graph augmentation strategy to effectively distill global collaborative signals. Specifically, SVD is first performed on the adjacency matrix. Then, the list of singular values is truncated to retain the largest values, i.e., an ideal low-pass filter, and the truncated matrix is used to purify the adjacency matrix. As shown in Fig. <ref>, existing CL-based recommender systems are limited to low-pass filters since i) their backbones are mostly LightGCN and ii) they augment views with low-pass filters.

§.§ Reaction-Diffusion Equations

Reaction-diffusion equations are partial differential equations that describe how the concentration of substances distributed in space changes under the influence of two processes: local chemical reactions and diffusion <cit.>. In general, a reaction-diffusion equation can be written as ∂u/∂t = ∇²u + R(u), where u(x,t) is the concentration of a substance at position x and time t, ∇² is the Laplace operator, and R(u) is the reaction term. There are different types of reaction terms, describing pattern formation phenomena in various biological <cit.> and chemical <cit.> systems, and in image processing <cit.>.

Reaction-diffusion equations on graphs can be discretized using finite difference methods. For example, using the explicit Euler scheme, we can approximate the reaction-diffusion equation in the context of graph signal processing as follows: u(t+Δt) = u(t) + Δt(−𝐋̃u(t) + R(u(t))), where Δt is the time step size and u(t) ∈ ℝ^N is the graph signal at time t. This equation can be interpreted as updating the graph signal by applying the diffusion term and the reaction term at each time step. A small sketch of one such Euler step is shown below.
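In the sketch, the logistic (Fisher-KPP-style) reaction R(u) = u(1−u) is only a placeholder example, not the reaction term used by RDGCL; L_tilde comes from the earlier snippet:

```python
def euler_rd_step(u, L_tilde, reaction, dt=0.1):
    """One explicit Euler step of a graph reaction-diffusion equation:
    u(t + dt) = u(t) + dt * (-L_tilde @ u(t) + R(u(t)))."""
    return u + dt * (-(L_tilde @ u) + reaction(u))

u = np.random.default_rng(0).uniform(size=L_tilde.shape[0])
for _ in range(10):
    u = euler_rd_step(u, L_tilde, lambda v: v * (1.0 - v))
```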
The diffusion term captures the tendency of the quantity to spread over the graph, while the reaction term accounts for the local interactions or transformations of the quantity at each node. We design the interaction of nodes in this reaction term using a high-pass filter.

§.§ Neural Ordinary Differential Equations

Neural ordinary differential equations (NODEs) <cit.> solve the initial value problem (IVP), which involves a Riemann integral, to calculate 𝐡(t_{i+1}) from 𝐡(t_i):

𝐡(t_{i+1}) = 𝐡(t_i) + ∫_{t_i}^{t_{i+1}} f(𝐡(t), t; θ_f) dt,

where the neural network parameterized by θ_f approximates the time-derivative of 𝐡, i.e., 𝐡̇ := d𝐡(t)/dt. We rely on various ODE solvers to solve the integral problem, from the explicit Euler method to the 4th-order Runge–Kutta (RK4) method. For instance, the Euler method is as follows: 𝐡(t+s) = 𝐡(t) + s·f(𝐡(t)), where s is a pre-configured step size. This update is identical to a residual connection when s = 1, and therefore NODEs are a continuous generalization of residual networks.

§ PROPOSED METHOD

We describe our RDGCL, which consists of a reaction-diffusion equation and a CL framework. We first review its overall architecture and then introduce the details.

§.§ Overall Architecture

In Fig. <ref>, we show the overall architecture of RDGCL. The initial embedding 𝐄(0) is fed into the reaction-diffusion graph (RDG) layer. We then have the embedding 𝐄(t) evolving over time t ∈ [0,T]. The embedding evolutionary process can be written as follows: 𝐄(T) = 𝐄(0) + ∫_0^T f(𝐄(t))dt, where 𝐄(t) ∈ ℝ^{N×D} is the node embedding matrix at time t, with D dimensions, and f(𝐄(t)) is the RDG layer, which outputs d𝐄(t)/dt.

As shown in Fig. <ref>, the iterative workflow of the RDG layer is as follows:

* The RDG layer applies a low-pass filter (i.e., a diffusion process) to 𝐄(t_i) to derive its low-pass filtered embedding 𝐁(t_i),
* It then applies a high-pass filter (i.e., a reaction process) to 𝐁(t_i) to derive 𝐄(t_{i+1}).

RDGCL uses 𝐄(T) for the recommendation task, while contrasting the two representations 𝐁^CL and 𝐒^CL, each of which is the sum of the low-pass or the high-pass information only. This approach is known as cross-layer (or single-pass) CL.

§.§ Reaction-Diffusion Graph Layer

Our reaction-diffusion graph (RDG) layer can be written as the following equation, which is solved with an ODE solver: f(𝐄(t)) := d𝐄(t)/dt = −𝐋̃𝐄(t) + αR(𝐄(t)), where R(·) is a reaction term and α is a reaction rate coefficient to (de-)emphasize the reaction term. The reaction term corresponds to a high-pass filter from the perspective of graph signal processing.

§.§.§ Diffusion Process as Low-pass Filter

Multiplying with the adjacency matrix (𝐀 or 𝐀̃) can be regarded as a low-pass filtering operation. Many GCNs can be generalized to the following diffusion process: 𝐁(t) = 𝐄(t) − 𝐋̃𝐄(t) = 𝐄(t) + (𝐀̃−𝐈)𝐄(t) = 𝐀̃𝐄(t).

§.§.§ Reaction Process as High-pass Filter

The reaction process is designed to act as a high-pass filter. Multiplying with the Laplacian matrix (𝐋 or 𝐋̃) can be regarded as a high-pass filter <cit.>. We apply it to 𝐁(t) as follows: R(𝐄(t)) := 𝐋̃𝐁(t) = 𝐋̃(𝐀̃𝐄(t)). We emphasize that the embedding produced by the reaction process is not independent of the one produced by the diffusion process, since we apply the reaction process to 𝐁(t); therefore, our contrastive learning, described later, addresses the potential sparsity issues in 𝐁(t) and R(𝐄(t)). A minimal sketch of this layer is given below.
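The sketch is a simplified PyTorch rendering of the RDG layer with the explicit Euler solver (a sketch of the layer described above, not the authors' released code; class and variable names are ours):

```python
import torch

class RDGLayer(torch.nn.Module):
    """RDG layer sketch: dE/dt = -L_tilde E + alpha * L_tilde(A_tilde E),
    integrated with K explicit Euler steps of size T / K. A_tilde and
    L_tilde are (sparse) torch tensors; hyperparameter names follow the text."""
    def __init__(self, A_tilde, L_tilde, alpha=0.5, K=2, T=2.0):
        super().__init__()
        self.A, self.L = A_tilde, L_tilde
        self.alpha, self.K, self.s = alpha, K, T / K

    def forward(self, E0):
        E, B_cl, S_cl = E0, E0.clone(), E0.clone()
        for _ in range(self.K):
            B = self.A @ E        # diffusion: low-pass filtered embedding
            R = self.L @ B        # reaction: high-pass filtered embedding
            B_cl, S_cl = B_cl + B, S_cl + R      # accumulate the CL views
            E = E + self.s * (-(self.L @ E) + self.alpha * R)  # Euler step
        return E, B_cl, S_cl      # E(T) and the views B^CL, S^CL
```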
(<ref>) along the time domain [0,T] as follows:𝐁^CL = 𝐄(0)+∑_i=1^K𝐁(t_i), 𝐒^CL = 𝐄(0)+∑_i=1^K R(𝐄(t_i)), where we solve Eq. (<ref>) via K steps with an ODE solver.After generating the diffusion and reaction views, we employ a contrastive objective that enforces the filtered representations of each node in the two views to agree with each other. We perform the CL training by directly contrasting the reaction's augmented view 𝐬^CL with the diffusion's view 𝐛^CL using the InfoNCE <cit.> loss: ℒ_CL = ∑_i ∈ℬ -logexp(sim(𝐛^CL_i,𝐬^CL_i)/τ)/∑_j ∈ℬexp(sim(𝐛^CL_i,𝐬^CL_j)/τ), where i, j are a user and an item in a sampled batch ℬ, and 𝐛^CL_i, 𝐬^CL_i, and 𝐬^CL_j are node representations from Eq. (<ref>). As shown in Eq. (<ref>), our joint learning objective is therefore as follows:ℒ =ℒ_BPR+λ_1 ·ℒ_CL+λ_2 ·‖Θ‖^2_2, which consists of the Bayesian personalized ranking (BPR) loss ℒ_BPR and the CL loss ℒ_CL. The hyperparameters λ_1 and λ_2 control the contributions of the CL loss and the regularization term. Θ denotes the embeddings to learn, i.e., Θ = 𝐄(0) in our framework.After minimizing the joint loss in Eq. (<ref>), we use the output of the RDG layer, i.e., 𝐄(T), as the final representation. The exact training method is described in Alg. <ref>.
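For concreteness, a minimal sketch of the cross-layer InfoNCE objective above follows; the row-aligned (B, D) view tensors and the in-batch negative scheme are assumptions about the implementation rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def cross_layer_infonce(b_cl, s_cl, tau=0.2):
    """InfoNCE between diffusion views b_cl and reaction views s_cl.

    b_cl, s_cl: (B, D) node representations for the users/items in a
                mini-batch; row i of both tensors belongs to the same node,
                so matching rows are positives and all other rows in the
                batch act as negatives.
    """
    b = F.normalize(b_cl, dim=1)
    s = F.normalize(s_cl, dim=1)
    logits = b @ s.T / tau                               # (B, B) cosine / tau
    labels = torch.arange(b.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)               # -log softmax of diagonal

# Usage inside the joint objective (lambda_1, lambda_2 as in the text):
# loss = loss_bpr + lambda_1 * cross_layer_infonce(B_CL, S_CL, tau) \
#        + lambda_2 * E0.pow(2).sum()
```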
§.§ Relationship to Existing ModelsComparison to CL-based methods RDGCL uses a single unified pass for the CL task and the CF task, while SimGCL and LightGCL use separate passes for the two tasks. This makes RDGCL more efficient and consistent in learning representations. XSimGCL is similar to RDGCL in that it is designed as a single pass using a cross-layer CL method, but it is still confined to low-pass filters only. RDGCL adopts the reaction-diffusion system to contrast information from different graph signals. In contrast, SimGCL and LightGCL perform CL training on their final embeddings only, overlooking the intermediate embeddings. This enables RDGCL to capture more diverse and complementary features from the graph structure. The main difference between RDGCL and these methods is that RDGCL uses the reaction-diffusion-based system to generate embeddings, instead of noise injection or edge/node dropping, and can thus capture more fine-grained information from different graph frequency domains — note that the diffusion (resp. reaction) system corresponds to the low-pass (resp. high-pass) graph filter. Comparison to differential equation-based methods LT-OCF is a continuous-time generalization of LightGCN. If the reaction term of RDGCL is removed and its CL is not used, RDGCL reduces to LT-OCF. BSPM is yet another CF method which uses the diffusion and the reaction systems separately. It first applies its diffusion process directly to the interaction matrix 𝐑 without learning embeddings and then separately applies the reaction process to the diffusion outcome — GF-CF also applies a graph signal processing method directly to 𝐑. Therefore, BSPM does not follow Eq. (<ref>), where the diffusion and the reaction processes happen at the same time.§.§ Model Complexity This section analyzes the time complexity of RDGCL and compares it with the baselines LightGCN, SGL, and SimGCL. We discuss the time complexity within a single batch. We define |𝐀| as the number of edges, |ℬ| as the batch size, M as the number of nodes in a batch, K as the number of layers (resp. ODE steps) in the baselines (resp. RDGCL), and ρ as the edge-keeping rate in SGL.Table <ref> summarizes the time complexity, which reveals the following findings: * LightGCN, SimGCL, and RDGCL do not need to perform graph augmentation and only require a time complexity of 𝒪(2|𝐀|) to construct the adjacency matrix. In contrast, SGL requires nearly three times this cost because it needs to perform the graph augmentation twice.* SGL requires three forward computations, on the original graph and the two subgraphs, in a graph convolution step, so its time cost is almost three times that of LightGCN. RDGCL requires only two matrix multiplication operations in Eq. (<ref>), reducing the computational cost per step.* For the CL loss computation, the complexity of RDGCL is the same as that of the other models, which is 𝒪(|ℬ|D + |ℬ|MD), where 𝒪(|ℬ|D) and 𝒪(|ℬ|MD) are for positive and negative views, respectively. For brevity, we write it as 𝒪(|ℬ|MD). § EXPERIMENTSIn this section, we describe our experimental environments and results. The following software and hardware environments were used for all experiments: Ubuntu 18.04 LTS, PyTorch 1.9.0, torchdiffeq 0.2.2, CUDA 11.3, an i9 CPU, and an RTX 3090. §.§ Experimental Environments §.§.§ Datasets and Baselines We evaluate our model and the baselines on 6 real-world benchmark datasets: Yelp, Gowalla, Amazon-Books, Amazon-Electronics, Amazon-CDs, and Tmall <cit.>. We summarize the dataset statistics in Table <ref>. We compare our model with the following 13 baselines with diverse technical characteristics: * Graph-based CFs include LightGCN <cit.>, LT-OCF <cit.>, HMLET <cit.>, GF-CF <cit.>, and BSPM <cit.>; * Graph CL methods for other tasks include SimGRACE <cit.> and GCA <cit.>; * Hypergraph-based CFs include HCCF <cit.> and SHT <cit.>;* Graph CL methods for CF include SGL <cit.>, SimGCL <cit.>, XSimGCL <cit.>, and LightGCL <cit.>. §.§.§ Evaluation Protocols and Hyperparameters We use the widely used ranking metrics Recall@20/40 and NDCG@20/40. For Yelp, Gowalla, and Amazon-Books, we reuse the train/valid/test splits from <cit.>. For Amazon-Electronics and Amazon-CDs, we use the dataset settings of <cit.>. We further tune the hyperparameters of the baselines, starting from their recommended settings. For our method, we test the following hyperparameters: * For solving the integral problem, we consider the following ODE solvers: the Euler method and RK4. However, we found that Euler and RK4 produce almost the same results in our preliminary experiments, so we test only the Euler method;* The number of steps K is in {1, 2, 3}, and the terminal time T is in {1.0,1.1,⋯,3.0}; * The reaction rate coefficient α is in {0.1,⋯,1.0};* The size of the embedding D is in {64, 128, 256};* The learning rate is in {1.0e-5, 1.0e-4, 1.0e-3, 1.0e-2};* The regularization weight for the InfoNCE loss λ_1 is in {0.1, 0.2,⋯, 1.0};* The regularization weight λ_2 is in {1.0e-8,1.0e-7,1.0e-6,1.0e-5}. The best configuration for each dataset is as follows: in Yelp, K=2, T=2, α=0.6, τ=0.1, and λ_1=0.3; in Gowalla, K=2, T=2, α=0.2, τ=0.4, and λ_1=0.5; in Amazon-Books, K=2, T=2, α=0.8, τ=0.1, and λ_1=0.2; in Amazon-Electronics, K=2, T=2, α=0.2, τ=1.0, and λ_1=0.2; in Amazon-CDs, K=2, T=2, α=0.1, τ=0.2, and λ_1=0.2; in Tmall, K=2, T=2, α=0.6, τ=0.2, and λ_1=0.5.§.§ Experimental ResultsIn Table <ref>, we summarize the overall accuracy in terms of Recall@20/40 and NDCG@20/40. As reported, our method clearly shows the highest accuracy in most cases.
Specifically for Gowalla, RDGCL's NDCG@20 is 5.19% higher than that of the best baseline. SGL and SimGCL work well in some cases, and only SimGCL is comparable to our method on Amazon-Electronics and Amazon-CDs. However, the accuracy gap between our method and SimGCL is still non-trivial. For Tmall, SGL shows higher accuracy than SimGCL, and for Yelp, SimGCL outperforms SGL and LightGCL. LightGCL shows good performance on Amazon-Books. XSimGCL, known to be surprisingly efficient and to perform well, does not perform best on the benchmark datasets we use. To further verify the outstanding performance of RDGCL, we compare it with two other recently proposed graph-based recommendation methods. GF-CF and BSPM show excellent performance, occupying the 2nd and 3rd places, except for Tmall. Except for Amazon-Books, however, RDGCL beats both models in all metrics. In terms of NDCG@20, BSPM and GF-CF slightly outperform RDGCL on Amazon-Books. However, no existing method is comparable to our proposed method across all datasets. Therefore, we believe that the concept of RDGCL opens a new direction for CF. §.§ Trade-off Among Recall, Coverage, and Novelty We analyze the balanced nature of our model in terms of recall, coverage, and novelty. We use the following two harmonic means:h_RC@k=2×Recall@k ×Coverage@k/Recall@k+Coverage@k,h_RN@k=2×Recall@k ×Novelty@k/Recall@k+Novelty@k,where the coverage <cit.> refers to the range of items that a model can recommend, and the novelty <cit.> measures how unexpected the recommended items are compared with their global popularity. These two metrics give a broader understanding of the impact of our proposed design.As shown in Fig. <ref>, LightGCL, which is limited to low-pass filters, has low h_RC@20 and h_RN@20, while RDGCL has the highest balanced performance. In the case of SimGCL, both metrics are slightly lower than ours. This result suggests that RDGCL is a balanced design with improved accuracy and diversity.§.§ Robustness to Sparsity and Popularity BiasTo measure the robustness for sparse user groups, we divide users into three groups and measure Recall@20 for each group. Users are classified into three groups by interaction degree: the bottom 80%, from the bottom 80% to 95%, and the top 5%. As shown in Fig. <ref>, RDGCL consistently outperforms the other baselines for all groups of users. In particular, RDGCL shows good accuracy for the extremely sparse user group (fewer than 29 interactions) in Gowalla.Furthermore, we study the robustness to item popularity bias with our method and the baselines. Similar to the sparsity analysis, we divide items into three groups based on their interaction degree and measure Recall@20 on each group g. Following SGL <cit.>, we use the decomposed Recall@k, defined as follows:Recall@k^(g)=|(l^u_rec)^(g)∩ l^u_test|/|l^u_test|,where l^u_rec represents the candidate items in the top-k recommendation list and l^u_test represents the relevant items in the testing set for user u. Fig. <ref> shows that the accuracy of our model is higher for the item group with a low interaction degree compared to the other baselines. This indicates that our model recommends long-tail items and has the ability to alleviate popularity bias.
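The decomposed Recall@k above reduces to a simple set computation; the sketch below is a hypothetical reference implementation for a single user and a single item group.

```python
def decomposed_recall_at_k(rec_items, test_items, group_items, k=20):
    """Recall@k restricted to an item group g, as in the equation above.

    rec_items:   ranked list of recommended item ids for one user.
    test_items:  iterable of relevant items for that user in the test set.
    group_items: iterable of item ids belonging to group g.
    """
    rec_g = set(rec_items[:k]) & set(group_items)   # (l_rec^u)^(g)
    hits = rec_g & set(test_items)
    return len(hits) / max(len(set(test_items)), 1)

# Example: items {3, 7} are relevant; only item 7 belongs to group g.
print(decomposed_recall_at_k([7, 1, 4], {3, 7}, {7, 9}, k=2))  # 0.5
```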
§.§ Ablation StudyFor the ablation study, we define the following two models: i) the first ablation model contrasts 𝐄(T) and 𝐁^CL, and ii) the second ablation model contrasts 𝐄(T) and 𝐒^CL. The first (resp. second) model is denoted as “RDGCL-EB” (resp. “RDGCL-ES”) in the tables. In all cases, the ablation model with 𝐄(T) and 𝐒^CL significantly outperforms that with 𝐄(T) and 𝐁^CL, e.g., a Recall@20 of 0.1094 in Yelp by RDGCL-ES vs. 0.1024 by RDGCL-EB. However, RDGCL, which contrasts 𝐁^CL and 𝐒^CL, outperforms both. This shows that the CL training between 𝐁^CL and 𝐒^CL is necessary to achieve the best model accuracy.§.§ Sensitivity AnalysesIn this section, we report the sensitivity of our model to selected key hyperparameters: the terminal integral time T, the temperature τ, and the regularization weight for the InfoNCE loss λ_1. If a hyperparameter is not reported in this subsection, our model is not significantly sensitive to it.§.§.§ Sensitivity to T We test our model by varying T of the reaction-diffusion process, and the results are shown in Fig. <ref>. T close to 2 yields good outcomes for both Yelp and Gowalla.§.§.§ Sensitivity to τ We test our model for various settings of τ, and the results are shown in Fig. <ref>. In Gowalla, the performance improves as the value of τ increases until reaching an optimal point around 0.4. As τ becomes too large (e.g., τ≥ 0.5), the performance decreases drastically on both datasets.§.§.§ Sensitivity to λ_1 We vary the regularization weight for the InfoNCE loss, denoted λ_1. The results are shown in Fig. <ref>. For both Yelp and Gowalla, our method's performance increases rapidly as λ_1 increases. After that, it shows slight decreases. However, the performance does not decrease as much as in Fig. <ref>. §.§ Empirical Evaluations for OversmoothingFor our proposed method and existing graph-based methods, we analyze the oversmoothing phenomenon <cit.>. The oversmoothing problem refers to the exponential convergence of all user/item node feature similarities towards the same constant value as the number of GCN layers increases — in this definition, it is important to use appropriate node similarity metrics.Since the commonly used mean-average distance (MAD) <cit.> is not an appropriate node similarity metric <cit.>, we do not use MAD. The Dirichlet energy, however, fulfills all the requirements of a node similarity metric <cit.>, so our approach to measuring oversmoothing is based on the Dirichlet energy on graphs:ℰ(𝐄)=1/N∑_i ∈𝒰∪𝒱∑_j ∈𝒩_i ||𝐄_i - 𝐄_j ||^2_2,where 𝒩_i is the one-hop neighborhood of a user/item node i. We calculate the Dirichlet energy of the final embeddings used for recommendation by the various methods. As shown in Fig. <ref>, RDGCL's Dirichlet energy does not decrease even when the number of layers is high, whereas the energies of existing methods quickly decrease. As mentioned earlier, this is because existing methods are based only on low-pass filters, which shows the efficacy of our proposed reaction-diffusion equation-based method.
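The Dirichlet energy above can be computed directly from an edge list; the sketch below assumes a (2, M) edge-index tensor and is only meant to illustrate the metric.

```python
import torch

def dirichlet_energy(E, edge_index):
    """Dirichlet energy of node embeddings over a user-item graph.

    E:          (N, D) final embeddings used for recommendation.
    edge_index: (2, M) tensor listing pairs (i, j) with j in the one-hop
                neighborhood N_i of node i.
    Returns (1/N) * sum_i sum_{j in N_i} ||E_i - E_j||_2^2.
    """
    src, dst = edge_index
    sq_dist = (E[src] - E[dst]).pow(2).sum(dim=1)
    return sq_dist.sum() / E.size(0)
```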
§.§ Empirical Runtime ComplexityWe also report the actual running time during training in Table <ref>. Our method does not involve augmentations on graph structures, and its single pipeline for CL is not separated from the main channel, unlike SGL and SimGCL. Thus, our training time is shorter than theirs. However, it is comparable to, though not as fast as, XSimGCL and LightGCL. XSimGCL is faster than SimGCL because it only performs the graph convolution of LightGCN in one framework, but it does not outperform our method on all datasets.RDGCL runs marginally longer than XSimGCL and LightGCL because it computes the high-pass filter in the reaction term. Nevertheless, our proposed method is still effective since it outperforms both XSimGCL and LightGCL by large margins in accuracy.§ CONCLUSION & FUTURE WORKIn conclusion, we presented a novel approach called RDGCL for CF. It uses both the diffusion equation for low-pass filtering and the reaction equation for high-pass filtering in its design and CL training method. To our knowledge, RDGCL is the first to adopt the reaction-diffusion equation for CL-based CF. We demonstrated that RDGCL outperforms 13 baseline models on 6 benchmark datasets and achieves the best-balanced performance among recall, coverage, and novelty. Our findings demonstrate the effectiveness of using high-pass filters and self-supervised signals and their potential as an alternative approach for recommender systems. Our work opens up new avenues for future research on investigating other filters and contrastive views and on extending our method to other domains that can benefit from graph representation learning.
http://arxiv.org/abs/2312.16563v1
{ "authors": [ "Jeongwhan Choi", "Hyowon Wi", "Chaejeong Lee", "Sung-Bae Cho", "Dongha Lee", "Noseong Park" ], "categories": [ "cs.IR", "cs.AI", "cs.LG" ], "primary_category": "cs.IR", "published": "20231227130446", "title": "RDGCL: Reaction-Diffusion Graph Contrastive Learning for Recommendation" }
AdaNAS: Adaptively Post-processing with Self-supervised Neural Architecture Search for Ensemble Rainfall ForecastsYingpeng Wen, Weijiang Yu, Fudan Zheng, Dan Huang, Nong XiaoJanuary 14, 2024 ==================================================================================================================Previous post-processing studies on rainfall forecasts using numerical weather prediction (NWP) mainly focus on statistics-based aspects, while learning-based aspects are rarely investigated. Although some manually designed models have been proposed to raise accuracy, they are customized networks that need to be repeatedly tried and verified, at a huge cost in time and labor. Therefore, a self-supervised neural architecture search (NAS) method without significant manual effort, called AdaNAS, is proposed in this study to perform rainfall forecast post-processing and predict rainfall with high accuracy. In addition, we design a rainfall-aware search space to significantly improve forecasts for high-rainfall areas. Furthermore, we propose a rainfall-level regularization function to eliminate the effect of noisy data during training. Validation experiments have been performed for the None, Light, Moderate, Heavy and Violent cases on a large-scale precipitation benchmark named TIGGE. The average mean absolute error (MAE) and average root mean square error (RMSE) of the proposed AdaNAS model are 0.98 and 2.04 mm/day, respectively. Additionally, the proposed AdaNAS model is compared with other neural architecture search methods and previous studies. The comparison results reveal the satisfactory performance and superiority of the proposed AdaNAS model in terms of precipitation amount prediction and intensity classification. Concretely, the proposed AdaNAS model outperforms the previous best-performing manual methods, with MAE and RMSE improving by 80.5% and 80.3%, respectively.Rainfall Forecasts, Precipitation Prediction, Automated Machine Learning, Neural Architecture Search, Self-supervised Learning, Rainfall-level Regularization.§ INTRODUCTION Heavy precipitation[The rainfall is also regarded as precipitation in our paper.] events can cause severe flooding and further result in economic damage and loss of life <cit.>. Accurate and quantitative forecasting of rainfall helps develop effective measures and prevent loss of life and property from floods and landslides. Previous technologies for rainfall forecasting mainly depend on statistics-based paradigms, such as numerical weather prediction (NWP) model systems <cit.>, ensemble prediction systems <cit.> and post-processing techniques <cit.>. Since individual NWP model runs and ensemble systems are subject to biases and dispersion errors, their predictions can be improved by statistical post-processing techniques <cit.>. Traditional post-processing uses simple operations to synthesize ensemble forecasts. F. Kong et al. propose the ensemble mean (EM) method <cit.>, which averages multiple ensemble forecasts to predict precipitation. The probability matching (PM) method <cit.> redistributes the precipitation rates in the ensemble mean by the distribution of precipitation rates from the available QPFs. The best percentile (BP) method <cit.> adopts an asymmetric normal distribution for parameter estimation to calculate probabilistic quantitative precipitation and percentile forecasts. X. Zhi et al.
establish the weighted bias-removed ensemble mean (WEM) <cit.> to predict precipitation based on the weighted average of ensemble forecasts, where the weights are calculated from the historical errors of the ensemble forecasts. Although these methods exhibit different advantages on a variety of test metrics, their accuracy remains far from satisfactory. Machine learning methods have been applied to the post-processing of precipitation forecasts in recent years. The advantages of machine learning methods include incorporating diverse features and automatically extracting useful information <cit.>. Machine learning-based methods also help to model complex relationships between predictors and predictands to improve forecasting skill <cit.>. W. Li et al. <cit.> develop a convolutional neural network (CNN)-based post-processing method for precipitation forecasts with spatial information and atmospheric circulation variables. M. Ghazvinian et al. propose a hybrid artificial neural network (ANN) that uses the censored, shifted gamma distribution (CSGD) as the predictive distribution and uses an ANN to estimate the distributional parameters of the CSGD <cit.>. F. Xu et al. design a multi-layer network (IC-MLNet)-based post-processing method <cit.> to predict precipitation, showing better performance than traditional methods <cit.> in precipitation amount and precipitation level prediction. For more details on available post-processing models, the reader is referred to the relevant books and reviews <cit.>.Although these learning-based studies outperform statistics-based methods, manually designed networks are time-consuming to trial and verify. Besides, current learning-based methods are constrained by the high level of expert domain knowledge required to design the networks. These shortcomings of manually designed neural networks have hindered the development and application of post-processing technology for precipitation forecasts.Therefore, we aim to develop efficient learning-based methods for automatically designing networks for the complex post-processing of ensemble rainfall forecasts.Neural architecture search (NAS) aims to search for a robust and well-performing neural architecture by selecting and combining various basic operations from a predefined search space <cit.>. For computer vision tasks, various NAS methods <cit.> have emerged, including evolutionary-based <cit.>, reinforcement learning (RL)-based <cit.> and gradient descent (GD)-based <cit.> NAS methods. Differentiable NAS optimized by gradient descent is favored for its fast search speed. Most gradient descent-based NAS methods are dedicated to computer vision research, such as MileNAS <cit.>, DARTS <cit.> and PC-DARTS <cit.>.In this paper, a model based on an adaptive self-supervised neural architecture search algorithm, called AdaNAS, is first proposed for ensemble rainfall forecast post-processing with high accuracy; it can effectively generate deterministic precipitation forecasts while largely removing manual effort. Self-supervised learning is applied within the neural architecture search method to make fuller use of the limited data samples. In addition, we customize a rainfall-aware search space to significantly improve forecasting in high-rainfall areas.
Since a search space consisting only of residual blocks (RB) is not successful in predicting heavy rainfall, we design the space-aware block (SAB) and channel-aware block (CAB) to improve the accuracy of heavy rainfall prediction. Furthermore, a rainfall-level regularization function is presented to eliminate the effect of noisy data during training. The performance of the AdaNAS model is evaluated, and the results reveal that the proposed model is superior to previous best-performing manual methods and other NAS methods in terms of precipitation amount prediction and intensity classification.§ STUDY AREA AND CRITERIAThis section describes the details of the data and study area in Section <ref>. In Section <ref>, we describe the evaluation criteria for precipitation prediction.§.§ Study Area and Data Description §.§.§ Study AreaThe study area [21.0^oN ∼ 29.0^oN, 109.5^oE ∼ 117.5^oE] is in southern China, including parts of coastal and inland regions, as shown in Fig. <ref> (a).The northern and central regions are characterized by a subtropical monsoon climate, while the southern region has a tropical monsoon climate. The annual rainfall in the study area is approximately 1500 mm. The rainy season typically occurs from June to October. Extreme weather events such as typhoons are frequent in summer in the coastal region, increasing the difficulty of rainfall forecasting. Therefore, one of the main challenges in the study area is forecasting heavy coastal rainfall. §.§.§ Ensemble Forecast DataWe use ensemble data from the TIGGE[https://apps.ecmwf.int/datasets/data/tigge/] dataset, a classic rainfall prediction dataset used in several works <cit.>. To demonstrate that our post-processing method adapts well to both single-model and multi-model ensemble forecasting, our input data are divided into a single-model dataset (Smod) and a multi-model dataset (Mmod). The single-model dataset is the ensemble forecast from the ECMWF Center, generated with 50 random initial conditions, and the multi-model dataset consists of the deterministic forecasts from the NWP model systems of 4 centers, namely UKMO, NCEP, JMA and ECMWF. TIGGE issues daily weather forecasts for 366 hours at UTC0000 and UTC1200, but only the 6 ∼ 30, 12 ∼ 36, 18 ∼ 42 and 24 ∼ 48 forecast hours are used in these datasets.§.§.§ Observation Data We collect observation data from 7247 automatic stations[http://data.cma.cn/en/?r=data/detail&dataCode=A.0012.0001] in the region [21.0^oN ∼ 29.0^oN, 109.5^oE ∼ 117.5^oE]. As there are many missing values in the observed data of 2014, we only use forecasts and observations from 2013, 2015 and 2016. We use the mean interpolation method to make the observed data format consistent with the ensemble forecasts, where the ensemble forecast data are the input data as shown in Fig. <ref> (b) and the observation data are the labels. We strictly screen the data to ensure their availability. The following three types of data are excluded: observations and ensemble forecasts that do not correspond to each other in time, those missing one or more ensemble members, and those with all-zero values for either ensemble members or observations. The remaining single-model data contain 4160 samples and the multi-model data contain 3785 samples.§.§.§ Training Dataset and Validation DatasetWe divide the training and validation datasets by timeline at a ratio of 9:1, the same as IC-MLNet <cit.> does.
For the validation dataset to adequately verify the effectiveness of the post-processing method, the distribution of precipitation values should be consistent with that of the entire dataset. Thus, we choose the validation data from 2016-04-16 to 2016-06-13, which have a precipitation distribution close to the overall data (2 winters and 3 flood seasons). §.§ Evaluation Criteria To fully demonstrate the performance of our post-processing methods, we use several evaluation criteria as in IC-MLNet <cit.>, including bias, mean absolute error (MAE), root mean square error (RMSE), Nash-Sutcliffe model efficiency coefficient (NSE), precipitation classification accuracy (ACC) and Heidke skill score (HSS). Bias is the ratio of the prediction sum to the label sum, as in Equation (<ref>),Bias = ∑_i=1^n ỹ_i / ∑_i=1^n y_i,and is used to measure the overall deviation of the predictions from the labels. The perfect score of bias is 1, reached when there is no deviation of the prediction sum from the label sum. MAE and RMSE are routinely used to measure the distance between predictions and labels, ranging from 0 to positive infinity, with a perfect score of 0 when the predictions equal the labels. The equations for MAE and RMSE are standard and omitted here. The Nash-Sutcliffe model efficiency coefficient (NSE) reflects the relationship between the variance of the prediction error and the variance of the observed data, as in Equation (<ref>), and is used to assess the predictive skill of hydrological models. For a perfect model with zero estimation error variance, NSE = 1. NSE = 1 - ∑_i=1^n ||ỹ_i - y_i ||^2_2 / ∑_i=1^n || y_i - y̅_i||_2^2,where ỹ denotes the predicted values, y denotes the observed values (labels), and y̅ denotes the mean of the observed values.We introduce the rainfall prediction ACC and HSS to verify whether the model can correctly predict the level of rainfall, and thus the practical applicability of our post-processing method, since weather centers report rainfall levels rather than exact precipitation values to the public. We classify rainfall into five categories according to its magnitude: 1. None [0.0, 0.1) mm/day, 2. Light [0.1, 10.1) mm/day, 3. Moderate [10.1, 25.1) mm/day, 4. Heavy [25.1, 50.1) mm/day and 5. Violent [50.1, ∞) mm/day. We create a multi-category contingency table over these levels for all forecast maps as in Table <ref>. L=5 is the number of rainfall categories; n_i,j denotes the number of cases where observation level i is predicted as level j; N'_i denotes the number of samples predicted as level i; N_j represents the number of observations of level j, and N_T denotes the total number of predictions.Based on this table, we calculate the following evaluation criteria: ACC and HSS. ACC measures how well the predicted and observed rainfall levels match, as shown in Equation (<ref>), ranging from 0 to 1. In the perfect case, all predicted levels are the same as the observed levels and ACC is 1. ACC reflects the percentage of correct predictions, but the uneven distribution of the 5-category samples limits its ability to reflect the real situation. Therefore, we also introduce another evaluation criterion, HSS, as in Equation (<ref>), which excludes correct predictions due to random chance and provides a more realistic percentage of correct predictions. HSS ranges from negative infinity to 1, and the perfect score is 1.ACC = 1/N_T∑_i=1^L n_i,i, HSS = (1/N_T∑_i=1^L n_i,i - 1/N_T^2∑_i=1^L N'_i N_i) / (1 - 1/N_T^2∑_i=1^L N'_i N_i).
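Both criteria follow directly from the contingency table; a minimal sketch, assuming a NumPy array with conf[i, j] counting predictions of level i against observations of level j:

```python
import numpy as np

def acc_hss(conf):
    """ACC and HSS from an L x L contingency table.

    conf[i, j] holds the number of cases predicted as level i and observed
    as level j, so N'_i = conf.sum(axis=1) and N_j = conf.sum(axis=0).
    """
    n_t = conf.sum()
    acc = np.trace(conf) / n_t
    chance = (conf.sum(axis=1) * conf.sum(axis=0)).sum() / n_t**2
    hss = (acc - chance) / (1.0 - chance)
    return acc, hss

# Example with L = 2 levels.
conf = np.array([[40, 10],
                 [5, 45]])
print(acc_hss(conf))  # ACC = 0.85, HSS = 0.7
```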
§ METHOD DESCRIPTIONIn this section, we describe the task and the post-processing methods used in this research. Section <ref> formulates the precipitation post-processing problem. Section <ref> introduces our proposed adaptive neural architecture search method, including the self-supervised search, the search space and the regularization function.§.§ Problem Definition Ensemble forecast post-processing aims to generate an accurate deterministic forecast based on the ensemble forecast data, i.e., the average rainfall for the next 24 h predicted by the NWP models. In other words, we integrate the ensemble forecast data generated from multiple NWP models, or from a single NWP model with multiple initial conditions, and output the average rainfall forecast for the next 24 h in the same area.The post-processing process can be expressed as follows:y = f(x;W),where x ∈ R^c× w × h denotes the ensemble forecast data; f denotes the forecast model; W is the model parameter; y ∈ R^w × h is the deterministic forecast; w = 33 and h = 33 are the width and height of the gridded region, respectively; c denotes the number of channels, determined by the number of NWP models and the number of random initial states; c is 50 and 4 in Smod and Mmod, respectively. §.§ Adaptive Neural Architecture Search We propose the AdaNAS method to design suitable network architectures automatically, avoiding a mass of manual effort while achieving excellent performance. In summary, our approach consists of two steps: searching for an architecture and training the searched model. To better adapt AdaNAS to rainfall forecasting, we present a self-supervised search strategy, a rainfall-aware search space and a rainfall-level regularization function.§.§.§ Self-supervised SearchWe adopt a block-wise self-supervised contrastive learning approach to search the network architecture, instead of using shared weights as in one-shot NAS. The weight-sharing evaluation scheme of one-shot NAS is adopted by most NAS methods to reduce the computational effort. However, the architecture ranking estimated with shared weights is not necessarily the true architecture ranking, because there is inevitably a large gap between the shared weights and the optimal weights of the sub-networks. Some studies <cit.> pointed out that evaluation with shared weights has low accuracy. In addition, theoretical and empirical studies <cit.> demonstrated that reducing weight sharing can effectively improve the accuracy of architecture ranking estimation.A block-wise method is ideal for reducing weight sharing by splitting the network along its depth, keeping the original search space and resolving the dilemma of shared weights. Each block of the hyper-network is trained separately before being connected for the overall search. Thus, each of our blocks has a separate structure, instead of stacking the same structure, which makes our network architecture more flexible.The outline of the neural architecture search algorithm is shown in Algorithm <ref>.
It uses a self-supervised contrastive learning approach to update the architecture weights θ and model weights W separately, and ultimately outputs the optimal architecture a^*, where u is a factor to balance the update frequencies of θ and W; T denotes the number of epochs of the search process; N denotes the number of building blocks; A is a collection of architectures. Self-supervised contrastive learning uses auxiliary tasks to mine information from unlabeled data to improve the quality of the downstream task, i.e., rainfall forecasting. Concretely, we replicate a target network that has the same architecture as the online network. As shown in Figure <ref>, the initialized parameters of the target network are the same as those of the online network. During the search, the target network is updated by an exponential moving average (EMA) <cit.> of the online network's parameters. Whenever data is input, it is randomly cropped into four views x_1, x_2, x'_1 and x'_2, two for each network input. The backbone of the whole architecture contains multiple blocks, and x_1 and x_2 choose different paths when passing through the backbone to search for the optimal architecture. The online network performs a gradient update <cit.> according to the contrastive loss of the output data. The pseudo-code of the search process is shown in Algorithm <ref>. §.§.§ Search SpaceThe search space is an important part of NAS, as it determines the search scope. Inspired by existing works <cit.>, we design our search space to consist of suitable CNN and transformer operations, including the residual block (RB), space-aware block (SAB) and channel-aware block (CAB). RB is the main structure of ResNet <cit.> and is used to extract features. SAB and CAB are transformer operations that focus attention on pixels with salient features.To ensure more accurate prediction of less frequent cases such as heavy rainfall, we introduce the transformer-based blocks CAB and SAB. As illustrated in Figure <ref>, the proportion of Light and None accounts for more than 80%, making heavy rain difficult to predict. CAB and SAB are used to capture the heavy-rainfall pixels in the ensemble forecast data and assign greater weights to them to highlight salient features. This enables the model to focus on these heavy-rainfall pixels as shown in Figure <ref> and to predict heavy-rainfall areas more accurately. In Figure <ref>, the visualization of the rainfall distribution also demonstrates that our method obtains significant results in coastal areas with heavy rainfall. As a more robust transformer-based block, the channel-aware block determines the importance of a neuron by its inhibitory effect on the surrounding space, with the following expression:z' = z sigmoid((z - z̅)^2/(4 × (sum((z - z̅)^2)/n + λ)) + 0.5),where z denotes the input data; z̅ denotes the expectation of z; sum(·) denotes the internal summation; n = w × h - 1, with w and h the width and height of the input data; λ=10^-4 is a hyperparameter. The sigmoid term is the weight CAB assigns to a pixel based on its deviation from the other pixels, where pixels with significant differences receive higher weights. The more detailed pipeline of our SAB and CAB can be seen in Figure <ref>.
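As a concrete illustration of the weighting above, a minimal sketch follows; the (B, C, H, W) tensor layout and the per-channel spatial statistics are assumptions about the implementation.

```python
import torch

def channel_aware_weighting(z, lam=1e-4):
    """Channel-aware weighting of a feature map, following the equation above.

    z: (B, C, H, W) feature map; the mean and the squared-deviation sum are
    computed per channel over the spatial dimensions, with n = H*W - 1.
    Pixels deviating strongly from the channel mean receive larger weights.
    """
    _, _, h, w = z.shape
    n = h * w - 1
    mu = z.mean(dim=(2, 3), keepdim=True)
    d = (z - mu).pow(2)                                   # (z - z_bar)^2
    denom = 4.0 * (d.sum(dim=(2, 3), keepdim=True) / n + lam)
    return z * torch.sigmoid(d / denom + 0.5)
```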
§.§.§ Retrain ProcessWe use the self-supervised search method to find a suitable neural network model in the search space we have designed. The searched neural architecture is shown in Figure <ref>. This model mainly includes a stem layer, a search backbone and a projector. The stem layer includes a convolutional layer, a batch normalization layer, an activation layer and a max-pooling layer. The projector includes a pooling layer and a fully connected layer. The model is trained fully supervised on the dataset, with the ensemble forecast data as input and the observations as labels. Different from normal fully supervised training, we design a novel regularization function to improve the accuracy.§.§.§ Regularization FunctionRegression tasks like rainfall prediction usually use a distance metric such as the mean square error (MSE) as a regularization function. However, MSE constrains only the rainfall amounts, not the fine-grained rainfall-level classification. Although existing methods optimized by MSE can predict precipitation amounts, they lack an effective constraint on rainfall levels, which restrains classification performance. Therefore, we design a regularization function by introducing a categorical criterion, the Heidke skill score (HSS), to improve the accuracy of the rainfall-level classification. HSS is a categorical criterion that measures the accuracy of a prediction while excluding cases where random predictions happen to be correct, as shown in Equation <ref>. The reason we introduce HSS instead of accuracy (ACC) is that ACC scores cannot truly reflect the predictive power of a model on a dataset with a severely uneven distribution of categories. For example, the percentage of light precipitation events reaches 76.4% (as shown in Figure <ref>, the Light, Moderate, None and other precipitation levels account for 76.4%, 12.7%, 8.1% and 2.4% of Mmod, respectively). A model only needs to predict Light for any input to guarantee an ACC of 76.4%, which is significantly higher than all the models in Table <ref>; however, such a model has no real predictive power. HSS rules out this possibility, so we introduce HSS into the regularization function with the following expression:Loss = loss_MSE + c_H/max(loss_HSS,ϵ),where loss_MSE and loss_HSS are the losses for the precipitation amounts and the intensity classification, respectively; c_H is a coefficient; ϵ is a tiny constant, here set to 10^-10. Experiments in Section <ref> show that the regularization function with loss_HSS improves performance notably, and its performance is affected by the coefficient c_H. § EXPERIMENTSWe conduct experiments and analyses on five aspects, namely precipitation amount prediction, precipitation intensity classification, regularization function analysis, predictive distribution analysis and an ablation study. Some important hyperparameter configurations are shown in Appendix <ref>. §.§ Precipitation Amounts Prediction AdaNAS-hss achieves the best performance on all rainfall evaluation criteria as shown in Table <ref>, where the regularization function of AdaNAS-hss includes HSS while that of AdaNAS-mse does not. Our model improves Bias, MAE, RMSE and NSE by 5.4%, 80.5%, 80.3% and 29.2%, respectively, on Smod compared with the previous best-performing IC-MLNet. Consistently, our model improves Bias, MAE, RMSE and NSE by 24.5%, 55.3%, 53.2% and 59.6%, respectively, on Mmod compared with the previous best-performing IC-MLNet. The auto-designed architecture, as shown in Figure <ref>, consists of four blocks and three operations. Our AdaNAS also outperforms other NAS methods (MiLeNAS, PC-DARTS and NSAS <cit.>) customized for normal RGB images.
Compared to MiLeNAS, PC-DARTS and NSAS, it achieves NSE improvements of more than 56.1% on both Smod and Mmod. In Table <ref>, all NAS methods show excellent performance in Bias, MAE and RMSE, outperforming IC-MLNet. However, they show a clear deficiency in the NSE metric. These results indicate that the NAS approach has a natural and significant advantage in the Bias, MAE and RMSE metrics, but this advantage is lost in NSE. In addition, the comparison of AdaNAS-mse and AdaNAS-hss shows that introducing HSS into the regularization function improves the MAE, RMSE and NSE scores on both datasets, while Bias decreases slightly.In addition, we find that, due to the greater difficulty of extracting features from Mmod, its performance is less promising on several metrics compared to Smod. This finding is consistent with IC-MLNet. This is evidenced by the fact that Smod significantly outperforms Mmod on the MAE and NSE metrics, and this trend is also observed on the other metrics. There are two main reasons for this result: on the one hand, the ensemble forecast has already extracted some features relative to the original data, resulting in information loss; on the other hand, the small number of ensemble members in Mmod makes it impossible to cope with multiple influences. In other words, the multiple random initial conditions in Smod provide more additional information compared to Mmod, which compensates for the information loss to some extent.§.§ Precipitation Intensity Classification AdaNAS-hss also achieves surprising performance on all rainfall-level criteria in Table <ref>. Our model improves ACC and HSS by 17.1% and 8.2% on Smod, and by 11.6% and 1.1% on Mmod, respectively, compared to the previous best-performing IC-MLNet. Besides, our AdaNAS performs better than the other NAS methods. The comparison of AdaNAS-mse and AdaNAS-hss shows that introducing HSS into the regularization function improves both ACC and HSS scores on both datasets.Next, we present a more in-depth study of the regularization function to explore its impact on ACC and HSS.§.§ Regularization Function AnalysisWe further explore the performance of AdaNAS in relation to the HSS coefficient in the regularization function. We show the model performance for c_H = 0, 1, 2, 5, and 10 in Figure <ref>. In Figure <ref>, the model performs best when c_H = 10, reaching the highest scores for both ACC and HSS on both Smod and Mmod. This indicates that adding HSS to the regularization function is indeed effective in shifting the optimization of the model toward precipitation levels, thus improving ACC and HSS. Figure <ref>.(a) shows that HSS and ACC steadily improve, with the speed of improvement gradually slowing down, as c_H increases on Smod. However, in Figure <ref>.(b), HSS and ACC improve slightly but are not stable on Mmod as c_H increases. This indicates that the HSS coefficient yields a limited improvement, and once c_H exceeds a threshold, the performance of the model almost stops improving. Besides, in Table <ref>, it can be found that Bias decreases on both Smod and Mmod for c_H=10 (AdaNAS-hss) compared to c_H=0 (AdaNAS-mse). This indicates that when c_H is greater than 10, it may adversely affect the other precipitation criteria. This may be caused by an excessive deflection of the optimization direction toward precipitation intensity classification.
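To make the role of c_H concrete, the combined objective Loss = loss_MSE + c_H/max(loss_HSS, ϵ) can be sketched as below; treating loss_HSS as a differentiable HSS estimate (higher is better, perfect score 1) is our reading, since the text does not detail how this term is computed.

```python
import torch

def adanas_loss(pred, target, hss_score, c_h=10.0, eps=1e-10):
    """Combined objective Loss = loss_MSE + c_H / max(loss_HSS, eps).

    pred, target: predicted and ground-truth rainfall maps (same shape).
    hss_score:    tensor holding a differentiable HSS estimate of the
                  intensity classification; higher is better, perfect is 1.
    The reciprocal term shrinks as HSS improves, so minimizing the sum
    also pushes the rainfall-level classification accuracy up.
    """
    loss_mse = torch.mean((pred - target) ** 2)
    return loss_mse + c_h / torch.clamp(hss_score, min=eps)
```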
To study the effect of the HSS coefficient, several sets of experiments with different HSS coefficients are performed; the results show that as the coefficient increases, Bias is adversely influenced. By observing the penultimate two columns (ACC and HSS) in Table <ref>, it can be found that the accuracy of precipitation intensity classification gradually improves with an increasing HSS coefficient. The observation of the first column (Bias) shows that Bias achieves the best performance when the HSS coefficient is 0. §.§ Precipitation Distribution Analysis We also compare the MAE and HSS performance of AdaNAS with the other two NAS methods and IC-MLNet by visualizing the spatial distribution of precipitation. The results reveal that our AdaNAS performs significantly better than the other methods, both in precipitation prediction and in intensity classification. Figure <ref> shows that the MAE performance of AdaNAS is significantly better than that of the manually designed IC-MLNet, especially on Smod. As shown in Figure <ref> and Figure <ref>, AdaNAS predicts precipitation intensity (ACC and HSS) more accurately than IC-MLNet and the other NAS methods in coastal areas (the areas in the black box). This suggests that CAB and SAB enhance the predictive capability of our method for heavy rainfall areas (the average precipitation is greater in coastal areas than inland).§.§ Ablation StudyTo confirm that our proposed rainfall-aware search space can indeed improve the performance of the model, we design five sets of operations for ablation experiments: (1) DARTS Ops <cit.>; (2) RB; (3) RB&SAB; (4) RB&CAB; (5) all 3 Ops. As shown in Table <ref>, our method containing all three operations (3 Ops) performs best on all evaluation criteria except MAE. It improves the Bias, MAE, RMSE, NSE, ACC and HSS evaluation criteria by 4.5%, 5.7%, 1.9%, 6.9%, 1.4% and 2.2%, respectively, over the RB&SAB operation set. In particular, the comparison of the first and fifth rows shows that our search space performs better than that of DARTS when only the operations in the search space are replaced. By comparing the third and fourth rows, we find that replacing SAB with our CAB leads to improvements of 0.08%, 6.3%, 1.3%, 4.4%, 0.9% and 1.8% on all evaluation criteria, proving that the proposed CAB can capture more information in ensemble forecasts than SAB. In addition, by comparing the fourth and fifth rows, we find that using SAB improves all metrics except MAE, which means that SAB can capture some information that CAB fails to capture.To demonstrate the effectiveness of the self-supervised search, we compare it with a random search and a supervised search, which uses observed data as labels to supervise the search process. The experimental results are shown in Table <ref>; self-supervised search outperforms supervised search on all evaluation criteria on Smod, with a particularly notable improvement of 14.0% on NSE. The reason is that the self-supervised search can unbind the initialized network structure and explore a broader and freer search space.§ CONCLUSIONIn this work, we propose a novel AdaNAS, which can adaptively design efficient network architectures without excessive manual effort, for customized precipitation forecasting. We design a rainfall-aware search space and a rainfall-level regularization function for the searching and training processes, respectively, which improves the accuracy of precipitation forecasting, especially in coastal areas.
AdaNAS performs stably and consistently on all evaluation criteria, outperforming current advanced methods on the large-scale precipitation benchmark TIGGE.§ ACKNOWLEDGEMENTSThis research was supported by the Natural Science Foundation of China under Grant No. U1811464, and was also supported in part by the Guangdong Natural Science Foundation under Grant No. 2018B030312002, in part by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant No. 2016ZT06D211, and in part by the CCF-Baidu Open Fund OF2021032.§ EXPERIMENTAL SETUP The main experimental hyperparameter settings are shown in Table <ref>:
http://arxiv.org/abs/2312.16046v1
{ "authors": [ "Yingpeng Wen", "Weijiang Yu", "Fudan Zheng", "Dan Huang", "Nong Xiao" ], "categories": [ "cs.LG", "cs.AI", "physics.ao-ph" ], "primary_category": "cs.LG", "published": "20231226132303", "title": "AdaNAS: Adaptively Post-processing with Self-supervised Neural Architecture Search for Ensemble Rainfall Forecasts" }
January 14, 2024 ========================================================= Zero-Shot Object Counting (ZSOC) aims to count referred instances of arbitrary classes in a query image without human-annotated exemplars. To deal with ZSOC, preceding studies proposed a two-stage pipeline: discovering exemplars and then counting. However, this sequentially designed two-stage process remains vulnerable to error propagation. In this work, we propose a one-stage baseline, the Visual-Language Baseline (VLBase), exploring the implicit association of the semantic and patch embeddings of CLIP. Subsequently, we extend VLBase to the Visual-Language Counter (VLCounter) by incorporating three modules devised to tailor VLBase for object counting. First, we introduce Semantic-conditioned Prompt Tuning (SPT) within the image encoder to acquire target-highlighted representations. Second, Learnable Affine Transformation (LAT) is employed to translate the semantic-patch similarity map so that it is appropriate for the counting task. Lastly, we transfer the layer-wisely encoded features to the decoder through Segment-aware Skip Connection (SaSC) to keep the generalization capability for unseen classes. Through extensive experiments on FSC147, CARPK, and PUCPR+, we demonstrate the benefits of our end-to-end framework, VLCounter. Code is available at https://github.com/seunggu0305/VLCounter § INTRODUCTION Object counting, which was initially studied for specific targets, e.g., crowds <cit.>, cells <cit.>, animals <cit.>, and cars <cit.>, has shown that the number of objects can be counted even within a dense image. Furthermore, recent works have shown significant advances in inferring the number of arbitrary objects given several human-annotated exemplar patches. However, the strong prerequisite that such cumbersome guidance must be provided for every inference is undoubtedly the main obstacle to granting object counting methods wider applicability. In this context, Zero-Shot Object Counting (ZSOC) was proposed to mitigate the need for human labor. Current ZSOC approaches commonly adopt a two-stage pipeline as illustrated in Fig. <ref>. These works primarily focus on identifying exemplar patches within the image and subsequently adopt counting frameworks from the literature on few-shot object counting <cit.>. To identify the exemplar patches, RepRPN <cit.> considered a repetition score to detect object patches that frequently appear within the image. To count desired classes rather than merely frequent ones, ZSC <cit.> utilized class names to enable class specification. They localize exemplars by identifying the k-nearest neighbors of the class name embeddings among randomly cropped patches. Despite their progress, the potential localization error propagation of the two-stage training pipeline <cit.> remains an unaddressed problem in ZSOC frameworks. Indeed, these methods utilized additional datasets to train decent exemplar discovery networks. This paper pursues a simplified zero-shot object counting framework. We instantiate an end-to-end ZSOC counter, namely the Visual-Language Baseline (VLBase), which consists of a CLIP <cit.> encoder and a counting decoder. By leveraging the embedding space of CLIP, which enables the implicit association of the semantic and patch embeddings to localize the target object <cit.>, VLBase eliminates the need for an exemplar discovery process.
Additionally, we introduce VLCounter, which is built upon VLBase by incorporating three modules devised to tailor VLBase for object counting. First, we propose Semantic-conditioned Prompt Tuning (SPT), which extends visual prompt tuning (VPT) to efficiently finetune CLIP for the counting task. Instead of utilizing naïve learnable prompts, SPT employs conditioning via the semantic embedding to generate patch embeddings that emphasize the region of interest. Subsequently, based on our observation that the similarity maps between the patch embeddings obtained using SPT and the semantic embeddings already provide a decent approximation of object locations, we employ a simple Learnable Affine Transformation (LAT) to adjust only the finer details. Finally, to equip the decoder with generalization capability and provide rich clues, we exploit intermediate features across different encoding layers of CLIP through Segment-aware Skip Connections (SaSC). With all components combined, our simple end-to-end one-stage framework records new state-of-the-art results on the FSC147 <cit.> dataset, validating its superiority over previous ZSOC methods. Moreover, we provide additional evidence of cross-dataset generalization by evaluating performance on the car counting dataset CARPK <cit.>.Our contributions are three-fold: * We instantiate an end-to-end baseline for ZSOC, VLBase, by exploiting the vision-language association capability of CLIP. * We propose VLCounter, consisting of SPT, LAT, and SaSC, which allows the model to utilize the generalization capability of CLIP in a counting-specific manner.* Our experiments on FSC147 and cross-dataset validation verify the effectiveness of VLCounter.§ RELATED WORKS §.§ Object Counting Class-specific Object Counting focuses on quantifying samples of a specific class, e.g., crowds <cit.>, cars <cit.>, animals <cit.>, and cells <cit.>. Most works fall into two main categories, employing either a detection <cit.> or a regression <cit.> mechanism to measure the number of instances. The former predicts a bounding box for every instance using an object detector, whereas the latter predicts the density distribution of the image instead, thereby being recognized as the more robust stream against partially occluded objects <cit.>. Few-shot Object Counting To overcome the lack of generality caused by being constrained to a specific class, the Generic Matching Network (GMN) <cit.> first formalized class-agnostic object counting, which counts the desired objects specified by human-annotated exemplar patches. They introduced a two-stream architecture to encode the image and the exemplar separately to handle the difference in their resolutions. Following them, CFOCNet <cit.> and BMNet <cit.> also adopted and enhanced the two-stream approach by adding a layer-wise matching procedure and a bilinear similarity metric. Other works adhere to a single-stream architecture. Specifically, FamNet <cit.> and RCAC <cit.> use ROI pooling after feature extraction to obtain exemplar prototypes. However, the aforementioned studies suffer from the limitation that every inference requires human-annotated exemplars. Zero-shot Object Counting was proposed by RepRPN <cit.> to remove the burden of annotating target exemplars for counting. Specifically, they trained a region proposal network (RPN) to capture the patches containing the most frequently appearing objects, replacing human-annotated exemplars.
Then, to grant more applicability to exemplar-free object counters, ZSC <cit.> presented a method that takes guidance from semantic information. By matching semantic information to randomly generated patches, they sample the most semantically relevant patches as target exemplars. Our work shares its goal with ZSC in that we aim to train a counter that can count user-specified classes with only class names. Yet, as the aforementioned methods adopt a two-stage pipeline that is prone to error propagation, we focus on mitigating such issues by proposing an end-to-end framework that localizes and counts at once. §.§ Prompt Tuning Prompt tuning is a popular strategy to adapt pre-trained large models to downstream tasks due to its efficiency compared to conventional fine-tuning methods <cit.>. Whereas fine-tuning updates all parameters, prompt tuning freezes the pre-trained large models and introduces only a small set of learnable prompts to optimize <cit.>. Following these works, we utilize prompt tuning to efficiently exploit the visual-language understanding capability of pre-trained CLIP. Yet, our work differs in using semantic information from the semantic embeddings to condition the prompts in the visual encoder, concentrating more on specification-relevant information.§ PRELIMINARIES §.§ Problem Formulation: ZSOC ZSOC aims to predict the density map D∈ℝ^H× W × 1 for an image I∈ℝ^H× W ×3 that belongs to unseen classes C^u (f:(I, C^u)↦ D) without any visual exemplar clues. In the training stage, the model is trained with 𝒟_train = {(I_i, C^s_i, D_i)}_i=1^i=ℕ, where C^s_i denotes the class names seen during training. Then, in the testing stage, the model yields density maps for 𝒟_test={(I_i, C^u_i, D_i)}_i=ℕ+1^i=𝕄, where C^s ∩ C^u=∅.§.§ Overview of CLIP This section introduces the underlying motivation behind our proposed method. CLIP is composed of two encoders: an image encoder ϕ_V(·) and a text encoder ϕ_T(·). The text encoder takes a prompted class name t, e.g., A photo of [kiwi], and produces a semantic embedding 𝒯∈ℝ^1 × d, where d represents the embedding dimension. The image encoder takes a learnable class token [cls] along with embedded patch sequences V as inputs and encodes global and local semantics into the class token [cls] and patch tokens 𝒱, respectively. Note that V = [v_1, v_2,...,v_N] ∈ℝ^N × (P^2 · d), where N is the number of embedded patches and (P · P) is the resolution of each patch. Formally, this process can be expressed as follows:𝒯 = ϕ_T(t); [[cls],𝒱] = ϕ_V([[cls], V]).These encoders are trained collaboratively to map 𝒯 and [cls] into a shared representation space. Recently, studies have suggested the implicit localization capability of CLIP, where each patch embedding preserves local image semantics <cit.>. This property, coupled with the powerful image-text joint embedding space of CLIP, has provided clear motivation for utilizing CLIP as a robust tool for zero-shot segmentation (localization) <cit.>. Taking similar inspiration but focusing on object counting, we aim to leverage the implicit localization capability of CLIP to achieve precise and efficient object counting in an end-to-end manner. § VISUAL-LANGUAGE COUNTER: END-TO-END FRAMEWORK FOR ZERO-SHOT OBJECT COUNTINGThis section presents the Visual-Language Counter (VLCounter), an efficient end-to-end ZSOC framework. We first establish a baseline model referred to as the Visual-Language Baseline (VLBase), which exploits the visual-language localization capacity of CLIP, in Sec. <ref>.
Then, we bring three improvements on top of VLBase to introduce VLCounter. Specifically, we emphasize the regions of interest (Sec. <ref>), learn a task-specific visual-language similarity (Sec. <ref>), and exploit semantically relevant information across multi-level representations (Sec. <ref>). The overall architectures of the two models are illustrated in Fig. <ref>.

§.§ Visual-Language Baseline

VLBase is a standalone baseline that eliminates the need for the few-shot counting techniques that previous ZSOC approaches heavily rely on. Given an input query image I and class name C, VLBase obtains the patch embedding 𝒱 and the semantic embedding 𝒯 using the CLIP encoders ϕ_V(·) and ϕ_T(·), respectively. By calculating the cosine similarity between 𝒯 and 𝒱, the similarity map S∈ℝ^H×W is obtained:

S_ij(𝒱,𝒯) = v_ij𝒯^𝖳 / (||v_ij|| ||𝒯||),

where S_ij is the value at position (i,j) of S and v_ij is the embedding at position (i,j) of the 2D-reshaped 𝒱.

As noted in prior studies <cit.>, we observed that the similarity map between CLIP-encoded semantic and patch embeddings adequately indicates the degree of semantic agreement between each patch and the class name. We find that this similarity map is a decent clue for a decoder to localize the target objects. Consequently, a CNN-based counting decoder predicts the density map D_pred from the features 𝒱 and S:

D_pred = ϕ_decoder([𝒱, S]),

where [·,·] denotes channel-wise concatenation. Finally, the object count prediction is derived by summing all values in D_pred.

Counting Loss. For training, we adopt a conventional MSE loss:

ℒ_count = ||D_pred - D_gt||^2_2,

where D_gt denotes the ground truth density map.

§.§ Semantic-conditioned Prompt Tuning (SPT)

To grant task-specificity to the CLIP image encoder without sacrificing its generalization capability, a straightforward approach is to employ visual prompt tuning (VPT) <cit.>. However, naïve VPT, which simply concatenates a few learnable tokens to the input sequence of each encoding layer, does not take semantic information into account. Hence, we introduce Semantic-conditioned Prompt Tuning (SPT), which combines the learnable tokens with semantic information to help the image encoder extract visual features in which the target semantics are highlighted. Specifically, as illustrated in Fig. <ref>, SPT introduces new learnable tokens for each encoding layer. The learnable tokens for the l-th layer are defined as 𝒫^l = [p^l_1, p^l_2, ..., p^l_M], where M denotes the number of learnable tokens. These tokens are then supplemented with the linearly projected semantic embedding 𝒯̂ to generate semantic-conditioned prompts 𝒫̂. The semantic-conditioned prompts for the l-th layer are defined as follows:

𝒫̂^l = [p_1^l + 𝒯̂, p_2^l + 𝒯̂, ..., p_M^l + 𝒯̂],

where 𝒯̂ = ϕ_c(𝒯) and ϕ_c denotes the projection layer. Consequently, with the conditioned prompts 𝒫̂, the patch embedding process in the l-th layer of the image encoder can be expressed as:

[[cls], ⋅, 𝒱^l+1] = Layer^l_enc([[cls], 𝒫̂^l, 𝒱^l]),

where the initial input 𝒱^1 = [v^1_1, v^1_2, ⋯, v^1_N] is the sequence of patches embedded by the patch embedding layer preceding the encoder. Note that, following VPT <cit.>, we discard the output tokens corresponding to 𝒫̂ (represented as ⋅ above) and do not propagate them to the subsequent layer. A minimal sketch of this conditioning step and of the similarity map computation is given below.
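As a concrete illustration, the following PyTorch sketch shows one way to implement the semantic-conditioned prompts and the similarity map. All module names, dimensions, and the stand-in transformer blocks are illustrative assumptions, not the authors' released implementation; in real use, the blocks would be the frozen CLIP ViT layers and the text embedding would be projected into the same space as the patch tokens.

import torch
import torch.nn as nn
import torch.nn.functional as F

d, d_text, M, LAYERS = 768, 512, 10, 12   # widths and counts are illustrative

class SPTEncoder(nn.Module):
    def __init__(self, clip_blocks):
        super().__init__()
        self.blocks = clip_blocks                                # frozen CLIP blocks (assumed)
        self.prompts = nn.Parameter(torch.randn(LAYERS, M, d))   # learnable tokens P^l
        self.phi_c = nn.Linear(d_text, d)                        # projection producing T-hat

    def forward(self, patches, cls_tok, sem):
        # patches: (B, N, d) embedded patches V^1; sem: (1, d_text) semantic embedding T
        t_hat = self.phi_c(sem)                                  # (1, d)
        x = patches
        for l, blk in enumerate(self.blocks):
            p = (self.prompts[l] + t_hat).expand(x.size(0), -1, -1)  # P-hat^l = P^l + T-hat
            out = blk(torch.cat([cls_tok, p, x], dim=1))
            # discard the prompt outputs, as in VPT
            cls_tok, x = out[:, :1], out[:, 1 + M:]
        return x                                                 # patch tokens V

def similarity_map(V, sem_joint, hw=(24, 24)):
    # cosine similarity between every patch embedding and the text embedding;
    # sem_joint: (1, dim of V), assumed already projected into the patch space
    S = F.cosine_similarity(V, sem_joint.unsqueeze(1), dim=-1)   # (B, N)
    return S.view(V.size(0), *hw)

# usage with stand-in blocks (real use: frozen CLIP ViT-B/16 layers)
blocks = nn.ModuleList(nn.TransformerEncoderLayer(d, 8, batch_first=True)
                       for _ in range(LAYERS))
enc = SPTEncoder(blocks)
V = enc(torch.randn(2, 576, d), torch.randn(2, 1, d), torch.randn(1, d_text))
S = similarity_map(V, torch.randn(1, d))                         # (2, 24, 24)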
§.§ Learnable Affine Transformation (LAT)

Through the adoption of SPT, we obtain visual representations in which the regions corresponding to the target class are highlighted. Nevertheless, because object counting aims to discover the central points of objects rather than cover their entire area, a discrepancy might arise between the information contained in the similarity map S and the loss to be backpropagated during training. In light of this, we propose a learnable affine transformation (LAT) to convert the similarity map S into a counting map Ŝ, establishing a more task-specific visual-semantic linkage centered around individual objects:

Ŝ = W ⊗ S + B,

where W, B ∈ℝ^H×W are learnable matrices and ⊗ denotes element-wise multiplication. In addition, we directly optimize the counting map Ŝ with a rank-aware contrastive loss to learn the proper degree of activation for object counting; details of this loss are elaborated in Sec. <ref>. With LAT, the decoder input [𝒱, S] in Eq. <ref> of VLBase is replaced by [𝒱, Ŝ].

§.§ Segment-aware Skip Connection (SaSC)

For ZSOC, where the model encounters unseen classes during inference, it is important to train a decoder that is tailored for object counting while maintaining generalization ability. Sharing the motivation of VLBase in Sec. <ref> (that CLIP features inherently preserve local semantics), we adopt skip connections that route intermediate features of the encoder to their counterparts in the decoder. As shown in Fig. <ref>, the l-th encoder patch features are spatially concatenated and projected to yield decoder-assistive representations. We then multiply by the affine-transformed similarity map Ŝ to emphasize the object-relevant patches. Finally, these patch features are added to the corresponding k-th layer features of the decoder. Formally, the k-th decoding layer with SaSC, receiving the l-th encoder features, operates as follows:

ℱ^k = Layer^k_dec(ℱ^k-1 + ϕ_proj^k(𝒱^l) ⊗ Ŝ),

where ϕ_proj^k(·), ℱ^k, and ⊗ stand for the feature projection block, the output of the k-th decoding layer, and the channel-wise Hadamard product, respectively.

§.§ Training Objectives

In addition to the counting loss described in Eq. <ref>, VLCounter employs a rank-aware contrastive loss <cit.> to facilitate precise local visual-language alignment between the patch embeddings and the semantic embedding. Whereas ℒ_count trains the whole model on the counting objective, the focus of SPT and LAT is on learning to produce a counting-tailored similarity map in the encoder. In this regard, we apply the rank-aware contrastive loss to the counting map Ŝ to assign higher activations to the patches near object centers. To design hierarchical guidance for this loss, we first normalize the ground truth density map D_gt to lie between 0 and 1. Then, we iterate over the batch K times with different thresholds to prepare positive and negative sets: patches whose corresponding value in D_gt exceeds the threshold are gathered as positives, and otherwise as negatives. Formally, the rank contrastive loss with positive set Ŝ_r^pos and negative set Ŝ_r^neg is:

ℒ_rank = -∑_k=1^K log [ ∑_Ŝ_i∈Ŝ_r^pos exp(Ŝ_i/τ) / ∑_Ŝ_j∈(Ŝ_r^pos∪Ŝ_r^neg) exp(Ŝ_j/τ) ],

where τ is a temperature parameter. Combining the objectives of Eq. <ref> and Eq. <ref>, the final objective of VLCounter is:

ℒ_total = ℒ_count + λ·ℒ_rank,

where λ is a hyperparameter that balances the two losses. A minimal sketch of LAT and this loss is given below.
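The sketch below illustrates LAT and the rank-aware contrastive loss for a single counting map. The threshold values follow the appendix; the single-image simplification, names, and map size are our own assumptions.

import torch

H = W = 24
W_mat = torch.nn.Parameter(torch.ones(H, W))    # affine weight, initialized to 1
B_mat = torch.nn.Parameter(torch.zeros(H, W))   # affine bias, initialized to 0

def counting_map(S):
    return W_mat * S + B_mat                    # element-wise affine transform: S-hat

def rank_contrastive_loss(S_hat, D_gt, thresholds=(0.8, 0.6, 0.4), tau=1.0):
    # normalize the ground-truth density map to [0, 1]
    D = (D_gt - D_gt.min()) / (D_gt.max() - D_gt.min() + 1e-8)
    loss = S_hat.new_zeros(())
    for thr in thresholds:                      # K = len(thresholds) levels
        pos = S_hat[D >= thr]                   # patches near object centers
        if pos.numel() == 0:
            continue
        # -log( sum_pos exp(s/tau) / sum_all exp(s/tau) ), computed via logsumexp
        num = torch.logsumexp(pos / tau, dim=0)
        den = torch.logsumexp(S_hat.flatten() / tau, dim=0)
        loss = loss - (num - den)
    return loss

# usage on random maps
S, D_gt = torch.rand(H, W), torch.rand(H, W)
print(rank_contrastive_loss(counting_map(S), D_gt))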
§ EXPERIMENTS

In this section, we provide a comprehensive account of our experiments. We first describe the implementation details, datasets, and evaluation metrics in Sec. 5.1, then compare our model with existing state-of-the-art methods in Sec. 5.2, and finally explore each component in depth in Sec. 5.3.

§.§ Experimental Details

Implementation Details. For all experiments, we employed CLIP ViT-B/16 as our encoders, followed by a decoder consisting of 4 repeated units. Each unit consists of one feature projection block (Fig. <ref>) and one additional convolutional layer. Each input image is resized to 384×384, and augmentations such as Gaussian noise, Gaussian blur, horizontal flips, and color jittering are applied. We trained the model using the AdamW <cit.> optimizer with a learning rate of 1e^-4 and weight decay of 1e^-2 for 200 epochs, with a batch size of 16, on a single NVIDIA RTX A6000. For the loss-balancing hyperparameter λ and the temperature τ, we used 1e^-6 and 1, respectively.

Datasets. To explore the counting capability of the models, we use FSC147 <cit.>, the first large-scale dataset for class-agnostic counting. It includes 6,135 images from 147 categories, mainly comprising foods, animals, kitchen utensils, and vehicles. We also utilize the CARPK and PUCPR+ <cit.> datasets. These datasets exhibit different properties from the images in FSC147, so we use them for cross-dataset validation to test the model's generality. Specifically, CARPK consists of 1,448 parking lot images with nearly 90,000 cars, taken from a drone at roughly 40 meters altitude. PUCPR+ contains nearly 16,456 cars in total, captured in images taken from a 10th-floor vantage point.

§.§ Comparison with State-of-the-art Methods

We compare VLBase and VLCounter against previous class-agnostic counting methods in Tab. <ref>. Despite its simple design, the performance of VLBase is comparable to that of two-stage methods that even utilize additional training data. VLCounter, on the other hand, clearly surpasses the other ZSOC baselines. In particular, compared to ZSC, VLCounter achieves relative improvements of 32.94% and 22.81% in validation and test MAE, respectively. Moreover, we highlight its results being comparable to those of the state-of-the-art few-shot counting method, BMNet. This is an especially notable milestone for ZSOC, since few-shot methods are generally seen as an upper bound for two-stage ZSOC methods; the counting framework in two-stage works is usually adopted from few-shot methods.

The rightmost columns report the inference speed per image. As our one-stage approaches (VLBase and VLCounter) only require the time to count the objects, their inference is much faster than that of a two-stage method (ZSC), which needs extra time to discover exemplars (denoted as α, since the implementation is not fully publicized). In addition to faster inference, VLBase and VLCounter have far fewer learnable parameters, resulting in shorter training time (training VLCounter is approximately 2× faster than BMNet+).

Following previous class-agnostic counting methods <cit.>, we verify the generalization capability of VLBase and VLCounter through a cross-dataset evaluation on the CARPK and PUCPR+ datasets in Tab. <ref>; both models demonstrate clear benefits in generalization.
Whereas the performance gaps between few-shot methods and VLBase are reduced, VLCounter is superior to the other methods, improving MAE by up to 38.12% and 27.54% on the CARPK and PUCPR+ datasets, respectively, compared to BMNet+. In particular, we emphasize that the single-digit MAE and RMSE of VLCounter are obtained without any fine-tuning (the average number of cars per image in CARPK is 62). We attribute this success in cross-dataset validation to adapting the generality of CLIP to the counting task and to incorporating multi-level features that provide rich semantics for the prediction; these two factors account for approximately 54% and 46% of the improvement in CARPK MAE, respectively.

§.§ Ablation Studies on VLCounter

Component Analysis. To validate the effectiveness of the individual components, we conducted an ablation study, presented in Tab. <ref>. Starting with VLBase (M1), we add SPT, LAT, and SaSC in M2, M3, and M4, respectively. Among the individual components, the effect of SPT, demonstrated in M2, is the most pronounced. This significant improvement demonstrates the importance of fine-tuning with semantic conditioning. LAT in M3 is another important component: while it does not yield a dramatic performance increase on its own, the counting map Ŝ derived from LAT is an essential ingredient of SaSC. Lastly, M4 shows that SaSC not only boosts generalization capability but also improves task-specific predictions. This is because the layer-wise intermediate representations in the CLIP encoder are themselves semantically meaningful <cit.>, and SaSC aggregates them to aid the counting prediction.

Effect of conditioning semantic information. We further conduct ablation studies on semantic conditioning. In Tab. <ref>, we compare conventional VPT with SPT and test semantic conditioning in SaSC. Beyond the task-specificity granted by VPT, conditioning the prompts on semantics makes them more semantically targeted. In addition, using semantic conditions to filter the knowledge passed to the decoder through the residual paths clearly benefits SaSC. We believe that conditioning on the counting map Ŝ suppresses object-irrelevant information, thereby contributing to the improvements.

Effect of plural text prompts. We follow CLIP <cit.> in using varied context prompts to encode the semantic embeddings. However, since the counting task generally assumes the presence of multiple instances in every image, we modified the text prompts to be in plural form. In Tab. <ref>, we compare the results of using singular and plural text prompts; plural prompts have a clear advantage for the counting task.

§.§ Qualitative Results

Along with the quantitative results, we study how the components of VLCounter affect class-specificity. In Fig. <ref>, we compare both the similarity maps and the density maps of VLBase and VLCounter. With semantic conditioning and a fine-tuned similarity map, the salient regions become more compact: background activations are suppressed (1st, 2nd rows) and object regions are clearly localized (2nd, 3rd rows). Then, by aggregating multi-level representations of rich semantics with these similarity maps in the decoder, we observe a clear gap between the density maps predicted by VLBase and VLCounter, especially for densely populated images (4th row). Furthermore, we provide cross-dataset results in the last two rows of Fig. <ref>.
Consistent with the discussion of the FSC147 predictions, we verify that VLCounter is a counting-tailored model that generalizes across new categories, shapes, and densities of objects. These results confirm the advantage of employing a pretrained vision-language model for capturing the semantics of newly seen objects, i.e., cars. Refer to the appendix for more visualizations.

§ CONCLUSION

In this work, we present VLBase and VLCounter, simple end-to-end frameworks for zero-shot object counting that eliminate the need to discover exemplars. In short, VLBase is built upon the pre-trained vision-language model CLIP, and VLCounter introduces three key components that bring task- and object-specificity. Whereas semantic-conditioned prompt tuning and the learnable affine transformation fine-tune the encoding process to obtain counting-tailored representations, the segment-aware skip connection is designed to train a generalizable decoder on top of this knowledge. Our thorough experiments on FSC147 and cross-dataset benchmarks validate the effectiveness and efficiency of VLCounter.

§ ADDITIONAL IMPLEMENTATION DETAILS

As indicated in the manuscript, we adopted CLIP ViT-B/16 for our encoders. However, since the image encoder was trained on images of 224×224 resolution, we resized the position embeddings of the image encoder so that CLIP can handle images of 384×384 resolution. For ϕ_c(·), which projects the semantic vectors for SPT, we share one linear layer across all visual encoder layers of CLIP. The number of learnable tokens in SPT, M, is set to 10. For LAT, the learnable matrices W and B are initialized to 1 and 0, respectively, and the thresholds for the rank-contrastive loss are set to [0.8, 0.6, 0.4], setting the iteration count K to 3. Finally, for SaSC, we extract the encoder features 𝒱 from layers l = [7,8,9] and pass them to the decoder features ℱ at layers k = [2,3,4].

§ EFFECT OF LEARNABLE TOKENS

§.§ Effect of the Number

We determine the optimal number of learnable tokens required to facilitate an effective transfer of CLIP. Too few learnable tokens might not suffice to transfer a pre-trained large model effectively; however, an excessive number of visual prompts can also hurt performance, as CLIP loses generality. Based on the experiments reported in Fig. <ref>, we determined that the optimal number of learnable tokens for our task is 10.

§.§ Effect of the Depth

Beyond the evident influence of the number of learnable tokens on ZSOC performance, we also anticipate that the placement of these tokens within the encoder layers has a substantial impact. For clarity, we label the 12 layers of the vision transformer in the CLIP image encoder from 1 to 12. As reported in Tab. <ref>, introducing prompt tokens in the earlier layers typically yields better performance than placing them in the later layers. The best performance is achieved when learnable prompt tokens are inserted into every image encoding layer (layers 1–12), which is also the default setting in our experiments.

§ LAT VALUE DISTRIBUTION

In the manuscript, we noted that LAT is intended to make the similarity map more counting-specific: it guides activations to be more compact around object centers without significantly altering the similarity map.
To substantiate this claim, we plot the distributions of the W and B matrices in Fig. <ref>. Since the values of W and B are concentrated around 1 and 0, respectively, we confirm that LAT preserves the localization capability of our encoder and only fine-tunes the similarity map to be more counting-specific.

§ EFFECT OF ENCODER FEATURES IN SASC

Building on the arguments made for SaSC, namely that aggregating encoder features during decoding yields generalizability and rich semantics, we explore which combinations of successive layers give the best results. Through this investigation, we aim to determine which layers' features are most conducive to the overall decoding performance. Fig. <ref> shows that the shallow encoder layers do not perform well, owing to their limited acquisition of meaningful patch-level information. In addition, we find a tendency similar to the arguments of <cit.>, who note that the feed-forward networks (FFNs) in the deeper CLIP layers are more likely to harm the vision-language alignment and localization capabilities. In this context, we chose the features from the 7th, 8th, and 9th encoding layers and incorporated them into the 2nd, 3rd, and 4th decoding layers.

§ CONTEXT PROMPTS

In Tab. 5 of the manuscript, we demonstrated the influence of the form of the context prompts. Here, we provide the lists of prompts used in those experiments.

For the singular form, the following 15 templates were used:
'A photo of a {}.'
'A photo of a small {}.'
'A photo of a medium {}.'
'A photo of a large {}.'
'This is a photo of a {}.'
'This is a photo of a small {}.'
'This is a photo of a medium {}.'
'This is a photo of a large {}.'
'A {} in the scene.'
'A photo of a {} in the scene.'
'There is a {} in the scene.'
'There is the {} in the scene.'
'This is a {} in the scene.'
'This is the {} in the scene.'
'This is one {} in the scene.'

For the plural form, the following 11 templates were used:
'A photo of a number of {}.'
'A photo of a number of small {}.'
'A photo of a number of medium {}.'
'A photo of a number of large {}.'
'There is a photo of a number of {}.'
'There is a photo of a number of small {}.'
'There is a photo of a number of medium {}.'
'There is a photo of a number of large {}.'
'A number of {} in the scene.'
'A photo of a number of {} in the scene.'
'There are a number of {} in the scene.'

§ ADDITIONAL QUALITATIVE RESULTS

In addition to the qualitative results in the manuscript, we provide more results in Fig. <ref>, comparing the vision-language similarity maps and the density maps produced by VLBase and VLCounter on the FSC147 dataset. Note that we could not compare with the previous two-stage baselines, since their implementations are not fully publicized.

§ COMPARISON TO CONCURRENT WORK

Recently, many efforts have been made to perform pixel-level dense prediction using CLIP. While the concurrent CLIP-Count <cit.> requires additional parameters for visual-text interaction layers, our approach incurs little extra memory cost, since we leverage the semantic tokens within the image encoding process. In Tab. <ref>, we compare the number of learnable parameters and multiply-accumulate operations (MACs), revealing that our method has an advantage in computational efficiency. Moreover, while our gains over CLIP-Count appear marginal on the FSC147 dataset (Tab. <ref>), we emphasize the large performance gaps in cross-domain scenarios in Tab. <ref> (+44.4% and +5% MAE on the CARPK and IOCfish5k datasets, respectively).
Foundations of Reinforcement Learning and Interactive Decision Making

Dylan J. Foster and Alexander Rakhlin

Last Updated: December 2023

These lecture notes are based on a course taught at MIT in Fall 2022 (https://www.mit.edu/~rakhlin/course-decision-making.html) and Fall 2023 (https://www.mit.edu/~rakhlin/course-decision-making-f23.html). This is a live draft, and all parts will be updated regularly. Please send us an email if you find a mistake, typo, or missing reference.

§ INTRODUCTION

§.§ Decision Making

This is a course about learning to make decisions in an interactive, data-driven fashion. When we say interactive decision making, we are thinking of problems such as:

* Medical treatment: based on a patient's medical history and vital signs, we need to decide what treatment will lead to the most positive outcome.
* Controlling a robot: based on sensor signals, we need to decide what signals to send to a robot's actuators in order to navigate to a goal.

For both problems, we (the learner/agent) are interacting with an unknown environment. In the robotics example, we do not necessarily know a priori how the signals we send to our robot's actuators change its configuration, or what the landscape it is trying to navigate looks like. However, because we are able to actively control the agent, we can learn to model the environment on the fly as we make decisions and collect data, which will reduce uncertainty and allow us to make better decisions in the future. The crux of the interactive decision making problem is to make decisions in a way that balances (i) exploring the environment to reduce our uncertainty and (ii) maximizing our overall performance (e.g., reaching a goal state as fast as possible).

fig:decision-making depicts an idealized interactive decision making setting, which we will return to throughout this course. Here, at each round t, the agent (doctor) observes the medical history and vital signs of a patient, summarized in a context x_t, makes a treatment decision π_t, and then observes the outcome of the treatment in the form of a reward r_t and an auxiliary observation o_t about, say, illness progression. With time, we hope that the doctor will learn a good mapping x_t ↦ π_t from contexts to decisions. How can we develop an automated system that achieves this goal?

It is tempting to cast the problem of finding a good mapping x_t ↦ π_t as a supervised learning problem. After all, modern deep neural networks are able to achieve excellent performance on many tasks, such as image classification and recognition, and it is not out of the question that there exists a good neural network for the medical example as well. The question is: how do we find it? In supervised learning, finding a good predictor often amounts to fitting an appropriate model, such as a neural network, to the data. In the above example, however, the available data may be limited to what treatments have been assigned to patients, potentially missing better options.
It is the process of active data collection with a controlled amount of exploration that we would like to study in this course. The decision making framework in fig:decision-making generalizes many interactive decision making problems the reader might already be familiar with, including multi-armed bandits, contextual bandits, and reinforcement learning. We will cover the foundations of algorithm design and analysis for all of these settings from a unified perspective, with an emphasis on sample efficiency (i.e., how to learn a good decision making policy using as few rounds of interaction as possible).

§.§ A Spectrum of Decision Making Problems

To design algorithms for general interactive decision making problems such as fig:decision-making, there are many complementary challenges we must overcome. These challenges correspond to different assumptions we can place on the underlying environment and decision making protocol, and give rise to what we describe as a spectrum of decision making problems, illustrated in fig:axes. There are three core challenges we will focus on throughout the course, given by the axes of fig:axes.

* Interactivity. Does the learning agent observe data passively, or do the decisions it makes actively influence what data is collected? In the setting of fig:decision-making, the doctor observes the effects of the prescribed treatments, but not the counterfactuals (the effects of the treatments not given). Hence, the doctor's decisions influence the data they can collect, which in turn may significantly alter their ability to estimate the effects of different treatments. In classical machine learning, by contrast, a dataset is typically given to the learner upfront, with no control over how it is collected.

* Function approximation and generalization. In supervised statistical learning and estimation, one typically employs function approximation (e.g., models such as neural networks, kernels, or forests) to generalize across the space of covariates. For decision making, we can employ function approximation in a similar fashion, either to generalize across a space of contexts, or to generalize across the space of decisions. In the setting of fig:decision-making, the context x_t summarizing the medical history and vital signs might be a highly structured object. Likewise, the treatment π_t might be a high-dimensional vector with interacting components, or a complex multi-stage treatment strategy. For simple settings such as multi-armed bandits, however, it is common to assume the decision space is unstructured and forgo generalization.

* Data. Is the data (e.g., rewards or observations) observed by our learning algorithm produced by a fixed data-generating process, or does it evolve arbitrarily, perhaps even adversarially, in response to our actions? If there is a fixed data-generating process, do we wish to model it directly, or should we instead aim to be agnostic? Do we observe only the labels of images, as in supervised learning, or a full trajectory of states, actions, and rewards for a policy employed by the robot?

As shown in <ref>, many basic decision making and learning frameworks (contextual bandits, structured bandits, statistical learning, online learning) can be thought of as idealized problems that each capture one or more of these challenges, while richer settings such as reinforcement learning encompass all of them. <ref> can be viewed as a roadmap for the course.
We start with a brief introduction to Statistical Learning (<ref>) and Online Learning (<ref>); the concepts and results stated here will serve as a backbone for the rest of the course. We will then study, in order, the problems of Multi-Armed Bandits (<ref>), Contextual Bandits (<ref>), Structured Bandits (<ref>), Tabular Reinforcement Learning (<ref>), General Decision Making (<ref>), and Reinforcement Learning with General Function Approximation (<ref>). Each of these topics will add a layer of complexity, and our aim is to develop a unified approach to all of the aforementioned problems, both in terms of statistical complexity (the number of interactions required to achieve the goal) and in terms of algorithm design.

§.§ Minimax Perspective

For much of the course, we take a minimax point of view. Abstractly, let ℳ be a set of possible models (or, choices for the environment) that can be encountered by the learner/decision maker. The set ℳ can be thought of as representing the prior knowledge of the learner about the underlying environment. Let ALG denote a learning algorithm, and let Perf_T(ALG, M) be some notion of performance of algorithm ALG on model M∈ℳ after T rounds of interaction (or, in passive learning, after observing T datapoints). We would like to develop algorithms that perform well no matter what the model M∈ℳ is, in the sense that ALG approximately solves the minimax problem

min_ALG max_M∈ℳ Perf_T(ALG, M).

Understanding the statistical complexity (or, difficulty) of a given problem amounts to establishing matching (or nearly matching) upper and lower bounds on the minimax value in <ref>. While developing such bounds for specific model classes ℳ of interest might be a simple task, the grand aim of this course is to develop a more fundamental, unified understanding of what makes any model class ℳ easy versus hard, and to give sharp results for all (or nearly all) ℳ.

On the algorithmic side, we would like to better understand the scope of optimal algorithms that solve (<ref>). While the minimax problem is itself an optimization problem, the space of all algorithms is typically prohibitively large. One of the key insights to be leveraged in this course is that for general decision making problems, we can restrict ourselves to algorithms that interleave a type of supervised learning called online estimation (described in <ref>) with a principled choice of exploration strategy that balances greedily maximizing performance (exploitation) with information acquisition (exploration). As we show, such algorithms achieve or nearly achieve optimality in (<ref>) for a surprisingly wide range of decision making problems.

§.§ Statistical Learning: Brief Refresher

We begin with a short refresher on the statistical learning problem. Statistical learning is a purely passive problem in which the learner does not directly interact with the environment, but it captures the challenge of generalization and function approximation in the context of fig:axes.

In the statistical learning problem, we receive examples (x_1,y_1),…,(x_T,y_T) ∈ 𝒳×𝒴, drawn i.i.d. from an (unknown) distribution 𝒟. Here x_t∈𝒳 are features (sometimes called contexts or covariates), and 𝒳 is the feature space; y_t∈𝒴 are called outcomes, and 𝒴 is the outcome space. Given (x_1,y_1),…,(x_T,y_T), the goal is to produce a model (or, estimator) f̂:𝒳→𝒴' that will do a good job predicting outcomes from features for future examples (x,y) drawn from 𝒟.[Note that we allow the outcome space 𝒴 to be different from the prediction space 𝒴'.]
To measure prediction performance, we take as given a loss function ℓ:𝒴'×𝒴→ℝ. Standard examples include:

* Regression, where common losses include the square loss ℓ(a,b)=(a-b)^2 when 𝒴=𝒴'=ℝ.
* Classification, where 𝒴=𝒴'={0,1} and we consider the indicator (or 0-1) loss ℓ(a,b)=𝕀{a≠b}.
* Conditional density estimation with the logarithmic loss (log loss). Here 𝒴'=Δ(𝒴), the set of distributions on 𝒴, and for p∈𝒴', ℓ(p, y) = -log p(y).

For a function f:𝒳→𝒴', we measure the prediction performance via the population (or, “test”) loss:

L(f) := 𝔼_(x,y)∼𝒟[ℓ(f(x),y)].

Letting ℋ_T := {(x_t,y_t)}_t=1^T denote the dataset, a (deterministic) algorithm is a map that takes the dataset as input and returns a function/predictor:

f̂(·; ℋ_T): 𝒳→𝒴'.

The goal in designing algorithms is to ensure that 𝔼[L(f̂)] is minimized, where 𝔼[·] denotes expectation with respect to the draw of the dataset ℋ_T. Without any assumptions, it is not possible to learn a good predictor unless the number of examples T scales with |𝒳| (this is sometimes called the no-free-lunch theorem). The basic idea behind statistical learning is to work with a restricted class of functions ℱ ⊆ {f:𝒳→𝒴'} in order to facilitate generalization. The class ℱ can be thought of as (implicitly) encoding prior knowledge about the structure of the data. For example, in computer vision, if the features x_t correspond to images and the outcomes y_t are labels (e.g., “cat” or “dog”), one might expect that choosing ℱ to be a class of convolutional neural networks will work well, since this encodes spatial structure.

For the problem of conditional density estimation, we shall overload notation and write f(x) and f(·|x) interchangeably for the conditional distribution. In this setting, the learner is required to output a distribution for each x rather than a point estimate (see fig:cond-density). For an outcome y, the loss is the negative log of the conditional density at that outcome.

Empirical risk minimization and excess risk. The most basic and well-studied algorithmic principle for statistical learning is Empirical Risk Minimization (ERM). Define the empirical loss for the dataset ℋ_T as

L̂(f) = 1/T∑_i=1^T ℓ(f(x_i),y_i).

Then, the empirical risk minimizer with respect to the class ℱ is given by f̂ ∈ argmin_f∈ℱ L̂(f). To measure the performance of ERM and other algorithms that learn with ℱ, we consider the excess loss (or, regret)

ℰ(f) = L(f) - min_f'∈ℱ L(f').

Intuitively, the quantity min_f'∈ℱ L(f') in eq:excess_risk captures the best prediction performance any function in ℱ can achieve, even with knowledge of the true distribution. If an algorithm has low excess risk, this means that we are predicting future outcomes nearly as well as any sample-based algorithm can hope to. ERM and other algorithms can ensure that ℰ(f̂) is small in expectation or with high probability over the draw of the dataset ℋ_T.

Connection to estimation. An appealing feature of the formulation in eq:excess_risk is that it does not presuppose any relationship between the class ℱ and the data distribution; in other words, it is agnostic. However, if ℱ does happen to be good at modeling the data distribution, the excess loss has an additional interpretation based on estimation. For prediction with the square loss, we say that the problem is well-specified (or, realizable) if the regression function f^⋆(a) := 𝔼[y|x=a] is in ℱ. The regression function f^⋆ can also be seen as a minimizer of L(f) over all measurable functions f, for the same reason that 𝔼_z(z-b)^2 is minimized at b=𝔼[z].
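To make the ERM principle concrete, here is a small self-contained sketch on synthetic data, with ℱ a finite set of threshold classifiers under the 0-1 loss; the data-generating process and the class are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
T = 500
x = rng.uniform(0, 1, size=T)
y = (x > 0.3).astype(float)                         # true threshold at 0.3
y = np.where(rng.uniform(size=T) < 0.1, 1 - y, y)   # 10% label noise

thresholds = np.linspace(0, 1, 21)                  # finite class F, |F| = 21

def predict(theta, x):
    return (x > theta).astype(float)

# empirical 0-1 risk of each f in F, and the empirical risk minimizer
emp_risk = [np.mean(predict(th, x) != y) for th in thresholds]
erm_theta = thresholds[int(np.argmin(emp_risk))]
print(f"ERM threshold: {erm_theta:.2f}, empirical risk: {min(emp_risk):.3f}")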
For the square loss, if the problem is well-specified, then for all f:𝒳→ℝ,

ℰ(f) = 𝔼_x[(f(x)-f^⋆(x))^2].

Adding and subtracting f^⋆ in the first term of (<ref>), we have

(f(x)-y)^2 - (f^⋆(x)-y)^2 = (f(x)-f^⋆(x))^2 + 2(f^⋆(x)-y)(f(x)-f^⋆(x)),

and the cross term vanishes in expectation, since 𝔼[y|x] = f^⋆(x). Inspecting eq:excess_loss_l2, we see that any f achieving low excess loss necessarily estimates the true regression function f^⋆; hence, the goals of prediction and estimation coincide.

Guarantees for ERM. We give bounds on the excess loss of ERM for perhaps the simplest special case, in which ℱ is finite.

For any finite class ℱ, empirical risk minimization satisfies

𝔼[ℰ(f̂)] ≲ comp(ℱ, T),

where
* For any bounded loss (including classification), comp(ℱ,T) = √(log|ℱ|/T).
* For square loss regression, if the problem is well-specified, comp(ℱ,T) = log|ℱ|/T.

In addition, there exists a (different) algorithm that achieves comp(ℱ,T) = log|ℱ|/T for both square loss regression and conditional density estimation, even when the problem is not well-specified.

Henceforth, we shall use the symbol ≲ to indicate an inequality that holds up to constants or other problem parameters deemed less important for the present discussion. As an example, the range of the losses in the first part is hidden in this notation, and we only focus on the dependence of the right-hand side on ℱ and T.

The rate comp(ℱ,T) = √(log|ℱ|/T) above is sometimes referred to as a slow rate, and is optimal for generic losses. The rate comp(ℱ,T) = log|ℱ|/T is referred to as a fast rate, and takes advantage of additional structure (curvature, or strong convexity) of the square loss. Critically, both bounds scale only with the cardinality of ℱ, and do not depend on the size of the feature space 𝒳, which could be infinite. This reflects the fact that working with a restricted function class allows us to generalize across the feature space 𝒳. In this context, log|ℱ| should be thought of as a notion of capacity, or expressiveness, for ℱ. Intuitively, choosing a larger, more expressive class ℱ will require a larger amount of data, but will make the excess loss bound in eq:excess_risk more meaningful, since the benchmark will be stronger. Throughout these lecture notes, we restrict our attention to finite classes whenever possible in order to simplify the presentation. If one wishes to move beyond finite classes, a well-developed literature within statistical learning provides various notions of complexity for ℱ that lead to bounds on comp(ℱ,T) for ERM and other algorithms. These include the Vapnik-Chervonenkis (VC) dimension for classification, Rademacher complexity, and covering numbers. Standard references include <cit.>.

§.§ Refresher: Random Variables and Averages

To prove prop:iid_finite_class and similar generalization bounds, the main tools we will use are concentration inequalities (or, tail bounds) for random variables.

A random variable Z is sub-Gaussian with variance factor (or variance proxy) σ^2 if

∀η∈ℝ, 𝔼[e^η(Z-𝔼Z)] ≤ e^σ^2η^2/2.

Note that if Z∼𝒩(0,σ^2) is Gaussian with variance σ^2, then it is sub-Gaussian with variance proxy σ^2. In this sense, sub-Gaussian random variables generalize the tail behavior of Gaussians.
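As a quick numerical sanity check of the tail behavior formalized by the results below, the following simulation compares the empirical tail of averages of bounded ([0,1]-valued) variables against the bound exp(-2Tu^2) obtained with variance proxy 1/4; all parameters here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
T, reps, u = 100, 100_000, 0.1
Z = rng.binomial(1, 0.5, size=(reps, T))       # Bernoulli(1/2), range [0, 1]
dev = Z.mean(axis=1) - 0.5                     # deviation of empirical mean
empirical_tail = np.mean(dev >= u)
hoeffding_bound = np.exp(-2 * T * u**2)        # exp(-T u^2 / (2 sigma^2)), sigma^2 = 1/4
print(f"empirical tail: {empirical_tail:.4f}, bound: {hoeffding_bound:.4f}")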
A standard application of the Chernoff method yields the following result.

If Z_1,…,Z_T are independent sub-Gaussian random variables with variance proxy σ^2, then

ℙ(1/T∑_i=1^T Z_i - 𝔼[Z] ≥ u) ≤ exp{-Tu^2/2σ^2}.

Applying this result to Z and -Z and taking a union bound yields the following two-sided guarantee:

ℙ(|1/T∑_i=1^T Z_i - 𝔼[Z]| ≥ u) ≤ 2exp{-Tu^2/2σ^2}.

Setting the right-hand side of (<ref>) to δ and solving for u, we find that for any δ∈(0,1), with probability at least 1-δ,

|1/T∑_i=1^T Z_i - 𝔼[Z]| ≤ √(2σ^2log(2/δ)/T).

The factor 2 under the logarithm in <ref> is the result of applying the union bound to (<ref>). Throughout the course, we will frequently apply the union bound to multiple (say N) high-probability events involving sub-Gaussian random variables. In this case, the union bound results in terms of the form log(N/δ). The mild logarithmic dependence is due to the sub-Gaussian tail behavior of the averages.

The following result shows that any bounded random variable is sub-Gaussian.

Any random variable Z taking values in [a,b] is sub-Gaussian with variance proxy (b-a)^2/4, i.e.,

∀η∈ℝ, ln𝔼exp{-η(Z-𝔼[Z])} ≤ η^2(b-a)^2/8.

As a consequence, for independent random variables Z_1,…,Z_T taking values in [a,b] almost surely, with probability at least 1-δ,

1/T∑_i=1^T Z_i - 𝔼[Z] ≤ (b-a)√(log(1/δ)/2T).

In the setting of sec:sl, using Hoeffding's inequality, we can now prove Part 1 (the slow rate) of prop:iid_finite_class.

Let ℱ = {f:𝒳→𝒴'} be finite, and assume ℓ∘f ∈ [0,1] almost surely. Then with probability at least 1-δ, ERM satisfies

L(f̂) - min_f∈ℱ L(f) ≤ 2√(log(2|ℱ|/δ)/2T).

For any f∈ℱ, we can write

L(f̂) - L(f) = [L(f̂) - L̂(f̂)] + [L̂(f̂) - L̂(f)] + [L̂(f) - L(f)],

where the middle term is nonpositive because f̂ minimizes the empirical loss. Observe that for all f:𝒳→𝒴', we have

|L(f) - L̂(f)| = |𝔼[ℓ(f(x),y)] - 1/T∑_i=1^T ℓ(f(x_i),y_i)|.

By the union bound and <ref>, with probability at least 1-|ℱ|δ,

∀f∈ℱ, |𝔼[ℓ(f(x),y)] - 1/T∑_i=1^T ℓ(f(x_i),y_i)| ≤ √(log(2/δ)/2T).

Rescaling δ and combining the displays above concludes the proof.

To deduce the in-expectation bound of <ref> from the high-probability tail bound of <ref>, a standard technique of “integrating out the tail” is employed. More precisely, for a nonnegative random variable U, it holds that 𝔼[U] ≤ τ + ∫_τ^∞ ℙ(U≥z) dz for all τ>0; choosing τ∝T^-1/2 concludes the proof.

To prove Part 2 (the fast rate) of prop:iid_finite_class, we need a more refined concentration inequality (Bernstein's inequality), which gives tighter guarantees for random variables with small variance.

Let Z_1,…,Z_T,Z be i.i.d. with variance Var(Z_i)=σ^2 and range |Z-𝔼Z|≤B almost surely. Then with probability at least 1-δ,

1/T∑_i=1^T Z_i - 𝔼[Z] ≤ σ√(2log(1/δ)/T) + Blog(1/δ)/3T.

The proof of Part 2 is given as an exercise in <ref>. We refer the reader to <ref> for further background on tail bounds.

§.§ Online Learning and Prediction

We now move on to the problem of online learning, or sequential prediction. The online learning problem generalizes statistical learning on two fronts:

* Rather than receiving a batch dataset of T examples all at once, we receive the examples (x_t,y_t) one by one, and must predict y_t from x_t using only the examples we have already observed.
* Instead of assuming that examples are drawn from a fixed distribution, we allow examples to be generated in an arbitrary, potentially adversarial fashion.

In more detail, at each timestep t, given the examples ℋ_t-1 = {(x_1,y_1), …, (x_t-1,y_t-1)} observed so far, the algorithm produces a predictor

f̂_t = f̂_t(·|ℋ_t-1),

which aims to predict the outcome y_t from the features x_t.
The algorithm's goal is to minimize the cumulative loss over T rounds, given by

∑_t=1^T ℓ(f̂_t(x_t), y_t),

for a known loss function ℓ:𝒴'×𝒴→ℝ; the cumulative loss can be thought of as a sum of “out-of-sample” prediction errors. Since we will not be placing assumptions on the data-generating process, it is not possible to make meaningful statements about the cumulative loss itself. However, we can aim to ensure that this cumulative loss is not much worse than the best empirical explanation of the data by functions in a given class ℱ. That is, we measure the algorithm's performance via the regret to ℱ:

Reg_T = ∑_t=1^T ℓ(f̂_t(x_t), y_t) - min_f∈ℱ ∑_t=1^T ℓ(f(x_t), y_t).

Our aim is to design prediction algorithms that keep the regret small for any sequence of data. As in statistical learning, the class ℱ should be thought of as capturing our prior knowledge about the problem, and might be a linear model or a neural network. At first glance, keeping the regret small for arbitrary sequences might seem like an impossible task, as it stands in stark contrast with statistical learning, where data is generated i.i.d. from a fixed distribution. Nonetheless, we will see that algorithms with guarantees similar to those for statistical learning are available.

Let us remark that it is often useful to apply online learning methods in settings where data is not fully adversarial, but evolves according to processes too difficult to model directly. For example, in the chapters that follow, we will apply online methods as subroutines within more sophisticated algorithms for decision making. Here, the past decisions, while in our purview, do not look like i.i.d. or simple time-series data.

The online learning protocol does not require that f̂_t lie in ℱ. A method that chooses functions from ℱ will be called proper, and one that selects predictors outside of ℱ will be called improper. It will also be useful to allow for randomized predictions of the form

f̂_t ∼ q_t(·|ℋ_t-1),

where q_t is a distribution over functions, typically over elements of ℱ. For randomized predictions, we slightly abuse notation and write the regret as

Reg_T = ∑_t=1^T 𝔼_f̂_t∼q_t[ℓ(f̂_t(x_t), y_t)] - min_f∈ℱ ∑_t=1^T ℓ(f(x_t), y_t).

The algorithms we introduce below ensure small regret even if the data are adversarially and adaptively chosen. More precisely, for deterministic algorithms, (x_t,y_t) may be chosen based on f̂_t and all the past data, while for randomized algorithms, Nature can only base this choice on q_t.

In the context of fig:axes, online learning generalizes statistical learning by considering arbitrary sequences of data, but still allows for general-purpose function approximation and generalization via the class ℱ. While the setting involves making predictions in an online fashion, we do not think of it as an interactive decision making problem, because the predictions made by the learning agent do not directly influence what data the agent gets to observe.

§.§.§ Connection to Statistical Learning

Online learning can be thought of as a generalization of statistical learning, and in fact, algorithms for online learning immediately yield algorithms for statistical learning via a technique called online-to-batch conversion. This result, formalized by the following proposition, rests on two observations: the cumulative loss of the algorithm looks like a sum of out-of-sample errors, and the minimum empirical fit to the realized data (over ℱ) is, on average, a harder (that is, smaller) benchmark than the minimum expected loss over ℱ.
Suppose the examples (x_1,y_1),…,(x_T,y_T) are drawn i.i.d. from a distribution 𝒟, and suppose the loss function a↦ℓ(a,b) is convex in the first argument for every b. Then for any online learning algorithm, if we define

f̂(x) = 1/T∑_t=1^T f̂_t(x),

we have

𝔼[ℰ(f̂)] ≤ 1/T·𝔼[Reg_T].

Let (x,y)∼𝒟 be a fresh sample independent of the history ℋ_T. First, by Jensen's inequality,

𝔼[L(f̂)] = 𝔼[𝔼_(x,y) ℓ(1/T∑_t=1^T f̂_t(x), y)] ≤ 𝔼[1/T∑_t=1^T 𝔼_(x,y) ℓ(f̂_t(x), y)],

which is equal to 𝔼[1/T∑_t=1^T 𝔼_(x_t,y_t) ℓ(f̂_t(x_t), y_t)], since f̂_t is a function of ℋ_t-1, and (x,y) and (x_t,y_t) are i.i.d. Second,

min_f∈ℱ L(f) = min_f∈ℱ 𝔼[1/T∑_t=1^T ℓ(f(x_t),y_t)] ≥ 𝔼[min_f∈ℱ 1/T∑_t=1^T ℓ(f(x_t),y_t)].

In light of <ref>, one can interpret regret as generalizing the notion of excess risk from i.i.d. data to arbitrary sequences.

Similar to Lemma <ref> in the setting of statistical learning, the regret for online learning has an additional interpretation in terms of estimation if the outcomes are well-specified.

Suppose that the features x_1,…,x_T are generated in an arbitrary fashion, but that for all t, the outcome y_t is random with mean given by a fixed function f^⋆∈ℱ:

𝔼[y_t|x_t=x] = f^⋆(x).

Then for the problem of prediction with the square loss,

𝔼[Reg_T] ≥ 𝔼[∑_t=1^T (f̂_t(x_t)-f^⋆(x_t))^2].

Notably, this result holds even if the features x_1,…,x_T are generated adversarially, with no prior knowledge of the sequence. This is a significant departure from classical estimation results in statistics, where estimation of an unknown function is typically performed over a fixed, known sequence (“design”) x_1,…,x_T, or with respect to an i.i.d. dataset.

§.§.§ The Exponential Weights Algorithm

The main online learning algorithm we consider is the Exponential Weights algorithm, which is applicable to finite classes ℱ. At each time t, the algorithm computes a distribution q_t∈Δ(ℱ) via

q_t(f) ∝ exp{-η∑_i=1^t-1 ℓ(f(x_i), y_i)},

where η>0 is a learning rate. Based on q_t, the algorithm forms the prediction f̂_t. We give two variants of the method here. The only difference between these variants lies in whether we compute the prediction f̂_t from q_t via

f̂_t = 𝔼_f∼q_t[f], or f̂_t ∼ q_t.

The latter can be applied to any bounded loss function, while the former leads to faster rates for specific losses such as the square loss and log loss, but is only applicable when 𝒴' is convex. Note that the averaged version is inherently improper, while the second is proper, yet randomized. From the point of view of regret, the key difference between these two versions is the placement of "𝔼_f∼q_t": for the averaged version it is inside the loss function, and for the randomized version it is outside (see (<ref>)). The averaged version can therefore take advantage of the structure of the loss function, such as strong convexity, leading to faster rates.

The following result shows that Exponential Weights achieves regret bounds for online learning with rates that parallel those in prop:iid_finite_class.

For any finite class ℱ, the Exponential Weights algorithm (with an appropriate choice of η) satisfies

1/T·Reg_T ≲ comp(ℱ, T)

for any sequence, where:
* For arbitrary bounded losses (including classification), comp(ℱ,T)=√(log|ℱ|/T). This is achieved by the randomized variant.
* For regression with the square loss and conditional density estimation with the log loss, comp(ℱ,T)=log|ℱ|/T. This is achieved by the averaged variant.

A minimal simulation of the randomized variant, in the abstract loss-vector form introduced at the end of this section, is sketched below.
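This sketch simulates the randomized exponential weights update over N experts with synthetic uniform losses; an adversary could supply the loss vectors instead, and all parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 1000
eta = np.sqrt(8 * np.log(N) / T)          # learning rate from Part 1
cum_loss = np.zeros(N)                    # running losses of the experts
alg_loss = 0.0
for t in range(T):
    logits = -eta * cum_loss
    q = np.exp(logits - logits.max())     # stable softmax: q_t(f) prop. to exp(-eta * cum loss)
    q /= q.sum()
    losses = rng.uniform(0, 1, size=N)    # loss vector ell_t (could be adversarial)
    alg_loss += q @ losses                # expected loss <q_t, ell_t>
    cum_loss += losses
regret = alg_loss - cum_loss.min()
print(f"regret: {regret:.1f}, bound sqrt(T log N / 2): {np.sqrt(T * np.log(N) / 2):.1f}")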
We now turn to the proof of <ref>. Since we place no assumptions on the data-generating process, we cannot hope to control the algorithm's loss at any particular time t, but only cumulatively. It is then natural to employ an amortized analysis with a potential function. In more detail, the proof of prop:online_bounds relies on several steps common to standard analyses in online learning: (i) define a potential function; (ii) relate the increase in potential at each time step to the loss of the algorithm; (iii) relate the cumulative loss of any expert f∈ℱ to the final potential. For the Exponential Weights algorithm, the proof relies on the following potential for time t, parameterized by η>0:

Φ_t^η = -log∑_f∈ℱ exp{-η∑_i=1^t ℓ(f(x_i), y_i)}.

The choice of this potential is rather opaque, and a full explanation of its origin is beyond the scope of the course, but we mention in passing that there are principled ways of deriving potentials for general online learning problems. We first prove the second statement, focusing on conditional density estimation with the logarithmic loss; for the square loss, see <ref> below.

Proof for Part 2: Log loss. Recall that for each x, f(x) is a distribution over 𝒴, and ℓ(f(x), y) = -log f(y|x), where we abuse notation and write f(x) and f(·|x) interchangeably. With η=1, the averaged variant of exponential weights satisfies

f̂_t(y_t|x_t) = ∑_f∈ℱ q_t(f) f(y_t|x_t) = ∑_f∈ℱ f(y_t|x_t) exp{-∑_i=1^t-1 ℓ(f(x_i), y_i)} / ∑_f∈ℱ exp{-∑_i=1^t-1 ℓ(f(x_i), y_i)},

and thus

ℓ(f̂_t(x_t),y_t) = -log f̂_t(y_t|x_t) = Φ_t^1 - Φ_t-1^1.

Hence, by telescoping,

∑_t=1^T ℓ(f̂_t(x_t),y_t) = Φ_T^1 - Φ_0^1.

Finally, observe that Φ_0^1 = -log|ℱ| and, since -log is monotonically decreasing, we have

Φ_T^1 ≤ -log exp{-∑_i=1^T ℓ(f(x_i), y_i)} = ∑_i=1^T ℓ(f(x_i), y_i)

for any f∈ℱ. This establishes the result for conditional density estimation with the log loss.

As already discussed, the above proof follows the strategy: the loss on each round is related to the change in potential (<ref>), and the cumulative loss of any expert is related to the final potential (<ref>). We now aim to replicate these steps for arbitrary bounded losses.

Proof for Part 1: Generic loss. To prove this result, we build on the log loss result above. First, observe that without loss of generality, we may assume that ℓ∘f∈[0,1] for all f∈ℱ and (x,y), as we can always re-scale the problem. The randomized variant of exponential weights (<ref>) satisfies

𝔼_f̂_t∼q_t ℓ(f̂_t(x_t), y_t) = ∑_f∈ℱ ℓ(f(x_t), y_t) exp{-η∑_i=1^t-1 ℓ(f(x_i), y_i)} / ∑_f∈ℱ exp{-η∑_i=1^t-1 ℓ(f(x_i), y_i)}.

Hoeffding's inequality (<ref>) implies that

η𝔼_f̂_t∼q_t ℓ(f̂_t(x_t), y_t) ≤ -log[∑_f∈ℱ exp{-ηℓ(f(x_t), y_t)} exp{-η∑_i=1^t-1 ℓ(f(x_i), y_i)} / ∑_f∈ℱ exp{-η∑_i=1^t-1 ℓ(f(x_i), y_i)}] + η^2/8.

Note that the right-hand side of this inequality is simply Φ_t^η - Φ_t-1^η + η^2/8, establishing the analogue of (<ref>). Summing over t gives

η∑_t=1^T 𝔼_f̂_t∼q_t ℓ(f̂_t(x_t), y_t) ≤ Φ_T^η - Φ_0^η + Tη^2/8.

As in the first part, for any f∈ℱ, we can upper bound

Φ_T^η ≤ η∑_t=1^T ℓ(f(x_t), y_t),

while Φ_0^η = -log|ℱ|. Hence, we have that for any f∈ℱ,

∑_t=1^T 𝔼_f̂_t∼q_t ℓ(f̂_t(x_t), y_t) - ℓ(f(x_t),y_t) ≤ Tη/8 + log|ℱ|/η.

With η=√(8log|ℱ|/T), we conclude that

∑_t=1^T 𝔼_f̂_t∼q_t ℓ(f̂_t(x_t), y_t) - ℓ(f(x_t),y_t) ≤ √(Tlog|ℱ|/2).

Observe that Hoeffding's inequality was all that was needed for <ref>. Curiously enough, it was also the only nontrivial step in the proof of prop:online_bounds. In fact, the connection between probabilistic inequalities and online learning regret inequalities (which hold for arbitrary sequences) runs much deeper.

As in statistical learning, there are (sequential) complexity measures for ℱ that can be used to generalize the regret bounds in prop:online_bounds to infinite classes. In general, the optimal regret for a class ℱ will reflect its statistical capacity <cit.>.

We did not provide a proof of prop:online_bounds for the square loss.
It is tempting to reduce square loss regression to density estimation by taking the conditional density to be Gaussian. Indeed, the log loss of a distribution with density proportional to exp{-(f̂_t(x_t)-y_t)^2} is, up to constants, the desired square loss. However, the mixture in (<ref>) does not immediately lead to a prediction strategy for the square loss, as the expectation appears in the wrong location. This issue is fixed by a notion known as mixability.

We say that a loss ℓ is mixable with parameter η if there exists a constant c>0 such that the following holds: for any x and any distribution q∈Δ(ℱ), there exists a prediction f̂(x)∈𝒴' such that for all y∈𝒴,

ℓ(f̂(x),y) ≤ -c/η log(∑_f∈ℱ q(f) exp{-ηℓ(f(x),y)}).

If the loss is mixable, then given the exponential weights distribution q_t, the prediction ŷ_t = f̂_t(x_t) can be written (by bringing the right-hand side of (<ref>) to the left side) as the solution to the optimization problem

argmin_ŷ_t∈𝒴' max_y_t∈𝒴 [ℓ(ŷ_t,y_t) + c/η log(∑_f∈ℱ q_t(f) exp{-ηℓ(f(x_t),y_t)})],

which is equivalent to

argmin_ŷ_t∈𝒴' max_y_t∈𝒴 [ℓ(ŷ_t,y_t) + c/η log(∑_f∈ℱ exp{-η∑_i=1^t ℓ(f(x_i),y_i)})]

once we drop the normalization factor. With this choice, mixability allows one to replicate the proof of <ref> for the logarithmic loss, with the only difference being that (<ref>) (after applying -log to both sides) becomes an inequality. It can be verified that the square loss is mixable with parameters η=2 and c=1 when 𝒴=𝒴'=[0,1], leading to the desired fast rate for the square loss in prop:online_bounds. The idea of translating the English statement “there exists a strategy such that for any outcome...” into a min-max inequality will come up again in the course.

For the slow rate in prop:online_bounds, the nature of the loss and the dependence on the function f are immaterial to the proof. The guarantee can be stated in a more abstract form that depends only on the vector of losses of the functions in ℱ, as follows. Let |ℱ|=N. For timestep t, define ℓ_t(f) = ℓ(f(x_t),y_t) and ℓ_t = (ℓ_t(f_1),…,ℓ_t(f_N))∈ℝ^N for ℱ={f_1,…,f_N}. For a randomized strategy q_t∈Δ({1,…,N}), the expected loss of the learner can be written as

𝔼_f̂_t∼q_t ℓ(f̂_t(x_t),y_t) = ⟨q_t, ℓ_t⟩,

and the expected regret can be written as

Reg_T = ∑_t=1^T ⟨q_t, ℓ_t⟩ - min_j∈{1,…,N} ∑_t=1^T ⟨e_j, ℓ_t⟩,

where e_j∈ℝ^N is the standard basis vector with 1 in the j-th position. In its most general form, the exponential weights algorithm bounds the regret in eq:vector_regret for any sequence of loss vectors ℓ_1,…,ℓ_T, and the update takes the form

q_t(k) ∝ exp{-η∑_i=1^t-1 ℓ_i(k)}.

This formulation can be viewed as a special case of a problem known as online linear optimization, and the exponential weights method can be viewed as an instance of an algorithm known as mirror descent.

§.§ Exercises

[prop:iid_finite_class, Part 2.] Consider the setting of <ref>, where (x_1,y_1),…,(x_T,y_T) are i.i.d., ℱ={f:𝒳→[0,1]} is finite, the true regression function satisfies f^⋆∈ℱ, and y_i∈[0,1] almost surely. Prove that the empirical risk minimizer f̂ with respect to the square loss satisfies the following bound on the excess risk: with probability at least 1-δ,

ℰ(f̂) ≲ log(|ℱ|/δ)/T.

Follow these steps:

* For a fixed function f∈ℱ, consider the random variable

Z_i(f) = (f(x_i)-y_i)^2 - (f^⋆(x_i)-y_i)^2

for i=1,…,T. Show that 𝔼[Z_i(f)] = 𝔼[(f(x_i)-f^⋆(x_i))^2] = ℰ(f).

* Show that for any fixed f∈ℱ, the variance Var(Z_i(f)) is bounded as

Var(Z_i(f)) ≤ 4𝔼[(f(x_i)-f^⋆(x_i))^2].
* Apply Bernstein's inequality (lem:bernstein) to show that for any fixed f∈ℱ, with probability at least 1-δ,

ℰ(f) ≤ 2(L̂(f) - L̂(f^*)) + C log(1/δ)/T

for an absolute constant C, where L̂(f) = (1/T) ∑_{t=1}^T (f(x_t)-y_t)².
* Extend this probabilistic inequality to hold simultaneously for all f∈ℱ by taking the union bound over f∈ℱ. Conclude as a consequence that the bound holds for f̂, the empirical minimizer, implying (<ref>).

[ERM in Online Learning] Consider the problem of Online Supervised Learning with the indicator loss ℓ(f(x),y) = 𝟙{f(x)≠y}, 𝒴=𝒴'={0,1}, and a finite class ℱ.
* Exhibit a class ℱ for which ERM cannot ensure sublinear growth of regret for all sequences, i.e., there exists a sequence (x_1,y_1),…,(x_T,y_T) such that

∑_{t=1}^T ℓ(f̂_t(x_t), y_t) - min_{f∈ℱ} ∑_{t=1}^T ℓ(f(x_t), y_t) = Ω(T),

where f̂_t is the empirical minimizer for the indicator loss on (x_1,y_1),…,(x_{t-1},y_{t-1}). Note: the construction must have |ℱ| ≤ C, where C is an absolute constant that does not depend on T.
* Show that if data are i.i.d., then in expectation over the data, ERM attains a sublinear bound O(√(T log|ℱ|)) on regret for any finite class ℱ.

[Low Noise]
* For a nonnegative random variable X, prove that for any η≥0,

log 𝔼 exp{-η(X - 𝔼X)} ≤ (η²/2) 𝔼X².

Hint: use the facts that log x ≤ x-1 and exp(-x) ≤ 1-x+x²/2 for x≥0.
* Consider the setting of prop:online_bounds, Part 1 (Generic Loss). Prove that the randomized variant of the Exponential Weights Algorithm satisfies, for any f∈ℱ,

∑_{t=1}^T 𝔼_{ŷ_t∼q_t} ℓ(ŷ_t(x_t), y_t) - ℓ(f(x_t), y_t) ≤ (η/2) ∑_{t=1}^T 𝔼_{ŷ_t∼q_t} ℓ(ŷ_t(x_t), y_t)² + log|ℱ|/η

for any sequence of data and nonnegative losses. Hint: replace Hoeffding's Lemma by (<ref>).
* Suppose ℓ(f(x),y)∈[0,1] for all x∈𝒳, y∈𝒴, and f∈ℱ. Suppose that there is a “perfect expert” f^*∈ℱ such that ℓ(f^*(x_t), y_t)=0 for all t∈[T]. Conclude that the above algorithm, with an appropriate choice of η, enjoys a bound of O(log|ℱ|) on the cumulative loss of the algorithm (equivalently, the fast rate log|ℱ|/T for the average regret). This setting is called “zero-noise.”
* Consider the binary classification problem with the indicator loss, and suppose ℱ contains a perfect expert, as above. The Halving Algorithm maintains a version space ℱ_t = {f∈ℱ : f(x_s)=y_s ∀s<t} and, given x_t, follows the majority vote of the remaining experts in ℱ_t. Show that this algorithm incurs cumulative loss at most O(log|ℱ|). Hence, the Exponential Weights Algorithm can be viewed as an extension of the Halving Algorithm to settings where the optimal loss is non-zero.

§ MULTI-ARMED BANDITS

This chapter introduces the multi-armed bandit problem, which is the simplest interactive decision making framework we will consider in this course. The protocol (see above) proceeds in T rounds. At each round t∈[T], the learning agent selects a discrete decision[In the literature on bandits, decisions are often referred to as actions. We will use these terms interchangeably throughout this section.] π_t∈Π={1,…,A} using the data

𝒟_{t-1} = {(π_1, r_1),…,(π_{t-1}, r_{t-1})}

collected so far; we refer to Π as the decision space or action space, with A∈ℕ denoting the size of the space. We allow the learner to randomize the decision at step t according to a distribution p_t = p_t(·|𝒟_{t-1}), sampling π_t∼p_t. Based on the decision π_t, the learner receives a reward r_t, and their goal is to maximize the cumulative reward across all T rounds.
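In code, this interaction protocol is a short loop. The sketch below fixes illustrative choices of our own—a Bernoulli reward model and a minimal `select`/`update` agent interface—that the algorithm sketches later in this chapter will reuse; none of these names come from the course text.

```python
import numpy as np

class BernoulliBandit:
    """Stochastic environment: r_t ~ Ber(mu[pi_t]), an instance of the
    stochastic rewards assumption with M*(.|pi) = Ber(mu[pi])."""
    def __init__(self, mu, seed=0):
        self.mu = np.asarray(mu)
        self.rng = np.random.default_rng(seed)
    def pull(self, arm):
        return float(self.rng.random() < self.mu[arm])

def run(agent, env, T):
    """Generic bandit protocol: select pi_t from the history, observe only r_t."""
    rewards = []
    for t in range(T):
        arm = agent.select()        # pi_t ~ p_t(. | D_{t-1})
        r = env.pull(arm)           # bandit feedback: only r_t(pi_t) is revealed
        agent.update(arm, r)
        rewards.append(r)
    # empirical counterpart of the regret definition below, with f* = mu
    return T * env.mu.max() - np.sum(rewards)
```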
As an example, one might consider an application in which the learner is a doctor (or personalized medical assistant) who aims to select a treatment (the decision) in order to make a patient feel better (maximize reward); see fig:mab.

The multi-armed bandit problem can be studied in a stochastic framework, in which rewards are generated from a fixed (conditional) distribution, or in a non-stochastic/adversarial framework in the vein of online learning (sec:ol). We will focus on the stochastic framework, and make the following assumption.

[Stochastic Rewards] Rewards are generated independently via

r_t ∼ M^*(·|π_t),

where M^*(·|·) is the underlying model (conditional distribution).

We define

f^*(π) ≜ 𝔼[r|π]

as the mean reward function under r∼M^*(·|π). We measure the learner's performance via regret to the action π^* = argmax_{π∈Π} f^*(π) with the highest reward:

Reg ≜ ∑_{t=1}^T f^*(π^*) - ∑_{t=1}^T 𝔼_{π_t∼p_t}[f^*(π_t)].

Regret is a natural notion of performance for the multi-armed bandit problem because it is cumulative: it measures not just how well the learner can identify an action with good reward, but how well it can maximize reward as it goes. This notion is well-suited to settings like the personalized medicine example in fig:mab, where regret captures the overall quality of treatments, not just the quality of the final treatment. As in the online learning framework, we would like to develop algorithms that enjoy sublinear regret, i.e.,

Reg/T → 0 as T→∞.

The most important feature of the multi-armed bandit problem, and what makes the problem fundamentally interactive, is that the learner only receives a reward signal for the single decision π_t∈Π they select at each round. That is, the observed reward r_t gives a noisy estimate for f^*(π_t), but reveals no information about the rewards for other decisions π≠π_t. For example, in fig:mab, if the doctor prescribes a particular treatment to the patient, they can observe whether the patient responds favorably, but they do not directly observe whether other possible treatments might have led to an even better outcome. This issue is often referred to as partial feedback or bandit feedback. Partial feedback introduces an element of active data collection, as it means that the information contained in the dataset 𝒟_t depends on the decisions made by the learner, which, as we will see, necessitates exploring different actions. This should be contrasted with statistical learning (where the dataset is generated independently of the learner) and online learning (where losses may be chosen by nature in response to the learner's behavior, but where the outcome y_t—and hence the full loss function ℓ(·,y_t)—is always revealed).

In the context of fig:axes, the multi-armed bandit problem constitutes our first step along the “interactivity” axis, but does not incorporate any structure in the decision space (and does not involve features/contexts/covariates). In particular, information about one action does not reveal information about any other action, so there is no hope of using function approximation to generalize across actions.[Another way to say this is that we take ℱ = [0,1]^A, so that f^*∈ℱ.] As a result, the algorithms we will cover in this section will have regret that scales with the number of actions |Π| = A.
This shortcoming is addressed by the structured bandit framework we will introduce in sec:structured, which allows for the use of function approximation to model structure in the decision space.[Throughout the lecture notes, we will exclusively use the term “multi-armed bandit” to refer to bandit problems with finite action spaces, and use the term “structured bandit” for problems with large action spaces.]

It is also reasonable to consider empirical regret, defined as

max_{π∈Π} ∑_{t=1}^T r_t(π) - ∑_{t=1}^T r_t(π_t),

where, for π≠π_t, r_t(π) denotes the counterfactual reward the learner would have received if they had played π at round t. Using Hoeffding's inequality, one can show that this is equivalent to the definition in eq:regret_mab up to O(√T) factors.

§.§ The Need for Exploration

In statistical learning, we saw that the empirical risk minimization algorithm, which greedily chooses the function that best fits the data, leads to interesting bounds on excess risk. For multi-armed bandits, since we assume the data generating process is stochastic, a natural first attempt at designing an algorithm is to apply the greedy principle here in the same fashion. Concretely, at time t, we can compute an empirical estimate of the reward function f^* via

f̂_t(π) = (1/n_t(π)) ∑_{s<t} r_s 𝟙{π_s=π},

where n_t(π) is the number of times π has been selected up to time t.[If n_t(π)=0, we will set f̂_t(π)=0.] Then, we can choose the greedy action

π̂_t = argmax_{π∈Π} f̂_t(π).

Unfortunately, due to the interactive nature of the bandit problem, this strategy can fail, leading to linear regret (Reg = Ω(T)). Consider the following problem with Π={1,2} (A=2):
* Decision 1 has reward 1/2 almost surely.
* Decision 2 has reward Ber(3/4).
Suppose we initialize by playing each decision a single time to ensure that n_t(π)>0, then follow the greedy strategy. With probability 1/4, decision 2 returns zero reward on its initial pull; in that case its estimate stays pinned at 0 while decision 1 is played forever, so the greedy algorithm gets stuck on action 1, leading to regret Ω(T).

The issue in this example is that the greedy algorithm immediately gives up on the optimal action and never revisits it. To address this, we will consider algorithms that deliberately explore less visited actions to ensure that their estimated rewards are not misleading.

§.§ The ε-Greedy Algorithm

The greedy algorithm for bandits can fail because it may insufficiently explore good decisions that initially seem bad, leading it to get stuck playing suboptimal decisions. In light of this failure, a reasonable solution is to manually force the algorithm to explore, so as to ensure that this situation never occurs. This leads us to what is known as the ε-Greedy algorithm (e.g., <cit.>).

Let ε∈(0,1) be the exploration parameter. At each time t∈[T], the algorithm computes the estimated reward function f̂_t as in eq:mab_mean. With probability 1-ε, the algorithm chooses the greedy decision

π̂_t = argmax_π f̂_t(π),

and with probability ε it samples a uniform random action π_t∼unif({1,…,A}). As the name suggests, ε-Greedy usually plays the greedy action (exploiting what it has already learned), but the uniform sampling ensures that the algorithm will also explore unseen actions. We can think of the parameter ε as modulating the tradeoff between exploiting and exploring.

Assume that f^*(π)∈[0,1] and that r_t is 1-subGaussian. Then for any T, by choosing ε appropriately, the ε-Greedy algorithm ensures that with probability at least 1-δ,

Reg ≲ A^{1/3} T^{2/3} · log^{1/3}(AT/δ).

This regret bound has Reg/T→0 as T→∞ as desired, though we will see in the sequel that more sophisticated strategies can attain improved regret bounds that scale with √(AT).[Note that √(AT) ≤ A^{1/3}T^{2/3} whenever A≤T, and when A≥T both guarantees are vacuous.]
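Before turning to the proof, here is ε-Greedy expressed against the protocol skeleton above (again an illustrative sketch of our own; the `run`/`BernoulliBandit` helpers and the setting of ε following the balancing step of the proof below are assumptions for the example).

```python
import numpy as np

class EpsGreedy:
    """Sketch of the eps-Greedy strategy for A arms; assumes rewards in [0, 1]."""
    def __init__(self, A, eps, seed=0):
        self.A, self.eps = A, eps
        self.counts = np.zeros(A)   # n_t(pi)
        self.sums = np.zeros(A)     # cumulative reward per arm
        self.rng = np.random.default_rng(seed)
    def select(self):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.A))      # explore uniformly
        fhat = np.where(self.counts > 0,
                        self.sums / np.maximum(self.counts, 1), 0.0)
        return int(np.argmax(fhat))                    # greedy action on fhat_t
    def update(self, arm, r):
        self.counts[arm] += 1
        self.sums[arm] += r

# eps ~ (A log(AT) / T)^{1/3}, mirroring the balancing step in the proof
T, A = 10_000, 5
eps = (A * np.log(A * T) / T) ** (1 / 3)
agent = EpsGreedy(A, eps)
print(run(agent, BernoulliBandit([0.2, 0.4, 0.5, 0.6, 0.7]), T))  # empirical regret
```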
Recall that π̂_t = argmax_π f̂_t(π) denotes the greedy action at round t, and that p_t denotes the distribution over π_t. We can decompose the regret into two terms, representing the contribution from choosing the greedy action and the contribution from exploring uniformly:

Reg = ∑_{t=1}^T 𝔼_{π_t∼p_t}[f^*(π^*) - f^*(π_t)]
    = (1-ε) ∑_{t=1}^T f^*(π^*) - f^*(π̂_t) + ε ∑_{t=1}^T 𝔼_{π_t∼unif(A)}[f^*(π^*) - f^*(π_t)]
    ≤ ∑_{t=1}^T f^*(π^*) - f^*(π̂_t) + εT.

In the last inequality, we have simply written off the contribution from exploring uniformly by using that f^*(π)∈[0,1]. It remains to bound the regret we incur from playing the greedy action. Here, we bound the per-step regret in terms of estimation error using a decomposition similar to lem:erm_uniform_dev (note that we are now working with rewards rather than losses):

f^*(π^*) - f^*(π̂_t) = [f^*(π^*) - f̂_t(π^*)] + [f̂_t(π^*) - f̂_t(π̂_t)] + [f̂_t(π̂_t) - f^*(π̂_t)]
≤ [f^*(π^*) - f̂_t(π^*)] + 0 + [f̂_t(π̂_t) - f^*(π̂_t)]
≤ 2 max_{π∈{π^*, π̂_t}} |f^*(π) - f̂_t(π)| ≤ 2 max_π |f^*(π) - f̂_t(π)|,

where the first inequality uses that f̂_t(π^*) - f̂_t(π̂_t) ≤ 0 by the definition of π̂_t. Note that this regret decomposition can also be applied to the pure greedy algorithm, which we have already shown can fail. The reason why ε-Greedy succeeds, which we use in the argument that follows, is that because we explore, the “effective” number of times that each arm will be pulled prior to round t is of the order εt/A, which will ensure that the sample mean converges to f^*. In particular, we will show that the event

ℰ_t = { max_π |f^*(π) - f̂_t(π)| ≲ √(A log(AT/δ)/(εt)) }

occurs for all t with probability at least 1-δ.

To prove that <ref> holds, we first use Hoeffding's inequality for adaptive stopping times (lem:hoeffding_adaptive), which gives that for any fixed π, with probability at least 1-δ over the draw of rewards,

|f^*(π) - f̂_t(π)| ≤ √(2 log(2T/δ)/n_t(π)).

From here, taking a union bound over all t∈[T] and π∈Π ensures that

|f^*(π) - f̂_t(π)| ≤ √(2 log(2AT²/δ)/n_t(π))

for all π and t simultaneously. It remains to show that the number of pulls n_t(π) is sufficiently large. Let e_t∈{0,1} be a random variable whose value indicates whether the algorithm explored uniformly at step t, and let m_t(π) = |{i<t : π_i=π, e_i=1}|, which satisfies n_t(π) ≥ m_t(π). Let Z_t = 𝟙{π_t=π, e_t=1}. Observe that we can write

m_t(π) = ∑_{i<t} Z_i.

In addition, Z_t ∼ Ber(ε/A), so we have 𝔼[m_t(π)] = (t-1)ε/A. Using Bernstein's inequality (lem:bernstein) with Z_1,…,Z_{t-1}, we have that for any fixed π and all u>0, with probability at least 1-2e^{-u},

𝔼[m_t(π)] - m_t(π) ≤ √(2 Var(Z)(t-1)u) + u/3 ≤ √(2ε(t-1)u/A) + u/3 ≤ ε(t-1)/(2A) + 4u/3,

where we have used that Var(Z) = ε/A·(1-ε/A) ≤ ε/A, and then applied the arithmetic mean–geometric mean (AM-GM) inequality, which states that √(xy) ≤ x/2 + y/2 for x,y≥0. Rearranging, this gives

m_t(π) ≥ ε(t-1)/(2A) - 4u/3.

Setting u = log(2AT/δ) and taking a union bound, we are guaranteed that with probability at least 1-δ, for all π∈Π and t∈[T],

m_t(π) ≥ ε(t-1)/(2A) - 4log(2AT/δ)/3.

As long as t ≳ (A/ε)log(AT/δ) (we can write off the rounds where this does not hold), this yields n_t(π) ≥ m_t(π) ≳ εt/A. Combining with eq:mab_egreedy_proof1, this implies that with probability at least 1-δ, for all t,

max_π |f^*(π) - f̂_t(π)| ≲ √(A log(AT/δ)/(εt)),

which leads to the overall regret bound

Reg ≤ 2∑_{t=1}^T max_π |f^*(π) - f̂_t(π)| + εT ≲ ∑_{t=1}^T √(A log(AT/δ)/(εt)) + εT ≲ √(AT log(AT/δ)/ε) + εT.

To balance the terms on the right-hand side, we set

ε ∝ (A log(AT/δ)/T)^{1/3},

which gives the final result.

This proof shows that the ε-Greedy strategy allows the learner to acquire information uniformly for all actions, but we pay for this in terms of regret (specifically, through the εT factor in the final regret bound <ref>). The issue here is that the strategy continually explores all actions, even though we might expect to rule out actions with very low reward after a relatively small amount of exploration.
To address this shortcoming, we will consider more adaptive strategies.

A relative of ε-Greedy is the explore-then-commit (ETC) algorithm (e.g., <cit.>), which uniformly explores actions for the first N rounds, then estimates rewards based on the data collected and commits to the greedy action for the remaining T-N rounds. This strategy can be shown to attain Reg ≲ A^{1/3}T^{2/3} for an appropriate choice of N, matching ε-Greedy.

§.§ The Upper Confidence Bound (UCB) Algorithm

The next algorithm we will study for bandits is the Upper Confidence Bound (UCB) algorithm <cit.>. The algorithm attains a regret bound of the order √(AT) (up to logarithmic factors), which improves upon the regret bound for ε-Greedy, and is optimal (in a worst-case sense) up to logarithmic factors. In addition to optimality, the algorithm offers several secondary benefits, including adaptivity to favorable structure in the underlying reward function.

The UCB algorithm is based on the notion of optimism in the face of uncertainty, which is a general principle we will revisit throughout this text in increasingly rich settings. The idea behind the principle is that at each time t, we should adopt the most optimistic perspective of the world possible given the data collected so far, and then choose the decision π_t based on this perspective.

To apply the idea of optimism to the multi-armed bandit problem, suppose that for each step t, we can construct “confidence intervals”

f̲_t, f̄_t : Π→ℝ

with the following property: with probability at least 1-δ,

∀t∈[T], π∈Π,   f^*(π) ∈ [f̲_t(π), f̄_t(π)].

We refer to f̲_t as a lower confidence bound and f̄_t as an upper confidence bound, since we are guaranteed that with high probability, they lower (resp. upper) bound f^*. Given confidence intervals, the UCB algorithm simply chooses π_t as the “optimistic” action that maximizes the upper confidence bound:

π_t = argmax_{π∈Π} f̄_t(π).

The following lemma shows that the instantaneous regret of this strategy is bounded by the width of the confidence interval; see fig:confidence_width for an illustration.

Fix t, and suppose that f^*(π) ∈ [f̲_t(π), f̄_t(π)] for all π. Then the optimistic action π_t = argmax_{π∈Π} f̄_t(π) has

f^*(π^*) - f^*(π_t) ≤ f̄_t(π_t) - f^*(π_t) ≤ f̄_t(π_t) - f̲_t(π_t).

The result follows immediately from the observation that for any t∈[T], we have

f^*(π^*) ≤ f̄_t(π^*) ≤ f̄_t(π_t)   and   -f^*(π_t) ≤ -f̲_t(π_t).

lem:regret_optimistic implies that as long as we can build confidence intervals for which the width f̄_t(π_t) - f̲_t(π_t) shrinks, the regret of the UCB strategy will be small. To construct such intervals, here we appeal to Hoeffding's inequality for adaptive stopping times (lem:hoeffding_adaptive).[While asymptotic confidence intervals in classical statistics arise from limit theorems, we are interested in valid non-asymptotic intervals, and thus appeal to concentration inequalities.] As long as r_t∈[0,1], a union bound gives that with probability at least 1-δ, for all t∈[T] and π∈Π,

|f̂_t(π) - f^*(π)| ≤ √(2 log(2T²A/δ)/n_t(π)),

where we recall that f̂_t is the sample mean and n_t(π) ≜ ∑_{i<t} 𝟙{π_i=π}. This suggests that by choosing

f̄_t(π) = f̂_t(π) + √(2 log(2T²A/δ)/n_t(π)),   f̲_t(π) = f̂_t(π) - √(2 log(2T²A/δ)/n_t(π)),

we obtain a valid confidence interval. With this choice—along with lem:regret_optimistic—we are in a favorable position, because for a given round t, one of two things must happen:
* The optimistic action has high reward, so the instantaneous regret is small.
* The instantaneous regret is large, which by lem:regret_optimistic implies that the confidence width is large as well (and n_t(π_t) is small).
This can only happen a small number of times, since n_t(π_t) will increase as a result, causing the width to shrink. Using this idea, we can prove the following regret bound.

Using the confidence bounds in eq:ucb_confidence_bound, the UCB algorithm ensures that with probability at least 1-δ,

Reg ≲ √(AT log(AT/δ)).

This result is optimal up to the log(AT) factor, which can be removed by using the same algorithm with a slightly more sophisticated confidence interval construction <cit.>. Note that compared to the statistical learning and online learning settings, where we were able to attain regret bounds that scaled logarithmically with the size of the benchmark class, here the optimal regret scales polynomially with |Π| = A. This is the price we pay for partial/bandit feedback, and reflects the fact that we must explore all actions to learn.

Let us condition on the event in eq:ucb_good_event. Whenever this occurs, we have that f^*(π) ∈ [f̲_t(π), f̄_t(π)] for all t∈[T] and π∈Π, so the confidence intervals are valid. As a result, lem:regret_optimistic bounds the regret in terms of the confidence width:

∑_{t=1}^T f^*(π^*) - f^*(π_t) ≤ ∑_{t=1}^T f̄_t(π_t) - f̲_t(π_t) = ∑_{t=1}^T 2√(2 log(2T²A/δ)/n_t(π_t)) ∧ 1;

here, the “∧1” appears because we can write off the regret for early rounds where n_t(π_t)=0 as 1.

To bound the right-hand side, we use a potential argument. The basic idea is that at every round, n_t(π) must increase for some action π, and since there are only A actions, this means that 1/√(n_t(π_t)) can only be large for a small number of rounds. This can be thought of as a quantitative instance of the pigeonhole principle.

We have

∑_{t=1}^T 1/√(n_t(π_t)) ∧ 1 ≲ √(AT).

We begin by writing

∑_{t=1}^T 1/√(n_t(π_t)) ∧ 1 = ∑_π ∑_{t=1}^T 𝟙{π_t=π}/√(n_t(π)) ∧ 1 = ∑_π ∑_{t=1}^{n_{T+1}(π)} 1/√(t-1) ∧ 1.

For any n∈ℕ, we have ∑_{t=1}^n 1/√(t-1) ∧ 1 ≤ 1 + 2√n, which allows us to bound the display above by

A + 2∑_π √(n_{T+1}(π)).

The factor of A above is a lower-order term (recall that we have A ≤ √(AT) whenever A≤T, and if A>T the regret bound we are proving is vacuous). To bound the second term, using Jensen's inequality, we have

∑_π √(n_{T+1}(π)) ≤ A√(∑_π n_{T+1}(π)/A) = A√(T/A) = √(AT).

The main regret bound now follows from lem:confidence_width_potential and <ref>.

To summarize, the key steps in the proof of prop:ucb were to:
* Use the optimistic property and validity of the confidence bounds to bound the regret by the sum of confidence widths.
* Use a potential argument to show that the sum of confidence widths is small.
We will revisit and generalize both ideas in subsequent chapters for more sophisticated settings, including contextual bandits, structured bandits, and reinforcement learning.

The √(AT)-type regret bound attained by UCB holds uniformly for all models, and is (nearly) minimax-optimal, in the sense that for any algorithm, there exists a model for which the regret must scale as Ω(√(AT)). Minimax optimality is a useful notion of performance, but may be overly pessimistic. As an alternative, it is possible to show that UCB attains what is known as an instance-dependent regret bound, which adapts to the underlying reward function, and can be smaller for “nice” problem instances.

Let Δ(π) ≜ f^*(π^*) - f^*(π) be the suboptimality gap for decision π. Then, when f^*(π)∈[0,1], UCB can be shown to achieve

Reg ≲ ∑_{π: Δ(π)>0} log(AT/δ)/Δ(π).

If we keep the underlying model fixed and take T→∞, this regret bound scales only logarithmically in T, which improves upon the √T-scaling of the minimax regret bound.
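To make the algorithm concrete, here is a sketch in the same style as the earlier skeletons (our own illustration; the width constant follows eq:ucb_confidence_bound, and the `run`/`BernoulliBandit` helpers from the protocol sketch are assumed).

```python
import numpy as np

class UCB:
    """Sketch of UCB with the Hoeffding-based confidence bounds above."""
    def __init__(self, A, T, delta=0.05):
        self.A, self.T, self.delta = A, T, delta
        self.counts = np.zeros(A)   # n_t(pi)
        self.sums = np.zeros(A)
    def select(self):
        untried = np.flatnonzero(self.counts == 0)
        if untried.size:                 # width is infinite until n_t(pi) > 0
            return int(untried[0])
        fhat = self.sums / self.counts
        width = np.sqrt(2 * np.log(2 * self.T**2 * self.A / self.delta)
                        / self.counts)
        return int(np.argmax(fhat + width))  # optimism: maximize fbar_t(pi)
    def update(self, arm, r):
        self.counts[arm] += 1
        self.sums[arm] += r

# e.g. print(run(UCB(5, 10_000), BernoulliBandit([0.2, 0.4, 0.5, 0.6, 0.7]), 10_000))
```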
§.§ Bayesian Bandits and the Posterior Sampling Algorithm

Up to this point, we have been designing and analyzing algorithms from a frequentist viewpoint, in which we aim to minimize regret for a worst-case choice of the underlying model M^*. An alternative is to adopt a Bayesian viewpoint, and assume that the underlying model is drawn from a known prior μ∈Δ(ℳ).[It is important that μ is known; otherwise this is no different from the frequentist setting.] In this case, rather than worst-case performance, we will be concerned with the average regret under the prior, defined via

Reg(μ) ≜ 𝔼_{M^*∼μ}[Reg],

where Reg denotes the algorithm's expected regret when M^* is the underlying reward distribution.

Working in the Bayesian setting opens up additional avenues for designing algorithms, because we can take advantage of our knowledge of the prior to compute quantities of interest that are not available in the frequentist setting, such as the posterior distribution over M^* after observing the dataset 𝒟_{t-1}. The most basic and well-known strategy here is posterior sampling (also known as Thompson sampling or probability matching) <cit.>. The basic idea is as follows. At each time t, we can use our knowledge of the prior to compute the distribution

p_t(π) = ℙ(π^* = π | 𝒟_{t-1}),

which represents the posterior distribution over π^* given all of the data we have collected from rounds 1,…,t-1. The posterior sampling algorithm simply samples the learner's action π_t from this distribution, thereby “matching” the posterior distribution of π^*.

For any prior μ, the posterior sampling algorithm ensures that

Reg(μ) ≤ √(AT log(A)).

In what follows, we prove a simplified version of <ref>; the full proof is given in <ref>. We will make the following simplifying assumptions:
* We restrict to reward distributions where M^*(·|π) = N(f^*(π), 1). That is, f^* is the only part of the reward distribution that is unknown.
* f^* belongs to a known class ℱ, and rather than proving the regret bound in prop:posterior_mab, we will prove a bound of the form

Reg(μ) ≲ √(AT log|ℱ|),

which replaces the log A factor in the proposition with log|ℱ|.

Since the mean reward function f^* is the only part of the reward distribution M^* that is unknown, we can simplify by considering an equivalent formulation where the prior has the form μ∈Δ(ℱ). That is, we have a prior over f^* rather than M^*.

Before proceeding, let us introduce some notation. The process through which we sample f^*∼μ and then run the bandit algorithm induces a joint law over (f^*, 𝒟_T), which we call ℙ. Throughout the proof, we use 𝔼[·] to denote the expectation under this law. We also define 𝔼_t[·] = 𝔼[·|𝒟_t] and ℙ_t(·) = ℙ(·|𝒟_t).

We begin by using the law of total expectation to express the expected regret as

Reg(μ) = 𝔼[∑_{t=1}^T 𝔼_{t-1}[f^*(π_{f^*}) - f^*(π_t)]].

Above, we have written π^* = π_{f^*} to make explicit the fact that this is a random variable whose value is a function of f^*.

We first simplify the expected regret for each step t. Let μ_t(f) ≜ ℙ(f^*=f | 𝒟_{t-1}) be the posterior distribution at timestep t. The learner's decision π_t is conditionally independent of f^* given 𝒟_{t-1}, so we can write

𝔼_{t-1}[f^*(π_{f^*}) - f^*(π_t)] = 𝔼_{f∼μ_t, π_t∼p_t}[f(π_f) - f(π_t)].

If we define f̄_t(π) = 𝔼_{f∼μ_t}[f(π)] as the expected reward function under the posterior, we can further write this as

𝔼_{f∼μ_t, π_t∼p_t}[f(π_f) - f̄_t(π_t)].

By the design of the posterior sampling algorithm, π_t∼p_t is identical in distribution to π_f under f∼μ_t, so this is equal to

𝔼_{f∼μ_t}[f(π_f) - f̄_t(π_f)].

This quantity captures—on average—how far a given realization of f^* deviates from the posterior mean f̄_t, at the specific decision π_f, which is coupled to f.
The expression above might appear to be unrelated to the learner's decision distribution, but the next lemma shows that it is possible to relate this quantity back to the learner's decision distribution using a notion of information gain (or, estimation error).

For any function f̄:Π→ℝ, it holds that

𝔼_{f∼μ_t}[f(π_f) - f̄(π_f)] ≤ √(A·𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[(f(π_t) - f̄(π_t))²]).

We will show a more general result. Namely, for any ν∈Δ(ℱ) and f̄:Π→ℝ, if we define p(π) = ℙ_{f∼ν}(π_f = π), then

𝔼_{f∼ν}[f(π_f) - f̄(π_f)] ≤ √(A·𝔼_{f∼ν} 𝔼_{π∼p}[(f(π) - f̄(π))²]).

This can be thought of as a “decoupling” lemma. On the left-hand side, the random variables f and π_f are coupled, but on the right-hand side, π is drawn from the marginal distribution over π_f, independently of the draw of f itself.

To prove the result, we use the Cauchy–Schwarz inequality as follows:

𝔼_{f∼ν}[f(π_f) - f̄(π_f)] = 𝔼_{f∼ν}[(p^{1/2}(π_f)/p^{1/2}(π_f))·(f(π_f) - f̄(π_f))]
≤ (𝔼_{f∼ν}[1/p(π_f)])^{1/2} · (𝔼_{f∼ν}[p(π_f)·(f(π_f) - f̄(π_f))²])^{1/2}.

For the first term, we have

𝔼_{f∼ν}[1/p(π_f)] = ∑_f ν(f)/p(π_f) = ∑_π ∑_{f: π_f=π} ν(f)/p(π) = ∑_π p(π)/p(π) = A.

For the second term, we have

𝔼_{f∼ν}[p(π_f)·(f(π_f) - f̄(π_f))²] ≤ 𝔼_{f∼ν}[∑_π p(π)·(f(π) - f̄(π))²] = 𝔼_{f∼ν} 𝔼_{π∼p}[(f(π) - f̄(π))²].

Putting these bounds together yields eq:decoupling_general.

Using lem:mab_decoupling_basic, we have

Reg(μ) ≤ 𝔼[∑_{t=1}^T √(A·𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[(f(π_t) - f̄_t(π_t))²])]
≤ √(AT·𝔼[∑_{t=1}^T 𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[(f(π_t) - f̄_t(π_t))²]]).

To finish up, we will show that

𝔼[∑_{t=1}^T 𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[(f(π_t) - f̄_t(π_t))²]] ≲ log|ℱ|.

To do this, we need some additional information-theoretic tools:
* For a random variable X with distribution ℙ (and mass function p), the entropy is H(X) ≡ H(ℙ) ≜ ∑_x p(x) log(1/p(x)).
* For random variables X and Y, H(X|Y=y) ≜ H(ℙ_{X|Y=y}) and H(X|Y) ≜ 𝔼_{y∼ℙ_Y}[H(X|Y=y)].
* For distributions ℙ and ℚ, KL(ℙ‖ℚ) = ∑_x p(x) log(p(x)/q(x)).

To keep notation as clear as possible going forward, we use boldface symbols for the abstract random variables under consideration (the decisions, rewards, dataset, and f^*) and non-boldface symbols for their realizations. Our aim will be to use the conditional entropy H(f^*|𝒟_t) as a potential function, and to show that for each t,

(1/2)·𝔼[𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[(f(π_t) - f̄_t(π_t))²]] ≤ H(f^*|𝒟_{t-1}) - H(f^*|𝒟_t).

From here the result will follow, because

(1/2)·𝔼[∑_{t=1}^T 𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[(f(π_t) - f̄_t(π_t))²]] ≤ ∑_{t=1}^T H(f^*|𝒟_{t-1}) - H(f^*|𝒟_t) = H(f^*|𝒟_0) - H(f^*|𝒟_T) ≤ H(f^*|𝒟_0) ≤ log|ℱ|,

where the last inequality follows because the entropy of a random variable X over a set 𝒳 is always bounded by log|𝒳|.

We proceed to prove eq:entropy_potential. To begin, we use lem:pinsker_subgaussian, which implies that

(1/2)·(f(π_t) - f̄_t(π_t))² ≤ KL(ℙ_{r_t|f^*=f,π_t,𝒟_{t-1}} ‖ ℙ_{r_t|π_t,𝒟_{t-1}})

and

(1/2)·𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[(f(π_t) - f̄_t(π_t))²] ≤ 𝔼_{f∼μ_t} 𝔼_{π_t∼p_t}[KL(ℙ_{r_t|f^*=f,π_t,𝒟_{t-1}} ‖ ℙ_{r_t|π_t,𝒟_{t-1}})].

Since the KL divergence satisfies 𝔼_{x∼ℙ_X}[KL(ℙ_{Y|X=x} ‖ ℙ_Y)] = 𝔼_{y∼ℙ_Y}[KL(ℙ_{X|Y=y} ‖ ℙ_X)], the right-hand side is equal to

𝔼_{t-1}[KL(ℙ_{f^*|π_t,r_t,𝒟_{t-1}} ‖ ℙ_{f^*|𝒟_{t-1}})] = 𝔼_{t-1}[KL(ℙ_{f^*|𝒟_t} ‖ ℙ_{f^*|𝒟_{t-1}})].

Taking the expectation over 𝒟_{t-1}, we can write this as

𝔼[𝔼_{t-1}[KL(ℙ_{f^*|𝒟_t} ‖ ℙ_{f^*|𝒟_{t-1}})]] = 𝔼_{𝒟_{t-1}} 𝔼_{𝒟_t|𝒟_{t-1}}[KL(ℙ_{f^*|𝒟_t} ‖ ℙ_{f^*|𝒟_{t-1}})].

A simple exercise shows that for random variables X, Y, Z,

𝔼_{(x,y)∼ℙ_{X,Y}}[KL(ℙ_{Z|X=x,Y=y} ‖ ℙ_{Z|X=x})] = H(Z|X) - H(Z|X,Y).

Applying this result above (and using that 𝒟_{t-1} ⊂ 𝒟_t) gives

𝔼_{𝒟_{t-1}} 𝔼_{𝒟_t|𝒟_{t-1}}[KL(ℙ_{f^*|𝒟_t} ‖ ℙ_{f^*|𝒟_{t-1}})] = H(f^*|𝒟_{t-1}) - H(f^*|𝒟_t),

as desired.

The analysis above critically makes use of the fact that we are concerned with Bayesian regret and have access to the true prior. One might hope that by choosing a sufficiently uninformative prior, this approach would continue to work in the frequentist setting. In fact, this is indeed the case for bandits, though a different analysis is required <cit.>. However, one can show (sec:structured, sec:general_dm) that the Bayesian analysis we have given here extends to significantly richer decision making settings, while the frequentist counterpart is limited to simple variants of the multi-armed bandit.
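To make posterior sampling concrete in the Gaussian setting used by the simplified proof above, here is a sketch with a uniform prior over a finite class of candidate mean-reward vectors (all names and the uniform-prior choice are illustrative assumptions, not part of the course text).

```python
import numpy as np

class PosteriorSampling:
    """Sketch of posterior sampling with r ~ N(f(pi), 1) and a finite class F,
    under a uniform prior mu over F."""
    def __init__(self, F, seed=0):
        self.F = np.asarray(F, dtype=float)  # shape (|F|, A); each row is one f
        self.logpost = np.zeros(len(F))      # log mu_t up to a constant
        self.rng = np.random.default_rng(seed)
    def select(self):
        p = np.exp(self.logpost - self.logpost.max())
        p /= p.sum()
        f = self.F[self.rng.choice(len(self.F), p=p)]  # sample f ~ mu_t
        return int(np.argmax(f))                        # play pi_f, matching p_t
    def update(self, arm, r):
        # Gaussian log-likelihood up to a constant: -(r - f(arm))^2 / 2
        self.logpost += -0.5 * (r - self.F[:, arm]) ** 2
```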
Using the minimax theorem, it is possible to show that under appropriate technical conditions,

min_{Alg} max_{M∈ℳ} 𝔼^{M,Alg}[Reg] = max_{μ∈Δ(ℳ)} min_{Alg} 𝔼_{M∼μ} 𝔼^{M,Alg}[Reg].

That is, if we take the worst-case value of the Bayesian regret over all possible choices of prior, this coincides with the minimax value of the frequentist regret.

§.§ Adversarial Bandits and the Exp3 Algorithm

We conclude this section with a brief introduction to the multi-armed bandit problem with non-stochastic/adversarial rewards, which dispenses with asm:stochastic_rewards_MAB. In the context of fig:axes, the non-stochastic nature of rewards adds a new “adversarial data” dimension to the problem. As one might expect, the solution we will present for non-stochastic bandits will leverage the online learning tools introduced in <ref>.

To simplify the presentation, suppose that the collection of rewards

{r_t(π)∈[0,1] : π∈[A], t∈[T]}

for each action and time step is arbitrary and fixed ahead of the interaction by an oblivious adversary. Since we do not posit a stochastic model for rewards, we define regret as in (<ref>).

The algorithm we present will build upon the exponential weights algorithm studied in the context of online supervised learning in <ref>. To make the connection as clear as possible, we make a temporary switch from rewards to losses, mapping r_t to 1-r_t, a transformation that does not change the problem itself.

Recall that p_t denotes the randomization distribution for the learner at round t. As discussed in rem:online_linear_opt, we can write the expected regret as

Reg = ∑_{t=1}^T ⟨p_t, ℓ_t⟩ - min_{π∈[A]} ∑_{t=1}^T ⟨e_π, ℓ_t⟩,

where ℓ_t∈[0,1]^A is the vector of losses for each of the actions at time t. Since only the loss (equivalently, reward) of the chosen action π_t∼p_t is observed, we cannot directly appeal to the exponential weights algorithm, which requires knowledge of the full vector ℓ_t. To address this, we build an unbiased estimate of the vector ℓ_t from a single real-valued observation ℓ_t(π_t). At first, this might appear impossible, but it is straightforward to show that

ℓ̃_t(π) = (ℓ_t(π)/p_t(π))·𝟙{π_t=π}

is an unbiased estimate for all π∈[A], or in vector notation,

𝔼_{π_t∼p_t}[ℓ̃_t] = ℓ_t.

If we apply the exponential weights algorithm with the loss vectors ℓ̃_t, it can be shown to attain regret

𝔼[Reg] = 𝔼[∑_{t=1}^T ⟨p_t, ℓ_t⟩ - min_π ∑_{t=1}^T ⟨e_π, ℓ_t⟩] = 𝔼[∑_{t=1}^T ⟨p_t, ℓ_t⟩] - min_π 𝔼[∑_{t=1}^T ⟨e_π, ℓ_t⟩] ≲ √(AT log A).

This algorithm is known as Exp3 (“Exponential Weights for Exploration and Exploitation”). A full proof of this result is left as an exercise in <ref>.

§.§ Deferred Proofs

[Proof of prop:posterior_mab] Let 𝔼_t[·] = 𝔼[·|𝒟_t] and ℙ_t(·) = ℙ(·|𝒟_t). We begin by using the law of total expectation to express the expected regret as

Reg(μ) = 𝔼[∑_{t=1}^T 𝔼_{t-1}[f^*(π^*) - f^*(π_t)]].

Here and throughout the proof, 𝔼[·] denotes the joint expectation over both f^*∼μ and the sequence 𝒟_T = (π_1,r_1),…,(π_T,r_T) that the algorithm generates by interacting with M^*.

We first simplify the (conditional) expected regret for each step t. Let f̄_t(π) ≜ 𝔼_{t-1}[f^*(π)] denote the posterior mean reward function at time t, which should be thought of as the expected value of f^* given everything we have learned so far. Next, let f̄_t^{π'}(π) = 𝔼_{t-1}[f^*(π) | π^* = π'], which is the expected reward given everything we have learned so far, assuming that π^* = π'. We proceed to write the expression

𝔼_{t-1}[f^*(π^*) - f^*(π_t)]

in terms of these quantities. For the learner's reward, since π_t is conditionally independent of f^* given 𝒟_{t-1}, we have

𝔼_{t-1}[f^*(π_t)] = 𝔼_{π∼p_t}[f̄_t(π)].

For the reward of the optimal action, we begin by writing

𝔼_{t-1}[f^*(π^*)] = ∑_{π∈Π} ℙ_{t-1}(π^*=π)·𝔼_{t-1}[f^*(π)|π^*=π] = ∑_{π∈Π} ℙ_{t-1}(π^*=π)·f̄_t^π(π) = 𝔼_{π∼p_t}[f̄_t^π(π)],

where we have used that p_t was chosen to match the posterior distribution over π^*.
This establishes that

𝔼_{t-1}[f^*(π^*) - f^*(π_t)] = 𝔼_{π∼p_t}[f̄_t^π(π) - f̄_t(π)].

We now make use of the following decoupling-type inequality, which follows from <ref>:

𝔼_{π∼p_t}[f̄_t^π(π) - f̄_t(π)] ≤ √(A·𝔼_{π,π^*∼p_t}[(f̄_t^{π^*}(π) - f̄_t(π))²]),

where π and π^* on the right-hand side are drawn independently from p_t. To keep notation as clear as possible going forward, we use boldface symbols for the abstract random variables under consideration and non-boldface symbols for their realizations. As in the simplified proof, we will show that the right-hand side in eq:mab_decoupling is related to a notion of information gain (that is, information about π^* acquired at step t). Using Pinsker's inequality, we have

𝔼_{π_t,π^*∼p_t}[(f̄_t^{π^*}(π_t) - f̄_t(π_t))²] ≲ 𝔼_{t-1}[KL(ℙ_{r_t|π^*,π_t,𝒟_{t-1}} ‖ ℙ_{r_t|π_t,𝒟_{t-1}})].

Since the KL divergence satisfies 𝔼_X[KL(ℙ_{Y|X} ‖ ℙ_Y)] = 𝔼_Y[KL(ℙ_{X|Y} ‖ ℙ_X)], this is equal to

𝔼_{t-1}[KL(ℙ_{π^*|π_t,r_t,𝒟_{t-1}} ‖ ℙ_{π^*|𝒟_{t-1}})] = 𝔼_{t-1}[KL(ℙ_{π^*|𝒟_t} ‖ ℙ_{π^*|𝒟_{t-1}})].

This quantifies how much information about π^* we gain by playing π_t and observing r_t at step t, relative to what we knew at step t-1. Applying eq:mab_decoupling and eq:mab_info_gain, we have

∑_{t=1}^T 𝔼_{π∼p_t}[f̄_t^π(π) - f̄_t(π)] ≤ ∑_{t=1}^T √(A·𝔼_{π,π^*∼p_t}[(f̄_t^{π^*}(π) - f̄_t(π))²])
≲ ∑_{t=1}^T √(A·𝔼_{t-1}[KL(ℙ_{π^*|𝒟_t} ‖ ℙ_{π^*|𝒟_{t-1}})])
≤ √(AT·∑_{t=1}^T 𝔼[KL(ℙ_{π^*|𝒟_t} ‖ ℙ_{π^*|𝒟_{t-1}})]).

We can write

𝔼[KL(ℙ_{π^*|𝒟_t} ‖ ℙ_{π^*|𝒟_{t-1}})] = H(π^*|𝒟_{t-1}) - H(π^*|𝒟_t),

so telescoping gives

∑_{t=1}^T 𝔼[KL(ℙ_{π^*|𝒟_t} ‖ ℙ_{π^*|𝒟_{t-1}})] = H(π^*|𝒟_0) - H(π^*|𝒟_T) ≤ log(A).

§.§ Exercises

[Adversarial Bandits] In this exercise, we will prove a regret bound for adversarial bandits (<ref>), where the sequence of rewards (losses) is non-stochastic. To make a direct connection to the Exponential Weights Algorithm, we switch from rewards to losses, mapping r_t to 1-r_t, a transformation that does not change the problem itself. To simplify the presentation, suppose that a collection of losses

{ℓ_t(π)∈[0,1] : π∈[A], t∈[T]}

for each action π and time step t is arbitrary and chosen before round t=1; this is referred to as an oblivious adversary. We denote by ℓ_t = (ℓ_t(1),…,ℓ_t(A)) the vector of losses at time t. The protocol for adversarial multi-armed bandits (with losses) mirrors the one above, with ℓ_t(π_t) observed in place of r_t. Let p_t be the randomization distribution of the decision-maker on round t. The expected regret can be written as

𝔼[Reg] = 𝔼[∑_{t=1}^T ⟨p_t, ℓ_t⟩] - min_{π∈[A]} ∑_{t=1}^T ⟨e_π, ℓ_t⟩.

Since only the loss of the chosen action π_t∼p_t is observed, we cannot directly appeal to the Exponential Weights Algorithm. The solution is to build an unbiased estimate of the vector ℓ_t from the single real-valued observation ℓ_t(π_t).

* Prove that the vector ℓ̃_t(·|π_t) defined by

ℓ̃_t(π|π_t) = (ℓ_t(π)/p_t(π))·𝟙{π_t=π}

is an unbiased estimate of ℓ_t(π) for all π∈[A]. In vector notation, this means 𝔼_{π_t∼p_t}[ℓ̃_t(·|π_t)] = ℓ_t. Conclude that

𝔼[Reg] = 𝔼[∑_{t=1}^T 𝔼_{π_t∼p_t}⟨p_t, ℓ̃_t⟩] - min_{π∈[A]} 𝔼[∑_{t=1}^T 𝔼_{π_t∼p_t}⟨e_π, ℓ̃_t⟩].

Above, we use the shorthand ℓ̃_t = ℓ̃_t(·|π_t).

* Show that given π',

𝔼_{π∼p_t}[ℓ̃_t(π|π')²] = ℓ_t(π')²/p_t(π'),   so that   𝔼_{π_t∼p_t} 𝔼_{π∼p_t}[ℓ̃_t(π|π_t)²] ≤ A.

* Define

p_t(π) ∝ exp{-η ∑_{s=1}^{t-1} ⟨e_π, ℓ̃_s(·|π_s)⟩},

which corresponds to the exponential weights algorithm on the estimated losses ℓ̃_s. Apply eq:second_order_ewa to the estimated losses to show that for any π∈[A],

𝔼[∑_{t=1}^T 𝔼_{π_t∼p_t}⟨p_t, ℓ̃_t⟩] - 𝔼[∑_{t=1}^T 𝔼_{π_t∼p_t}⟨e_π, ℓ̃_t⟩] ≲ √(AT log A).

Hence, the price of bandit feedback in the adversarial model, as compared to full-information online learning, is only √A.

§ CONTEXTUAL BANDITS

In the last section, we studied the multi-armed bandit problem, which is arguably the simplest framework for interactive decision making. This simplicity comes at a cost: few real-world problems can be modeled as a multi-armed bandit directly. For example, for the problem of selecting medical treatments, the multi-armed bandit formulation presupposes that one treatment rule (action/decision) is good for all patients, which is clearly unreasonable.
To address this, we augment the problem formulation by allowing the decision-maker to select the action π_t after observing a context x_t; this is called the contextual bandit problem. The context x_t, which may also be thought of as a feature vector or collection of covariates (e.g., a patient's medical history, or the profile of a user arriving at a website), can be used by the learner to better maximize rewards by tailoring decisions to the specific patient or user under consideration.

As with multi-armed bandits, contextual bandits can be studied in a stochastic framework or in an adversarial framework. In this course, we will allow the contexts x_1,…,x_T to be generated in an arbitrary, potentially adversarial fashion, but assume that rewards are generated from a fixed conditional distribution.

[Stochastic Rewards] Rewards are generated independently via

r_t ∼ M^*(·|x_t, π_t),

where M^*(·|·,·) is the underlying model (or conditional distribution).

This generalizes the stochastic multi-armed bandit framework in sec:mab. We define

f^*(x,π) ≜ 𝔼[r|x,π]

as the mean reward function under r∼M^*(·|x,π), and define π^*(x) ≜ argmax_{π∈Π} f^*(x,π) as the optimal policy, which maps each context x to the optimal action for that context. We measure performance via regret relative to π^*:

Reg ≜ ∑_{t=1}^T f^*(x_t, π^*(x_t)) - ∑_{t=1}^T 𝔼_{π_t∼p_t}[f^*(x_t, π_t)],

where p_t∈Δ(Π) is the learner's action distribution at step t (conditioned on 𝒟_{t-1} and x_t). This provides a (potentially) much stronger notion of performance than what we considered for the multi-armed bandit: rather than competing with the reward of the single best action, we are competing with the reward of the best sequence of decisions tailored to the context sequence we observe.

To readers already familiar with reinforcement learning, the contextual bandit setting may appear quite similar at first glance, with the term “context” replacing “state.” The key difference is that in reinforcement learning, we aim to control the evolution of x_1,…,x_T (which is why they are referred to as states), whereas in contextual bandits, we take the sequence as given, and only aim to maximize our rewards conditioned on the sequence.

Function approximation and desiderata. If 𝒳, the set of possible contexts, is finite, one might imagine running a separate MAB algorithm for each context. In this case, the regret bound would scale with |𝒳|,[One can show that running an independent instance of UCB for each context leads to regret O(√(AT·|𝒳|)); see ex:unstructured_cb.] an undesirable property which reflects the fact that this approach does not allow for generalization across contexts. Instead, we would like to share information between different contexts. After all, a doctor prescribing treatments might never observe exactly the same medical history and symptoms twice, but they might see similar patients or recognize underlying patterns. In the spirit of statistical learning (sec:intro), this means assuming access to a class ℱ that can model the mean reward function, and aiming for regret bounds that scale with log|ℱ| (reflecting the statistical capacity of ℱ), with no dependence on the cardinality of 𝒳. To facilitate this, we will assume a well-specified/realizable model.

The decision-maker has access to a class ℱ ⊂ {f : 𝒳×Π→ℝ} such that f^*∈ℱ.

Using the class ℱ, we would like to develop algorithms that can model the underlying reward function for better decision making performance. With this goal in mind, it is reasonable to try leveraging the algorithms and respective guarantees we have already seen for statistical and online supervised learning.
At this point, however, the decision-making problem—with its exploration-exploitation dilemma—appears to be quite distinct from these supervised learning frameworks. Indeed, naively applying supervised learning methods, which do not account for the interactive nature of the problem, can lead to failure, as we saw with the greedy algorithm in <ref>. In spite of these apparent difficulties, in the next few lectures we will show that it is possible to leverage supervised learning methods to develop provable decision making methods, thereby bridging the two methodologies.

§.§ Optimism: Generic Template

What algorithmic principles should we employ to solve the contextual bandit problem? One approach is to adapt solutions from the multi-armed bandit setting. There, we saw that the principle of optimism (in particular, the UCB algorithm) led to (nearly) optimal rates for bandits, so a natural question is whether optimism can be adapted to give optimal guarantees in the presence of contexts. The answer to this last question is: it depends. We will first describe some positive results under assumptions on ℱ, then provide a negative example, and finally turn to an entirely different algorithmic principle.

Optimism via confidence sets. Let us describe a general approach (or, template) for applying the principle of optimism to contextual bandits <cit.>. Suppose that at each time, we have a way to construct a confidence set ℱ_t ⊆ ℱ based on the data observed so far, with the important property that f^*∈ℱ_t. Given such a confidence set, we can define upper and lower confidence functions f̄_t, f̲_t : 𝒳×Π→ℝ via

f̲_t(x,π) = min_{f∈ℱ_t} f(x,π),   f̄_t(x,π) = max_{f∈ℱ_t} f(x,π).

These functions generalize the upper and lower confidence bounds we constructed in <ref>. Since f^*∈ℱ_t, they have the property that

f̲_t(x,π) ≤ f^*(x,π) ≤ f̄_t(x,π)

for all x∈𝒳, π∈Π. As such, if we consider a contextual analogue of the UCB algorithm, given by

π_t = argmax_{π∈Π} f̄_t(x_t,π),

then as in lem:regret_optimistic, the optimistic action satisfies

f^*(x_t, π^*(x_t)) - f^*(x_t, π_t) ≤ f̄_t(x_t,π_t) - f̲_t(x_t,π_t).

That is, the suboptimality is bounded by the width of the confidence interval at (x_t,π_t), and the total regret is bounded as

Reg ≤ ∑_{t=1}^T f̄_t(x_t,π_t) - f̲_t(x_t,π_t).

To make this approach concrete and derive sublinear bounds on the regret, we need a way to construct the confidence set ℱ_t, ideally so that the width in eq:width_cb shrinks as fast as possible.

Constructing confidence sets with least squares. We construct confidence sets by appealing to a supervised learning method: empirical risk minimization with the square loss (or, least squares). Assume that f(x,a)∈[0,1] for all f∈ℱ, and that r_t∈[0,1] almost surely. Let

f̂_t = argmin_{f∈ℱ} ∑_{i=1}^{t-1} (f(x_i,π_i) - r_i)²

be the empirical risk minimizer at round t, and with β ≜ 8log(|ℱ|/δ) define ℱ_1 = ℱ and

ℱ_t = {f∈ℱ : ∑_{i=1}^{t-1} (f(x_i,π_i) - r_i)² ≤ ∑_{i=1}^{t-1} (f̂_t(x_i,π_i) - r_i)² + β}

for t>1. That is, our confidence set ℱ_t is the collection of all functions whose empirical squared error is close to that of f̂_t. The idea behind this construction is to set β “just large enough” to ensure that we do not accidentally exclude f^*, with the precise value for β informed by the concentration inequalities we explored in sec:intro. The only catch here is that we need to use variants of these inequalities that handle dependent data, since the pairs (x_t, π_t) are not i.i.d. The following result shows that ℱ_t is indeed valid and, moreover, that all functions f∈ℱ_t have low estimation error on the history.

Let π_1,…,π_T be chosen by an arbitrary (and possibly randomized) decision-making algorithm. With probability at least 1-δ, f^*∈ℱ_t for all t∈[T].
Moreover, with probability at least 1-δ, for all τ≤T, all f∈ℱ_τ satisfy

∑_{t=1}^{τ-1} 𝔼_{π_t∼p_t}[(f(x_t,π_t) - f^*(x_t,π_t))²] ≤ 4β,

where β = 8log(|ℱ|/δ).

lem:valid_CI is valid for any algorithm, but it is particularly useful for UCB as it establishes the validity of the confidence bounds as per (<ref>); however, it is not yet enough to show that the algorithm attains low regret. Indeed, to bound the regret, we need to control the confidence widths in eq:width_cb, but there is a mismatch: for step τ, the regret bound in eq:width_cb considers the width at (x_τ, π_τ), but eq:confidence_set_estimation only ensures closeness of functions in ℱ_τ under (x_1,π_1),…,(x_{τ-1},π_{τ-1}). We will show in the sequel that for linear models it is possible to control this mismatch, but that this is not possible in general.

For f∈ℱ, define

U_t(f) = (f(x_t,π_t) - r_t)² - (f^*(x_t,π_t) - r_t)².

It is straightforward to check that[We leave 𝔼_{t-1} on the right-hand side to include the case of randomized decisions π_t∼p_t.]

𝔼_{t-1} U_t(f) = 𝔼_{t-1}(f(x_t,π_t) - f^*(x_t,π_t))²,

where 𝔼_{t-1}[·] ≜ 𝔼[·|𝒟_{t-1}, x_t]. Then Z_t(f) = 𝔼_{t-1} U_t(f) - U_t(f) is a martingale difference sequence and ∑_{t=1}^τ Z_t(f) is a martingale. Since the increments Z_t(f) are bounded as |Z_t(f)| ≤ 1 (this holds whenever f∈[0,1], r_t∈[0,1]), according to lem:freedman with η=1/8, with probability at least 1-δ, for all τ≤T,

∑_{t=1}^τ Z_t(f) ≤ (1/8) ∑_{t=1}^τ 𝔼_{t-1}[Z_t(f)²] + 8log(δ^{-1}).

To control the right-hand side, we again use that f, r_t∈[0,1] to bound

𝔼_{t-1}[Z_t(f)²] ≤ 𝔼_{t-1}[((f(x_t,π_t) - r_t)² - (f^*(x_t,π_t) - r_t)²)²] ≤ 4𝔼_{t-1}[(f(x_t,π_t) - f^*(x_t,π_t))²] = 4𝔼_{t-1} U_t(f).

Then, after rearranging, (<ref>) becomes

(1/2) ∑_{t=1}^τ 𝔼_{t-1} U_t(f) ≤ ∑_{t=1}^τ U_t(f) + 8log(δ^{-1}).

Since the left-hand side is nonnegative, we conclude that with probability at least 1-δ,

∑_{t=1}^τ (f^*(x_t,π_t) - r_t)² ≤ ∑_{t=1}^τ (f(x_t,π_t) - r_t)² + 8log(δ^{-1}).

Taking a union bound over f∈ℱ gives that with probability at least 1-δ,

∀f∈ℱ, ∀τ∈[T],   ∑_{t=1}^τ (f^*(x_t,π_t) - r_t)² ≤ ∑_{t=1}^τ (f(x_t,π_t) - r_t)² + 8log(|ℱ|/δ),

and in particular,

∀τ∈[T+1],   ∑_{t=1}^{τ-1} (f^*(x_t,π_t) - r_t)² ≤ ∑_{t=1}^{τ-1} (f̂_τ(x_t,π_t) - r_t)² + 8log(|ℱ|/δ);

that is, we have f^*∈ℱ_τ for all τ∈{1,…,T+1}, proving the first claim. For the second part of the claim, observe that any f∈ℱ_τ must satisfy

∑_{t=1}^{τ-1} U_t(f) ≤ β,

since the empirical risk of f^* is never better than the empirical risk of the minimizer f̂_τ. Thus from (<ref>), with probability at least 1-δ, for all τ≤T,

∑_{t=1}^{τ-1} 𝔼_{t-1} U_t(f) ≤ 2β + 16log(δ^{-1}).

The second claim follows by taking a union bound over f∈ℱ_τ ⊆ ℱ, and by (<ref>).

§.§ Optimism for Linear Models: The LinUCB Algorithm

We now instantiate the general template for optimistic algorithms developed in the previous section for the special case where ℱ is a class of linear functions.

Linear models. We fix a feature map φ : 𝒳×Π→ℬ_2^d(1), where ℬ_2^d(1) is the unit-norm Euclidean ball in ℝ^d. The feature map is assumed to be known to the learning agent. For example, in the case of medical treatments, φ transforms the medical history and symptoms x for the patient, along with a possible treatment π, into a representation φ(x,π)∈ℬ_2^d(1). We take ℱ to be the set of linear functions given by

ℱ = {(x,π) ↦ ⟨θ, φ(x,π)⟩ | θ∈Θ},

where Θ ⊆ ℬ_2^d(1) is the parameter set. As before, we assume f^*∈ℱ; we let θ^* denote the corresponding parameter vector, so that f^*(x,π) = ⟨θ^*, φ(x,π)⟩. With some abuse of notation, we associate the set of parameters Θ with the corresponding functions in ℱ.

To apply the technical results in the previous section, we assume for simplicity that Θ (equivalently, ℱ) is finite. To extend our results to potentially non-finite sets, one can work with an ε-discretization, or ε-net, which has size at most O(ε^{-d}) by standard arguments.
Taking ε ∼ 1/T ensures only a constant loss in cumulative regret relative to the continuous set of parameters, while log|ℱ| ≲ d log T.

The LinUCB algorithm. The following figure displays an algorithm we refer to as LinUCB <cit.>, which adapts the generic template for optimistic algorithms to the case where ℱ is linear in the sense of (<ref>). The following result shows that LinUCB enjoys a regret bound that scales with the complexity log|ℱ| of the model class and the feature dimension d.

Let Θ ⊆ ℬ_2^d(1) and fix φ : 𝒳×Π→ℬ_2^d(1). For a finite set ℱ of linear functions (<ref>), taking β = 8log(|ℱ|/δ), LinUCB satisfies, with probability at least 1-δ,

Reg ≲ √(β d T log(1+T/d)) ≲ √(dT log(|ℱ|/δ) log(1+T/d))

for any sequence of contexts x_1,…,x_T. More generally, for infinite ℱ, we may take β = O(d log T)[This follows from a simple covering number argument.] and obtain Reg ≲ d√T log T.

Notably, this regret bound has no explicit dependence on the context space size |𝒳|. Interestingly, the bound is also independent of the number of actions |Π|, which is replaced by the dimension d; this reflects the fact that the linear structure of ℱ allows the learner to generalize not just across contexts, but across decisions. We will expand upon the idea of generalizing across actions in <ref>.

The confidence set (<ref>) in the generic optimistic algorithm template is

ℱ_t = {θ∈Θ : ∑_{i=1}^{t-1} (⟨θ, φ(x_i,π_i)⟩ - r_i)² ≤ ∑_{i=1}^{t-1} (⟨θ_t, φ(x_i,π_i)⟩ - r_i)² + β},

where θ_t is the least squares solution computed in LinUCB. According to lem:valid_CI, with probability at least 1-δ, for all t∈[T], all θ∈ℱ_t satisfy

∑_{i=1}^{t-1} ⟨θ - θ^*, φ(x_i,π_i)⟩² ≤ 4β,

which means that ℱ_t is a subset of[For a PSD matrix Σ ⪰ 0, we define ‖x‖_Σ = √(⟨x, Σx⟩).]

Θ' = {θ∈Θ : ‖θ - θ^*‖²_{Σ_t} ≤ 4β},   where   Σ_t = ∑_{i=1}^{t-1} φ(x_i,π_i)φ(x_i,π_i)^⊤.

Since θ_t∈ℱ_t, we have that for any θ∈Θ', by the triangle inequality,

‖θ - θ_t‖²_{Σ_t} ≤ 16β.

Furthermore, since θ, θ_t∈Θ ⊆ ℬ_2^d(1), we have ‖θ - θ_t‖_2 ≤ 2. Combining the two constraints into one, we find that Θ' is a subset of

Θ'' = {θ∈ℝ^d : ‖θ - θ_t‖²_{Σ̄_t} ≤ 16β + 4},   where   Σ̄_t = ∑_{i=1}^{t-1} φ(x_i,π_i)φ(x_i,π_i)^⊤ + I.

The definition of f̄_t in (<ref>) and the inclusion Θ' ⊆ Θ'' imply that

f̄_t(x,π) ≤ max_{θ : ‖θ-θ_t‖_{Σ̄_t} ≤ √(16β+4)} ⟨θ, φ(x,π)⟩ = ⟨θ_t, φ(x,π)⟩ + √(16β+4)·‖φ(x,π)‖_{Σ̄_t^{-1}},

and similarly

f̲_t(x,π) ≥ ⟨θ_t, φ(x,π)⟩ - √(16β+4)·‖φ(x,π)‖_{Σ̄_t^{-1}}.

We conclude that the regret of the UCB algorithm, in view of lem:regret_optimistic, is

Reg ≤ 2√(16β+4) ∑_{t=1}^T ‖φ(x_t,π_t)‖_{Σ̄_t^{-1}} ≲ √(βT·∑_{t=1}^T ‖φ(x_t,π_t)‖²_{Σ̄_t^{-1}}).

The above upper bound has the same flavor as the one in lem:confidence_width_potential: as we obtain more and more information in some direction v, the matrix Σ̄_t has a larger and larger component in that direction, and for that direction v, the term ‖v‖²_{Σ̄_t^{-1}} becomes smaller and smaller. To conclude, we apply a potential argument, <ref> below, to bound

∑_{t=1}^T ‖φ(x_t,π_t)‖²_{Σ̄_t^{-1}} ≲ d log(1+T/d).

The following result is referred to as the elliptic potential lemma, and it can be thought of as a generalization of lem:confidence_width_potential.

Let a_1,…,a_T∈ℝ^d satisfy ‖a_t‖ ≤ 1 for all t∈[T], and let V_t = I + ∑_{s≤t} a_s a_s^⊤. Then

∑_{t=1}^T ‖a_t‖²_{V_{t-1}^{-1}} ≤ 2d log(1 + T/d).

First, the determinant of V_t evolves as

det(V_t) = det(V_{t-1})·(1 + ‖a_t‖²_{V_{t-1}^{-1}}).

Second, using the identity u∧1 ≤ 2log(1+u) for u≥0, the left-hand side of (<ref>) is at most 2∑_{t=1}^T log(1 + ‖a_t‖²_{V_{t-1}^{-1}}). The proof concludes by upper bounding the determinant of V_T via the AM-GM inequality.
We leave the details as an exercise; see also <cit.>.

§.§ Moving Beyond Linear Classes: Challenges

We now present an example of a class ℱ for which optimistic methods necessarily incur regret that scales linearly with either the cardinality of ℱ or the cardinality of 𝒳, meaning that we do not achieve the desired log|ℱ| scaling of regret that one might expect in (offline or online) supervised learning.

[Failure of optimism for contextual bandits <cit.>] Let A=2, and let N∈ℕ be given. Let π_g and π_b be two actions available in each context, so that Π = {π_g, π_b}. Let 𝒳 = {x_1,…,x_N} be a set of distinct contexts, and define a class ℱ = {f^*, f_1,…,f_N} of cardinality N+1 as follows. Fix 0<ε<1. Let f^*(x, π_g) = 1-ε and f^*(x, π_b) = 0 for any x∈𝒳. For each i∈[N], f_i(x_j, π_g) = 1-ε and f_i(x_j, π_b) = 0 for j≠i, while f_i(x_i, π_g) = 0 and f_i(x_i, π_b) = 1.

Now, consider a (well-specified) problem instance in which rewards are deterministic and given by

r_t = f^*(x_t, π_t),

which we note is a constant function with respect to the context. Since f^* is the true model, π_g is always the best action, bringing a reward of 1-ε per round. Any time π_b is chosen, the decision-maker incurs instantaneous regret 1-ε. We will now argue that if we apply the generic optimistic algorithm from <ref>, it will choose π_b every time a new context is encountered, leading to Ω(N) regret.

Let S_t be the set of distinct contexts encountered before round t. Clearly, the exact minimizers of the empirical square loss (see (<ref>)) are f^* and all f_i where i is such that x_i∉S_t. Hence, for any choice of β≥0, the confidence set in (<ref>) contains all f_i for which x_i∉S_t. This implies that for each t∈[T] where x_t = x_i∉S_t, action π_b has a higher upper confidence bound than π_g, since

f̄_t(x_t, π_b) = f_i(x_i, π_b) = 1 > f̄_t(x_t, π_g) = f^*(x_t, π_g) = 1-ε.

Hence, the cumulative regret grows by 1-ε every time a new context is presented, and thus scales as Ω(N(1-ε)) if the contexts are presented in order. That is, since N = |𝒳| = |ℱ|-1, the confidence-based algorithm fails to achieve logarithmic dependence on |ℱ| (note that we may take ε=1/2 for concreteness).

Let us remark that this failure continues even if contexts are stochastic. If the contexts are chosen via the uniform distribution on 𝒳, then for T≥N, at least a constant proportion of the domain will be presented, which still leads to a lower bound of

Reg = Ω(N) = Ω(min{|𝒳|, |ℱ|}).

What is behind the failure of optimism in this example? The structure of ℱ forces optimistic methods to over-explore, as the algorithm puts too much hope into trying the arm π_b for each new context. As a result, the confidence widths in eq:width_cb do not shrink quickly enough. Below, we will see that there are alternative methods which do enjoy logarithmic dependence on the size of ℱ, with the best of these methods achieving regret O(√(AT log|ℱ|)). We mention in passing that even though optimism does not succeed in general, it is useful to understand in what cases it works. We saw that the structure of linear classes in ℝ^d only allowed for d “different” directions, while in the example above, the optimistic algorithm gets tricked by each new context, and is not able to shrink the confidence band quickly enough over the domain. In a few lectures (sec:structured), we will introduce the eluder dimension, a structural property of the class ℱ which is sufficient for optimistic methods to enjoy low regret, generalizing the positive result for the linear setting.
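Returning to the linear setting, where optimism does succeed, LinUCB is straightforward to implement. The sketch below is our own illustration: it folds the identity term of Σ̄_t into a ridge-regularized least squares solve (a standard implementable variant), and `beta` stands for the squared confidence radius (of order β up to the constants 16β+4 in the proof); the feature map `phi` is any assumption supplied by the user.

```python
import numpy as np

class LinUCB:
    """Sketch of LinUCB for d-dimensional features phi(x, a), a in {0,...,A-1}."""
    def __init__(self, phi, d, A, beta):
        self.phi, self.A, self.beta = phi, A, beta
        self.Sigma = np.eye(d)    # Sigma_t = I + sum_i phi phi^T
        self.b = np.zeros(d)      # sum_i r_i * phi(x_i, pi_i)
    def select(self, x):
        Sigma_inv = np.linalg.inv(self.Sigma)
        theta = Sigma_inv @ self.b          # ridge least squares solution
        scores = []
        for a in range(self.A):
            v = self.phi(x, a)
            width = np.sqrt(self.beta * (v @ Sigma_inv @ v))  # ||v||_{Sigma^-1}
            scores.append(theta @ v + width)                  # fbar_t(x, a)
        return int(np.argmax(scores))
    def update(self, x, a, r):
        v = self.phi(x, a)
        self.Sigma += np.outer(v, v)
        self.b += r * v
```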
§.§ The ε-Greedy Algorithm for Contextual Bandits

Given that the principle of optimism only leads to low regret for classes ℱ with special structure, we are left wondering whether there are more general algorithmic principles for decision making that can succeed for any class ℱ. In this section and the following one, we will present two such principles. Both approaches will still make use of supervised learning with the class ℱ, but will build upon online supervised learning as opposed to offline/statistical learning. To make the use of supervised learning as modular as possible, we will abstract it away using the notion of an online regression oracle <cit.>.

At each time t∈[T], an online regression oracle returns, given

(x_1,π_1,r_1),…,(x_{t-1},π_{t-1},r_{t-1})

with 𝔼[r_i|x_i,π_i] = f^*(x_i,π_i) and π_i∼p_i, a function f̂_t : 𝒳×Π→ℝ such that

∑_{t=1}^T 𝔼_{π_t∼p_t}[(f̂_t(x_t,π_t) - f^*(x_t,π_t))²] ≤ Est(ℱ, T, δ)

with probability at least 1-δ.

For the results that follow, p_i = p_i(·|x_i, 𝒟_{i-1}) will represent the randomization distribution of a decision-maker. For example, for finite classes, the (averaged) exponential weights method introduced in <ref> is an online regression oracle with Est(ℱ,T,δ) = log(|ℱ|/δ). More generally, in view of lem:well_specified_reg_est, any online learning algorithm that attains low square loss regret for the problem of predicting r_t based on (x_t,π_t) leads to a valid online regression oracle.

Note that we make use of online learning oracles for the results that follow because we aim to derive regret bounds that hold for arbitrary, potentially adversarial sequences x_1,…,x_T. If we instead assume that contexts are i.i.d., it is reasonable to make use of algorithms for offline estimation, or statistical learning with ℱ. See <ref> for further discussion.

The first general-purpose contextual bandit algorithm we will study, illustrated below, is a contextual counterpart to the ε-Greedy method introduced in <ref>. At each step t, the algorithm uses an online regression oracle to compute a reward estimator f̂_t(x,a) based on the data 𝒟_{t-1} collected so far. Given this estimator, the algorithm uses the same sampling strategy as in the non-contextual case: with probability 1-ε, the algorithm chooses the greedy decision

π̂_t = argmax_π f̂_t(x_t, π),

and with probability ε it samples a uniform random action π_t∼unif({1,…,A}).

The following theorem shows that whenever the online estimation oracle has low estimation error Est(ℱ,T,δ), this method achieves low regret.

Assume f^*∈ℱ and f^*(x,a)∈[0,1]. Suppose the decision-maker has access to an online regression oracle (def:online_regression_oracle) with a guarantee Est(ℱ,T,δ). Then by choosing ε appropriately, the ε-Greedy algorithm ensures that with probability at least 1-δ,

Reg ≲ A^{1/3} T^{2/3} · Est(ℱ,T,δ)^{1/3}

for any sequence x_1,…,x_T. As a special case, when ℱ is finite, if we use the (averaged) exponential weights algorithm as an online regression oracle, the ε-Greedy algorithm has

Reg ≲ A^{1/3} T^{2/3} · log^{1/3}(|ℱ|/δ).

Notably, this result scales with log|ℱ| for any finite class, analogous to regret bounds for offline/online supervised learning. The T^{2/3}-dependence in the regret bound is suboptimal (as seen for the special case of non-contextual bandits), which we will address using more deliberate exploration methods in the sequel.

Recall that p_t denotes the randomization strategy on round t, computed after observing x_t. Following the same steps as the proof of <ref>, we can bound the regret by

Reg = ∑_{t=1}^T 𝔼_{π_t∼p_t}[f^*(x_t,π^*(x_t)) - f^*(x_t, π_t)] ≤ ∑_{t=1}^T f^*(x_t,π^*(x_t)) - f^*(x_t,π̂_t) + εT,

where the εT term represents the bias incurred by exploring uniformly. Fix t and abbreviate π^* = π^*(x_t).
We have

f^*(x_t,π^*) - f^*(x_t,π̂_t) = [f^*(x_t,π^*) - f̂_t(x_t,π^*)] + [f̂_t(x_t,π^*) - f̂_t(x_t,π̂_t)] + [f̂_t(x_t,π̂_t) - f^*(x_t,π̂_t)]
≤ ∑_{π∈{π̂_t, π^*}} |f^*(x_t,π) - f̂_t(x_t,π)| = ∑_{π∈{π̂_t, π^*}} (1/√(p_t(π)))·√(p_t(π))·|f^*(x_t,π) - f̂_t(x_t,π)|,

where the middle term is non-positive by the definition of π̂_t. By the Cauchy–Schwarz inequality, the last expression is at most

(∑_{π∈{π̂_t,π^*}} 1/p_t(π))^{1/2} · (∑_{π∈{π̂_t,π^*}} p_t(π)·(f^*(x_t,π) - f̂_t(x_t,π))²)^{1/2} ≤ √(2A/ε)·(𝔼_{π_t∼p_t}[(f^*(x_t,π_t) - f̂_t(x_t,π_t))²])^{1/2},

where we have used that p_t(π) ≥ ε/A for every action. Summing across t, this gives

∑_{t=1}^T f^*(x_t,π^*(x_t)) - f^*(x_t,π̂_t) ≤ √(2A/ε) ∑_{t=1}^T (𝔼_{π_t∼p_t}[(f^*(x_t,π_t) - f̂_t(x_t,π_t))²])^{1/2}
≤ √(2AT/ε)·(∑_{t=1}^T 𝔼_{π_t∼p_t}[(f^*(x_t,π_t) - f̂_t(x_t,π_t))²])^{1/2}.

Now observe that the online regression oracle guarantees that with probability 1-δ,

∑_{t=1}^T 𝔼_{π_t∼p_t}[(f^*(x_t,π_t) - f̂_t(x_t,π_t))²] ≤ Est(ℱ,T,δ).

Whenever this occurs, we have

Reg ≲ √(AT·Est(ℱ,T,δ)/ε) + εT.

Choosing ε to balance the two terms leads to the claimed result.

§.§ Inverse Gap Weighting: An Optimal Algorithm for General Model Classes

To conclude this section, we present a general, oracle-based algorithm for contextual bandits which achieves

Reg ≲ √(AT log|ℱ|)

for any finite class ℱ. As with ε-Greedy, this approach has no dependence on the cardinality |𝒳| of the context space, reflecting the ability to generalize across contexts. The dependence on T improves upon ε-Greedy, and is optimal.

To motivate the approach, recall that conceptually, the key step of the proof of prop:eps_greedy_cb involved relating the instantaneous regret

𝔼_{π_t∼p_t}[f^*(x_t, π^*(x_t)) - f^*(x_t, π_t)]

of the decision-maker at time t to the instantaneous estimation error

𝔼_{π_t∼p_t}[(f^*(x_t,π_t) - f̂_t(x_t,π_t))²]

between f̂_t and f^* under the randomization distribution p_t. The ε-Greedy exploration distribution gives a way to relate these quantities, but the algorithm's regret is suboptimal because the randomization distribution puts mass at least ε/A on every action, even those that are clearly suboptimal and should be discarded.

One can ask whether there exists a better randomization strategy that still admits an upper bound on (<ref>) in terms of (<ref>). prop:igw_mab below establishes exactly that. At first glance, this distribution might appear to be somewhat arbitrary or “magical,” but we will show in subsequent chapters that it arises as a special case of a more general—and in some sense, universal—principle for designing decision making algorithms, which extends well beyond contextual bandits.

Given a vector f̂ = (f̂(1),…,f̂(A))∈ℝ^A, the Inverse Gap Weighting distribution p = IGW_γ(f̂(1),…,f̂(A)) with parameter γ≥0 is defined as

p(π) = 1/(λ + 2γ(f̂(π̂) - f̂(π))),

where π̂ = argmax_π f̂(π) is the greedy action, and where λ∈[1,A] is chosen such that ∑_π p(π) = 1.

Above, the normalizing constant λ∈[1,A] is always guaranteed to exist, because we have 1/λ ≤ ∑_π p(π) ≤ A/λ, and because λ ↦ ∑_π p(π) is continuous over [1,A].

Let us give some intuition behind the distribution in <ref>. We can interpret the parameter γ as trading off exploration and exploitation. Indeed, γ→0 gives a uniform distribution, while γ→∞ amplifies the gap between the greedy action π̂ and any action with f̂(π) < f̂(π̂), resulting in a distribution supported only on actions that achieve the largest estimated value f̂(π̂).

The following fundamental technical result shows that playing the Inverse Gap Weighting distribution always suffices to link the instantaneous regret in (<ref>) to the instantaneous estimation error in (<ref>).

Consider a finite decision space Π={1,…,A}. For any vector f̂∈ℝ^A and γ>0, define p = IGW_γ(f̂(1),…,f̂(A)). This strategy guarantees that for all f^*∈ℝ^A,

𝔼_{π∼p}[f^*(π^*) - f^*(π)] ≤ A/γ + γ·𝔼_{π∼p}[(f̂(π) - f^*(π))²].

We break the “regret” term on the left-hand side of <ref> into three terms:

𝔼_{π∼p}[f^*(π^*) - f^*(π)] = 𝔼_{π∼p}[f̂(π̂) - f̂(π)]   (I) exploration bias
  + 𝔼_{π∼p}[f̂(π) - f^*(π)]   (II) estimation error on policy
  + f^*(π^*) - f̂(π̂)   (III) estimation error at optimum.
The first term asks “how much would we lose by exploring, ifwere the true reward function?”, and is equal to ∑_π() - (π)/λ + 2γ[]() - (π)≤A-1/2γ, while the second term is at most √(_π∼p[]( (π)-(π))^2)≤1/2γ + γ/2_π∼p( (π)-(π))^2. The third term can be further written as ()-() - (() -()) ≤γ/2 p() ()-()^2 + 1/2γ p() - (() -()) ≤γ/2_π∼p( (π) - (π))^2 + *1/2γ p() - (() -()). The term in brackets above is equal to λ + 2γ(() -())/2γ - (() -()) = λ/2γ≤A/2γ. The simple result we just proved is remarkable. The special   strategy guarantees a relation between regret and estimation error for any estimatorand any , irrespective of the problem structure or the class . prop:igw_mab will be at the core of the development for the rest of the course, and will be greatly generalized to general decision making problems and reinforcement learning.Below, we present a contextual bandit algorithm called <cit.> which makes use of the Inverse Gap Weighting distribution.At each step t, the algorithm uses an online regression oracle to compute a reward estimator t(x,a) based on the data t-1 collected so far. Given this estimator, the algorithm uses Inverse Gap Weighting to compute pt=_γ(t(xt,·)) as an exploratory distribution, then samples πt∼pt.The following result, which is a near-immediate consequence of <ref>, gives a regret bound for this algorithm. Given a classwith ∈, assume the decision-maker has access to an online regression oracle (def:online_regression_oracle) with estimation error (, T, δ). Thenwith γ = √(TA / (, T, δ)) attains a regret bound of≲√(A T (, T, δ))with probability at least 1-δ for any sequence x1,…,xT. As a special case, whenis finite, the averaged exponential weights algorithm achieves (, T, δ)≲log(/δ), leading to≲√(A T log(/δ)).We begin with regret, then add and subtract the squared estimation error as follows:= ∑_t=1^T_πt∼pt*(xt, ) - (xt, πt)= ∑_t=1^T_πt∼pt*(xt, ) - (xt, πt) - γ·((xt, πt)-t(xt, πt))^2 + γ·(,T, δ).By appealing to prop:igw_mab with (xt,·) and (xt,·), for each step t, we have_πt∼pt*(xt, ) - (xt, πt) - γ·((xt, πt)-t(xt, πt))^2≤A/γ,and thus≤TA/γ + γ·(, T, δ).Choosing γ to balance these terms yields the result. If the online regression oracle is minimax optimal (that is, (, T, δ) is the “best possible” for ) thenis also minimax optimal for . Thus,not only provides a connection between online supervised learning and decision making, but it does so in an optimal fashion. Establishing minimax optimality is beyond the scope of this course: it requires understanding of minimax optimality of online regression with arbitrary , as well as lower bound on regret of contextual bandits with arbitrary sequences of contexts. We refer to <cit.> for details.§.§.§ Extending to Offline RegressionWhen x1,…,xT are i.i.d., it is natural to ask whether an online regression method that works for arbitrary sequences is necessary, or whether one can work with a weaker oracle tuned to i.i.d. data. For , it turns out that any oracle for offline regression (defined below) is sufficient. Given(x1,π1,r1),…,(xt-1,πt-1,rt-1)where x1,…, xt-1 are i.i.d., πi∼ p(xi) for fixedp:→Δ(Π) and ri|xi, πi = (xi, πi), an offline regression oracle returns a function :×Π→ such that _x, π∼ p(x) ((x,π)-(x,π))^2 ≤ t^-1(, t, δ) with probability at least 1-δ.Note that the normalization t^-1 above is introduced to keep the scaling consistent with our conventions for offline estimation.Below, we state a variant of which is adapted to offline oracles <cit.>. 
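Before turning to the offline-oracle variant, here is a minimal numerical sketch of the Inverse Gap Weighting distribution defined earlier. It exploits the fact that λ ↦ ∑_π p(π) is continuous and decreasing in λ, so the normalizing constant λ∈[1,A] can be found by bisection; the function and variable names are ours.

```python
import numpy as np

def inverse_gap_weighting(fhat, gamma, tol=1e-12):
    """p(pi) = 1 / (lam + 2*gamma*(max(fhat) - fhat[pi])), normalized (sketch)."""
    fhat = np.asarray(fhat, dtype=float)
    gaps = fhat.max() - fhat              # fhat(pihat) - fhat(pi) >= 0
    lo, hi = 1.0, float(len(fhat))        # lam is guaranteed to lie in [1, A]
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.sum(1.0 / (lam + 2.0 * gamma * gaps)) > 1.0:
            lo = lam                      # total mass too large: increase lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return 1.0 / (lam + 2.0 * gamma * gaps)

p = inverse_gap_weighting([0.9, 0.7, 0.5, 0.1], gamma=10.0)
# p puts most mass on the greedy action; the rest decays with the estimated gap.
```

The epoched, offline-oracle variant announced above is as follows.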
Compared to the version of SquareCB for online oracles, the main change is that we update the estimation oracle and exploratory distribution on an epoched schedule as opposed to updating at every round. In addition, the parameter γ for the Inverse Gap Weighting distribution changes as a function of the epoch. While this algorithm is quite intuitive, proving a regret bound for it is quite non-trivial—much more so than for the online oracle variant. The key challenge is that, while the contexts x1,…,xT are i.i.d., the decisions π1,…,πT evolve in a time-dependent fashion, which makes it unclear how to invoke the guarantee in <ref>. Nonetheless, the following remarkable result shows that this algorithm attains a regret bound similar to that of <ref>. Let τ_m = 2^m and γ_m = √(AT/(, τ_(m-1), δ)) for m=1,2,…. Then with probability at least 1-δ, the regret of SquareCB with an offline oracle is at most ≲ ∑_m=1^⌈log T⌉ √(A·τ_m·(, τ_m, δ/m^2)). Under mild assumptions, the above bound scales as ≲ √(A·T·(, T, δ/log T)). For a finite class, we recall from <ref> that empirical risk with the square loss (least squares) achieves (,T,δ)≲log(/δ), which gives ≲ √(ATlog(/δ)).
§.§ Exercises
[Unstructured Contextual Bandits] Consider a contextual bandit problem with a finite set of possible contexts and a finite set of actions. Show that running UCB independently for each context yields a regret bound of the order O(√(|X|AT)) in expectation, ignoring logarithmic factors. In the setting where the class consists of all possible functions ×→[0,1] (i.e., is unstructured), this is essentially optimal.
[ε-Greedy with Offline Oracles] In prop:eps_greedy_cb, we analyzed the ε-Greedy contextual bandit algorithm assuming access to an online regression oracle. Because we appeal to online learning, this algorithm was able to handle adversarial contexts x1,…,xT. In the present problem, we will modify the ε-Greedy algorithm and proof to show that if contexts are stochastic (that is, xt is drawn i.i.d. from a fixed distribution for all t), ε-Greedy works even if we use an offline oracle (def:offline_regression_oracle). We consider the following variant of ε-Greedy. The algorithm proceeds in epochs m=0,1,… of doubling size: 2, {3, 4}, {5, …, 8}, …, {2^m+1, …, 2^(m+1)} (epoch m), …, {T/2+1, …, T}; we assume without loss of generality that T is a power of 2, and that an arbitrary decision is made on round t=1. At the end of each epoch m-1, the offline oracle is invoked with the data from the epoch, producing an estimated model f^m. This model is used for the greedy step in the next epoch m. In other words, for any round t∈[2^m+1, 2^(m+1)] of epoch m, the algorithm observes a context xt drawn from the fixed distribution, chooses an action πt∼unif([A]) with probability ε, and chooses the greedy action πt = _π∈[A] m(xt, π) with probability 1-ε. Subsequently, the reward rt is observed. * Prove that for any T∈ℕ and δ>0, by setting ε appropriately, this method ensures that with probability at least 1-δ, ≲ A^1/3 T^1/3 (∑_m=1^log_2 T 2^m/2 (, 2^(m-1), δ/m^2)^1/2)^(2/3). * Recall that for a finite class, ERM achieves (,T,δ)≲log(/δ). Show that with this choice, the above upper bound matches that in prop:eps_greedy_cb, up to factors logarithmic in T.
[Model Misspecification in Contextual Bandits] In prop:squarecb, we showed that for contextual bandits with a general class , SquareCB attains regret ≲ √(AT·(, T, δ)). To do so, we assumed that ∈, where (x,a) = _r∼(·| x,a)r; that is, we have a well-specified model. In practice, it may be unreasonable to assume that we have ∈. Instead, a weaker assumption is that there exists some function ∈ such that max_x∈,a∈ |(x,a)-(x,a)| ≤ ε for some ε>0; that is, the model is ε-misspecified (a tiny numeric illustration follows below).
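(Before the problem statement continues: here is the promised numeric illustration of ε-misspecification. The two-context, two-action class and all numbers below are made up for illustration only.)

```python
import numpy as np

# fstar: true mean-reward table over (context, action); F: a small class
# that does not contain fstar (illustrative numbers).
fstar = np.array([[0.50, 0.70],
                  [0.20, 0.90]])
F = [np.array([[0.50, 0.75], [0.25, 0.90]]),
     np.array([[0.10, 0.60], [0.40, 0.80]])]

# eps = min over f in F of the worst-case deviation from fstar.
eps = min(np.max(np.abs(f - fstar)) for f in F)
fbar = min(F, key=lambda f: np.max(np.abs(f - fstar)))  # best-in-class model
print(eps)  # 0.05: this class is 0.05-misspecified
```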
In this problem, we will generalize the regret bound for SquareCB to handle misspecification. Recall that in the lecture notes, we assumed (def:online_regression_oracle) that the regression oracle satisfies ∑_t=1^T _πt∼ pt[](t(xt,πt)-(xt,πt))^2≤(, T, δ).In the misspecified setting, this is too much to ask for. Instead, we will assume that the oracle satisfies the following guarantee for every sequence:∑_t=1^T(t(xt,πt)-rt)^2 - min_f∈∑_t=1^T(f(xt,πt)-rt)^2 ≤(,T).Whenever ∈, we have (,T,δ)(,T)+log(1/δ) with probability at least 1-δ. However, it is possible to keep (,T) small even when ∉. For example, the averaged exponential weights algorithm satisfies this guarantee with (,T)log, regardless of whether ∈. We will show that for every δ>0, with an appropriate choice of γ, SquareCB (that is, the algorithm that chooses pt=_γ(t(xt,·))) ensures that with probability at least 1-δ,√(A T·((, T)+log(1/δ))) + ·A^1/2T.Assume that all functions inand rewards take values in [0,1].* Show that for any sequence of estimators 1,…,t, by choosing pt=_γ(t(xt,·)), we have that= ∑_t=1^T_πt∼pt*(xt,(xt)) - (xt,πt)AT/γ + γ∑_t=1^T_πt∼pt*(t(xt,πt)-(xt,πt))^2 + T.If we had =, this would follow from prop:igw_mab, but the difference is that in general (≠),the expression above measures estimation error with respect to the best-in-class modelrather than the true model(at the cost of an extra T factor). * Show that the following inequality holds for every sequence∑_t=1^T(t(xt,πt)-(xt,πt))^2 ≤(,T) + 2∑_t=1^T(rt-(xt,πt))(t(xt,πt) - (xt,πt)). * Using Freedman's inequality (<ref>), show that with probability at least 1-δ,∑_t=1^T_πt∼pt*(t(xt,πt)-(xt,πt))^2≤ 2∑_t=1^T(t(xt,πt)-(xt,πt))^2 + (log(1/δ)). * Using Freedman's inequality once more, show that with probability at least 1-δ,2∑_t=1^T(rt-(xt,πt))(t(xt,πt) - (xt,πt)) ≤1/4∑_t=1^T_πt∼pt*(t(xt,πt)-(xt,πt))^2 + (^2T + log(1/δ)).Conclude that with probability at least 1-δ,∑_t=1^T_πt∼pt*(t(xt,πt)-(xt,πt))^2(,T) + ^2T + log(1/δ). * Combining the previous results, show that for any δ>0, by choosing γ>0 appropriately, we have that with probability at least 1-δ,√(A T·((, T)+log(1/δ))) + ·A^1/2T.§ STRUCTURED BANDITSUp to this point, we have focused our attention on bandit problems (with or without contexts) in which the decision space Π is a small, finite set. This section introduces the structured bandit problem, which generalizes the basic (non-contextual) multi-armed bandit problem by allowing for large, potentially infinite or continuous decision spaces. The protocol for the setting is as follows. This protocol is exactly the same as for multi-armed bandits (sec:mab), except that we have removed the restriction that Π=1,…,A, and now allow it to be arbitrary. This added generality is natural in many applications: * In medicine, the treatment may be a continuous variable, such as a dosage. The treatment could even by a high-dimensional vector (such as dosages for many different medications). See fig:structured_bandit.* In pricing applications, a seller might aim to select a continuous price or vector or prices in order to maximize their returns.* In routing applications, the decision space may be finite, but combinatorially large. For example, the decision might be a path or flow in a graph.Both contextual bandits and structured bandits generalize the basic multi-armed bandit problem, by incorporating function approximation and generalization, but in different ways: * The contextual bandit formulation in <ref> assumes structure in the context space. 
The aim here was to generalize across contexts, but we restricted the decision space to be finite (unstructured).* In structured bandits, we will focus our attention on the case of no contexts, but will assume the decision space is structured, and aim to generalize across decisions.Clearly, both ideas above can be combined, and we will touch on this in sec:structured_contexts.Assumptions and regret To build intuition as to what it means to generalize across decisions, and to give a sense for what sort of guarantees we might hope to prove, let us first give the formal setup for the structured bandit problem. As in preceding sections, we will assume that rewards are stochastic, and generated from a fixed model. [Stochastic Rewards]Rewards are generated independently viart∼(·|πt),where (·|·) is the underlying model.We define(π) [r|π]as the mean reward function under r∼(·|π), and measure regret via∑_t=1^T() - ∑_t=1^T_πt∼pt(πt).Here, _π∈Π(π) as usual. We will define the history as t=(π1,r1),…,(rt,πt). Function approximationA first attempt to tackle the structured bandit problem might be to apply algorithms for the multi-armed bandit setting, such as UCB. This would give regret (√(ΠT)), which could be vacuous if Π is large relative to T. However, with no further assumptions on the underlying reward function , this is unavoidable. To allow for better regret, we will make assumptions on the structure ofthat will allow us to share information across decisions, and to generalize to decisions that we may not have played. This is well-suited for the applications described above, where Π is a continuous set (e.g., Π⊆^d), but we expectto be continuous, or perhaps even linear with respect some well-designed set of features. To make this idea precise, we follow the same approach as in statistical learning and contextual bandits, and assume access to a well-specified function classthat aims to capture our prior knowledge about .The decision-maker has access to a class ⊂{f:Π→} such that ∈. Given such a class, a reasonable goal—particularly in light of the development in <ref> and <ref>—would be to achieve guarantees that scale with the complexity of supervised learning or estimation with , e.g. log for finite classes; this is what we were able to achieve for contextual bandits, after all. Unfortunately, this is too good to be true, as the following example shows. [Necessity of structural assumptions]Let Π=A, and let =f_i_i∈A, wheref_i(π)1/2 + 1/2π=i.It is clear that one needs A for this setting, yet log=log(A), so a regret bound of the form √(Tlog) is not possible if A is large relative to T.What this example highlights is that generalizing across decisions is fundamentally different (and, in some sense, more challenging) than generalizing across contexts. In light of this, we will aim for guarantees that scale with log, but additionally scale with an appropriate notion of complexity of exploration for the decision space Π. Such a notion of complexity should reflect how much information is shared across decisions, which depends on the interplay between Π and . §.§ Building Intuition: Optimism for Structured Bandits Our goal is to obtain regret bounds for structured bandits that reflect the intrinsic difficulty of exploring the decision space Π, which should reflect the structure of the function classunder consideration. 
To build intuition as to what such guarantees will look like, and how they can be obtained, we first investigate the behavior of the optimism principle and the UCB algorithm when applied to structured bandits. We will see that: * UCB attains guarantees that scale with log, and additionally scale with a notion of complexity called the eluder dimension, which is small for simple problems such as bandits with linear rewards. * In general, UCB is not optimal, and can have regret that is exponentially large compared to the optimal rate.
§.§.§ UCB for Structured Bandits
We can adapt the UCB algorithm from multi-armed bandits to structured bandits by appealing to least squares and confidence sets, similar to the approach we took for contextual bandits <cit.>. Assume = {f:Π→[0,1]} and rt∈[0,1] almost surely. Let t = _f∈ ∑_i=1^t-1 (f(πi)-ri)^2 be the empirical minimizer on round t, and with β = 8log(||/δ), define confidence sets 1 =  and t = {f∈: ∑_i=1^t-1 (f(πi) - ri)^2 ≤ ∑_i=1^t-1 (t(πi) - ri)^2 + β}. Defining t(π) = max_f∈t f(π) as the upper confidence bound, the generalized UCB algorithm is given by πt = _π∈Π t(π) (a code sketch of this scheme appears below). When does the confidence width shrink? Using prop:linucb, one can see that the generalized UCB algorithm ensures ∈t for all t with high probability. Whenever this happens, regret is bounded by the upper confidence width: ≤ ∑_t=1^T t(πt)-(πt). This bound holds for all structured bandit problems, with no assumption on the structure of Π and the class. Hence, to derive a regret bound, the only question we need to answer is: when will the confidence widths shrink? For the unstructured multi-armed bandit, we need to shrink the width for every arm separately, and the best bound on eq:width_structured we can hope for is (√(ΠT)). One might hope that if Π and the class have nice structure, we can do better. In fact, we have already seen one such case: for linear models, where = {π↦θ,ϕ(π) | θ∈Θ⊂_2^d(1)}, prop:linucb shows that we can bound eq:width_structured by √(dTlog). Here, the number of decisions Π is replaced by the dimension d, which reflects the fact that there are only d truly unique directions to explore before we can start extrapolating to new actions. Is there a more general version of this phenomenon when we move beyond linear models?
§.§.§ The Eluder Dimension
The eluder dimension <cit.> is a complexity measure that aims to capture the extent to which the function class facilitates extrapolation (i.e., generalization to unseen decisions), and gives a generic way of bounding the confidence width in eq:width_structured. It is defined for a class as follows. Let ⊂(→) and :Π→ be given, and define _(,) as the length of the longest sequence of decisions 1,…,d∈ such that for all t∈[d], there exists ft∈ such that |ft(t)-(t)| > ε, and ∑_i<t(ft(i)-(i))^2 ≤ ε^2. The eluder dimension is defined as _(,)=sup_'≥_(,')∨1. We abbreviate (,) = max_∈_(,). The intuition behind the eluder dimension is simple: it asks, for a worst-case sequence of decisions, how many times we can be “surprised” by a new decision πt if we can estimate the underlying model well on all of the preceding points. In particular, if we form confidence sets as in eq:confidence_set_structured with β=ε^2, then the number of times the upper confidence width in <ref> can be larger than ε is at most _(,). We consider the definition _(,)=sup_'≥_(,')∨1 instead of directly working with _(,) to ensure monotonicity with respect to ε, which will be useful in the proofs that follow. The following result gives a regret bound for UCB for generic structured bandit problems.
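Here is the promised sketch of the generalized UCB scheme, for a finite class and finite decision space. The environment interface is hypothetical, and the per-round enumeration of the class is for clarity, not efficiency.

```python
import numpy as np

def generalized_ucb(F, env, T, delta):
    """Generalized UCB via least squares and confidence sets (sketch).

    F: list of reward vectors f (numpy arrays indexed by decisions pi).
    env.pull(pi): stochastic reward in [0, 1] with mean fstar[pi].
    """
    beta = 8.0 * np.log(len(F) / delta)
    history = []                                         # pairs (pi_i, r_i)
    for t in range(T):
        sq = lambda f: sum((f[pi] - r) ** 2 for pi, r in history)
        fhat = min(F, key=sq)                            # least-squares fit over F
        conf = [f for f in F if sq(f) <= sq(fhat) + beta]  # confidence set F^t
        fbar = np.max(conf, axis=0)                      # pointwise upper confidence bound
        pi = int(np.argmax(fbar))                        # optimistic decision pi_t
        history.append((pi, env.pull(pi)))
```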
The regret bound has no dependence on the size of the decision space, and scales only with (,) and log. For a finite set of functions ⊂(Π→0,1), using β = 8log(||/δ), the generalized UCB algorithm guarantees that with probability at least 1-δ,≲min_>0*√((,)·Tlog(/δ)) + T √((,T^-1/2)·Tlog(/δ)). For the case of linear models in eq:linear_structured, it is possible to use the elliptic potential lemma (lem:elliptic_potential) to show that(,)dlog(^-1).For finite classes, this gives √(dTlog(/δ)log(T)), which recovers the guarantee in prop:linucb. Another well-known example is that of generalized linear models. Here, we fix link function σ:-1,+1→ and define=*π↦σ[]θ,ϕ(π)|θ∈Θ⊂_2^d(1).This is a more flexible model than linear bandits. A well-known special case is the logistic bandit problem, where σ(z)=1/(1+e^-z). One can show <cit.> that for any choice of σ, if there exist μ,L>0 such that μ<σ'(z)<L for all z∈-1,+1, then(,) L^2/μ^2·dlog(^-1).This leads to a regret bound that scales with L/μ√(dTlog), generalizing the regret bound for linear bandits.In general, the eluder dimension can be quite large. Consider the generalized linear model setup above with σ(z)=+(z) or σ(z)=-(z) (either choice of sign works), where (z)maxz,0 is the ReLU function; this can be interpreted as a neural network with a single neuron. Here, we can have σ'(z)=0, so eq:eluder_glm does not apply, and it turns out <cit.> that (,)e^dfor constant . That is, even for a single ReLU neuron, the eluder dimension is alreadyexponential, which is a bit disappointing. Fortunately, we will show in the sequel that the eluder dimension can be overly pessimistic, and it is possible to do better, but this will require changing the algorithm.Definet = *f∈|∑_i<t(f(πi)-(πi))^2 ≤ 4β.By lem:valid_CI, we have that with probability at least 1-δ, for all t: * ∈t.* t⊆t.Let us condition on this event. As in lem:regret_optimistic, since ∈t, we can upper bound≤∑_t=1^Tt(πt)-(πt).Now, definewt(π) = sup_f∈t*f(π)-(π),which is a useful upper bound on the upper confidence width at time t. Since t⊆t, we have≤∑_t=1^Twt(πt). We now appeal to the following technical lemma concerning the eluder dimension. Fix a function class , function ∈, and parameter β>0. For any sequence π1,…,πT, if we definewt(π) = sup_f∈*f(π)-(π) : ∑_i<t(f(πi)-(πi))^2≤β,then for all α>0,∑_t=1^Twt(πt)>α≤*β/α^2+1·_(,α).Note that for the special case where β=α^2, the bound in lem:eluder_indicator_bound immediately follows from the definition of the eluder dimension. The point of this lemma is to show that a similar bound holds for all scales α simultaneously, but with a pre-factor β/α^2 that grows large when α^2≪β. To apply this result, fix >0, and bound∑_t=1^Twt(πt) ≤∑_t=1^Twt(πt)wt(πt)> + T.Let us order the indices 1,…,T as i_1,…,i_T, so that wi_1(πi_1)≥wi_2(πi_2)≥…≥wi_τ(πi_τ). Consider any index τfor which wi_τ(πi_τ)>. For any α>, if we have wi_τ(πi_τ)>α, then lem:eluder_indicator_bound (since α≤1≤β) implies thatτ≤∑_t=1^Twt(πt)>α≤*4β/α^2+1_(,α)≤5β/α^2_(,α).Since we have restricted to α≥ and α↦_(,α) is decreasing, rearranging yieldswi_τ(πi_τ) ≤√(5β/τ).With this, we can bound the main term in eq:eluder_width_decomp by∑_t=1^Twt(πt)wt(πt)>∑_t=1^T√(β/t)√(βT).Combining this with eq:eluder_width_decomp gives √(βT)+T. Since >0 was arbitrary, we are free to minimize over it. Let us adopt the shorthand d=_(,α). We begin with a definition. We say π is α-independent of π1,…,πt if there exists f∈ such that *f(π)-(π)>α and ∑_i=1^t*f(πi)-(πi)^2≤α^2. 
We say π is α-dependent on π1,…,πt if for all f∈ with ∑_i=1^t(f(πi)-(πi))^2≤α^2, |f(π)-(π)|≤α. We first claim that for any t, if wt(πt)>α, then πt is α-dependent on at most β/α^2 disjoint subsequences of π1,…,πt-1. Indeed, let f be such that |f(πt)-(πt)|>α. If πt is α-dependent on a particular subsequence πi_1,…,πi_k but wt(πt)>α, we must have ∑_j=1^k(f(πi_j)-(πi_j))^2≥α^2. If there are M such disjoint subsequences, we have Mα^2 ≤ ∑_i<t(f(πi)-(πi))^2 ≤ β, so M≤β/α^2. Next, we claim that for any τ and any sequence (π1,…,πτ), there is some j such that πj is α-dependent on at least τ/d disjoint subsequences of π1,…,πj-1. Let N=τ/d, and let B_1,…,B_N be subsequences of π1,…,πτ. We initialize with B_i = {πi}. If πN+1 is α-dependent on B_i={πi} for all 1≤i≤N we are done. Otherwise, choose i such that πN+1 is α-independent of B_i, and add it to B_i. Repeat this process until we reach j such that either πj is α-dependent on all B_i or j=τ. In the first case we are done, while in the second case, we have ∑_i=1^N|B_i| ≥ τ ≥ dN. Moreover, |B_i|≤d, since each πj∈B_i is α-independent of its prefix (this follows from the definition of the eluder dimension). We conclude that |B_i|=d for all i, so in this case πτ is α-dependent on all B_i. Finally, let (πt_1,…,πt_τ) be the subsequence of π1,…,πT consisting of all elements for which wt_i(πt_i)>α. Each element of the sequence is dependent on at most β/α^2 disjoint subsequences of (πt_1,…,πt_τ), and by the argument above, one element is dependent on at least τ/d disjoint subsequences, so we must have τ/d ≤ β/α^2, which implies that τ ≤ (β/α^2+1)d.
§.§.§ Suboptimality of Optimism
The following example shows a function class for which the regret experienced by UCB is exponentially large compared to the regret obtained by a simple alternative algorithm. This shows that while the algorithm is useful for some special cases, it does not provide a general principle that attains optimal regret for any structured bandit problem. [Cheating Code <cit.>] Let A be a power of 2 and consider the following function class. * The decision space is Π=A∪, where =c_1,…,c_log_2(A) is a set of “cheating” actions. * For all actions π∈A, f(π)∈[0,1] for all f∈, but we otherwise make no assumption on the reward. * For each f∈, rewards for the cheating actions take the following form. Let π_f∈A denote the action in A with highest reward. Let b(f)=(b_1(f),…,b_log_2(A)(f))∈{0,1}^log_2(A) be a binary encoding for the index of π_f∈A (e.g., if π_f=1, b(f)=(0,0,…,0), if π_f=2, b(f)=(0,0,…,0,1), and so on). For each action c_i∈, we set f(c_i) = -b_i(f). The idea here is that if we ignore the cheating actions, this looks like a standard multi-armed bandit problem, and the optimal regret is Θ(√(AT)). However, we can use the cheating actions to “cheat” and get an exponential improvement in sample complexity. The argument is as follows, with a code sketch just below. Suppose for simplicity that rewards are Gaussian with r∼((π),1) under π. For each cheating action c_i∈, since (c_i)=-b_i()∈{0,-1}, we can determine whether the value is b_i()=0 or b_i()=1 with high probability using (1) action pulls. If we do this for each c_i∈, which will incur (log(A)) regret (there are log(A) such actions and each one leads to constant regret), we can infer the binary encoding b()=b_1(),…,b_log_2(A)() for the optimal action with high probability. At this point, we can simply stop exploring, and commit to playing it for the remaining rounds, which will incur no more regret.
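Here is the promised sketch of the cheating strategy, under the Gaussian-reward simplification above. The per-bit repeat count n and the environment interface are illustrative assumptions, not part of the construction itself.

```python
import numpy as np

def cheat_then_commit(env, A, T, n=32):
    """Decode the optimal arm's binary index from the cheating actions, then commit.

    env.pull_cheat(i): reward ~ N(-b_i(fstar), 1) for cheating action c_i.
    env.pull(a):       reward of arm a among the A "real" arms.
    """
    k = int(np.log2(A))
    bits = []
    for i in range(k):                        # O(log A) exploratory rounds in total
        mean = np.mean([env.pull_cheat(i) for _ in range(n)])
        bits.append(1 if mean < -0.5 else 0)  # decide b_i = 1 vs b_i = 0
    pistar = int("".join(map(str, bits)), 2)  # decode the index (0-based here;
                                              # the text's encoding is 1-based)
    for _ in range(T - n * k):
        env.pull(pistar)                      # commit: no further regret
```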
If one is careful with the details, this gives that with probability at least 1-δ, the regret is at most log^2(A/δ) (up to constants). In other words, by exploiting the cheating actions, our regret has gone from linear to logarithmic in A (we have also improved the dependence on T, which is a secondary bonus). Now, let us consider the behavior of the generalized UCB algorithm. Unfortunately, since all actions c_i∈ have f(c_i)≤0 for all f∈, we have t(c_i)≤0. As a result, the generalized UCB algorithm will only ever pull actions in A, ignoring the cheating actions and effectively turning this into a vanilla multi-armed bandit problem, which means the regret scales as √(AT). This example shows that UCB can behave suboptimally in the presence of decisions that reveal useful information but do not necessarily lead to high reward. Since the “cheating” actions are guaranteed to have low reward, UCB avoids them even though they are very informative. We conclude that: * Obtaining optimal sample complexity for structured bandits requires algorithms that more deliberately balance the tradeoff between optimizing reward and acquiring information. * In general, the optimal strategy for picking decisions can be very different depending on the choice of the class. This contrasts with the contextual bandit setting, where we saw that the Inverse Gap Weighting algorithm attained optimal sample complexity for any choice of class, and all that needed to change was how to perform estimation. Recall the Bayesian bandit setting in <ref>, where we showed that the posterior sampling algorithm attains regret (√(AT)) when Π=1,…,A. Posterior sampling is a general-purpose algorithm, and can be applied directly to arbitrary structured bandit problems (as long as a prior is available). However, similar to UCB, the cheating code construction in <ref> implies that posterior sampling is not optimal in general. Indeed, posterior sampling will never select the cheating arms, as these have sub-optimal reward for all models in the class. As a result, the Bayesian regret of the algorithm will scale with √(AT) for a worst-case prior.
§.§ The Decision-Estimation Coefficient
The discussion in the prequel highlights two challenges in designing algorithms and understanding sample complexity for structured bandits: 1) the optimal regret (in a sense, the complexity of exploration) can depend on the class in a subtle, sometimes surprising fashion, and 2) the algorithms required to achieve optimal regret can depend heavily on the choice of the class. In light of these challenges, it is natural to ask whether it is possible to have any sort of unified understanding of the optimal regret. We will now show that the answer is yes, and this will be achieved by a single, general-purpose principle for algorithm design. The algorithm we will present in this section reduces the problem of decision making to that of supervised online learning/estimation, in a similar fashion to the SquareCB method for contextual bandits in sec:cb. To apply this method, we require the following oracle for supervised estimation. At each time t∈[T], an online regression oracle returns, given (π1,r1),…,(πt-1,rt-1) with ri|πi=(πi) and πi∼ pi, a function t:Π→ such that ∑_t=1^T _πt∼ pt(t(πt)-(πt))^2 ≤(, T, δ) with probability at least 1-δ. (A sketch of one such oracle follows.)
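Remarks on this definition continue just below. As a concrete instance, here is a minimal sketch of averaged exponential weights over a finite class; the learning rate η=1/2 and the interface are illustrative assumptions (function values and rewards in [0,1]).

```python
import numpy as np

class AveragedExpWeights:
    """Averaged exponential weights as an online regression oracle (sketch).

    predict(pi) returns the weighted-average prediction over the class;
    update(pi, r) re-weights each f by exp(-eta * (f(pi) - r)^2), which for a
    finite class yields square-loss estimation error on the order of log|F|.
    """
    def __init__(self, F, eta=0.5):
        self.F = F                           # list of callables pi -> [0, 1]
        self.eta = eta
        self.logw = np.zeros(len(F))         # log-weights, for numerical stability

    def predict(self, pi):
        w = np.exp(self.logw - self.logw.max())
        w /= w.sum()
        return float(sum(wi * f(pi) for wi, f in zip(w, self.F)))

    def update(self, pi, r):
        self.logw -= self.eta * np.array([(f(pi) - r) ** 2 for f in self.F])
```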
Here, pi(· | i-1) is the randomization distribution for the decision-maker. Recall, following the discussion in <ref>, that the averaged exponential weights algorithm is an online regression oracle with (,t,δ)≲log(/δ). The following algorithm, which we call Estimation-to-Decisions (E2D) <cit.>, is a general-purpose meta-algorithm for structured bandits. At each timestep t, the algorithm invokes an online regression oracle to obtain an estimator t using the data t-1=(π1,r1,…,πt-1,rt-1) observed so far. The algorithm then finds a distribution pt by solving a min-max optimization problem involving the estimator t and the class , then samples the decision πt from this distribution. (For a finite decision space and finite class, this min-max problem can be solved by linear programming; a code sketch follows the proof below.) The minimax problem in E2D is derived from a complexity measure (or structural parameter) called the Decision-Estimation Coefficient (DEC) <cit.>, whose value is given by (,) = min_p∈Δ()max_f∈_∼p[f()-f(π)_regret of decision -γ·(f(π)-(π))^2_information gain for obs.]. The DEC can be thought of as the value of a game in which the learner (represented by the min player) aims to find a distribution over decisions such that for a worst-case problem instance (represented by the max player), the regret of their decision is controlled by a notion of information gain (or estimation error) relative to a reference model. Conceptually, the reference model should be thought of as a guess for the true model, and the learner (the min player) aims to—in the face of an unknown environment (the max player)—optimally balance the regret of their decision with the amount of information they acquire. With enough information, the learner can confirm or rule out their guess, and the scale parameter γ controls how much regret they are willing to incur to do this. In general, the larger the value of (,), the more difficult it is to explore. To state a regret bound for E2D, we define () = sup_∈()(,). Here, () denotes the set of all convex combinations of elements in . The reason we consider the set () is that in general, online estimation algorithms such as exponential weights will produce improper predictions with ∈(). In fact, it turns out (see prop:dec_unconstrained) that even if we allow the reference model to be unconstrained above, the maximizer always lies in () without loss of generality. The main result for this section shows that the regret for E2D is controlled by the value of the DEC and the estimation error (,T,δ) for the online regression oracle. The E2D algorithm with exploration parameter γ>0 guarantees that with probability at least 1-δ, ≤ ()·T + γ·(,T,δ). We can optimize over the parameter γ in the result above, which yields ≤ inf_γ>0 []()·T + γ·(,T,δ) ≤ 2·inf_γ>0 max[]()·T, γ·(,T,δ). For finite classes, we can use the exponential weights method to obtain (,T,δ)≲log(/δ), and this bound specializes to inf_γ>0 max[]()·T, γ·log(/δ). As desired, this gives a bound on regret that scales only with: * the complexity log for estimation. * the complexity of exploration in the decision space, which is captured by (). Before interpreting the result further, we give the proof, which is a nearly immediate consequence of the definition of the DEC, and bears strong similarity to the proof of the regret bound for SquareCB (<ref>), minus contexts. We write = ∑_t=1^T_t∼pt*()-(t) = ∑_t=1^T_t∼pt*()-(t) - γ·_t∼pt*((πt)-t(πt))^2 + γ·(,T,δ). For each t, since ∈, we have _t∼pt*()-(t) - γ·_t∼pt*((πt)-t(πt))^2 ≤ sup_f∈*_t∼pt*f()-f(t) - γ·_t∼pt*(f(πt)-t(πt))^2 = inf_p∈Δ()sup_f∈_∼p*f()-f() - γ·(f(πt)-t(πt))^2 = (,t), where the first equality above uses that pt is chosen as the minimizer for (,t). Summing across rounds, we conclude that ≤ sup_(,)·T + γ·(,T,δ).
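As noted above, for a finite decision space and finite class the min-max problem defining pt is a linear program, since for each fixed f the objective is linear in p. A minimal sketch using scipy; the helper names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def e2d_distribution(F, fhat, gamma):
    """Solve min_p max_{f in F} E_{pi~p}[ f(pi_f) - f(pi)
                                          - gamma * (f(pi) - fhat(pi))^2 ].

    F: (n, A) array of candidate reward vectors; fhat: (A,) estimate.
    LP variables: (p_1, ..., p_A, v); minimize v s.t. C_f . p <= v for all f.
    """
    n, A = F.shape
    # Row f of C is the per-decision payoff of the max player against f.
    C = F.max(axis=1, keepdims=True) - F - gamma * (F - fhat) ** 2
    c = np.zeros(A + 1); c[-1] = 1.0                   # objective: minimize v
    A_ub = np.hstack([C, -np.ones((n, 1))])            # C p - v <= 0
    b_ub = np.zeros(n)
    A_eq = np.ones((1, A + 1)); A_eq[0, -1] = 0.0      # sum_pi p(pi) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * A + [(None, None)])
    return res.x[:A], res.x[-1]    # p_t, plus a numeric certificate of the DEC value
```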
When designing algorithms for structured bandits, a common challenge is that the connection between decision making (where the learner's decisions influence what feedback is collected) and estimation (where data is collected passively) may not seem apparent a-priori. The power of the is that it—by definition—provides a bridge, which the proof of prop:dec_bandit highlights. One can select decisions by building an estimate for the model using all of the observations collected so far, then sampling from the distribution p that solves eq:dec_structured with the estimated reward functionplugged in. Boundedness of the implies that at every round, any learner using this strategy either enjoys small regret or acquires information, with their total regret controlled by the cumulative online estimation error. Example: Multi-Armed Bandit Of course, the perspective above is only useful if the is indeed bounded, which itself is not immediately apparent. In <ref>, we will show that boundedness of the is not just sufficient, but in fact necessary for low regret in a fairly strong quantitative sense.For now, we will build intuition about the through examples. We begin with the multi-armed bandit, where Π=A and =^A.Our first result shows that ()≤A/γ, and that this is achieved with the Inverse Gap Weighting method introduced in sec:cb. For the Multi-Armed Bandit setting, where Π=A and =^A, the Inverse Gap Weighting distribution p=_4γ() in eq:igw is the exact minimizer for (,), and certifies that (,)=A-1/4γ.By rewriting prop:igw_mab, it is straightforward to deduce that the is bounded by A/γ, but prop:igw_exact shows thatis actually the best possible distribution for this minimax problem. In this sense, the algorithm can be seen as a (contextual) special case of the principle. Note that to attain the exact optimal value (instead of a bound that is optimal up to constants), we use _4γ as opposed_γ as in prop:igw_mab; the reason why this choice for γ is optimal is related to the fact that the inequality xy≤x^2+1/4y^2 is tight in general. We rewrite the minimax problem asmin_p∈Δ(A)max_f∈^A_π∼p*f(π_f)-f(π)-γ(f(π)-(π))^2=min_p∈Δ(A)max_f∈^Amax_∈A_π∼p*f()-f(π)-γ(f(π)-(π))^2=min_p∈Δ(A)max_∈Amax_f∈^A_π∼p*f()-f(π)-γ(f(π)-(π))^2.For any fixed p and , first-order conditions for optimality imply that the choice for f that maximizes this expression isf(π) = (π) - 1/2γ + 1/2γp()π=.This choice gives_π∼p*f() - f(π) = _π∼p*() - (π) + 1-p()/2γp()andγ_π∼p*(f(π)-(π))^2 = 1-p()/4γ + (1-p())^2/4γp() = 1/4γp() - 1/4γ.Plugging in and simplifying, we compute that the original minimax game is equivalent tomin_p∈Δ(A)max_∈A*_π∼p*()-(π) + 1/4γp() - 1/4γ.Finishing the proof: Ad-hoc approach. Observe that for any p∈Δ(Π), we havemax_∈A*_π∼p*()-(π) + 1/4γp()≥_∼p*_π∼p*()-(π) + 1/4γp() = A/4γ,so no p can attain value better than A/4γ. If we can show that IGW achieves this value, we are done.Observe that by setting p=_4γ(), we have that for all ,_π∼p*()-(π) + 1/4γp() = _π∼p*()-(π) + λ/4γ + () - () = _π∼p*()-(π) + λ/4γ.Note that the value on the right-hand side is independent of . That is, the inverse gap weighting distribution is an equalizing strategy. This means that for this choice of p, we havemax_∈A*_π∼p*()-(π) + 1/4γp() =min_∈A*_π∼p*()-(π) + 1/4γp()=_∼p*_π∼p*()-(π) + 1/4γp()= A/4γ.Hence, p=_4γ() achieves the optimal value. Finishing the proof: Principled approach. We begin by relaxing to p∈^A_+. 
Defineg_(p) = () + 1/4γp().Let ν∈ be a Lagrange multiplier and p∈_+^A, and consider the Lagrangian(p,ν) = g_(p) - ∑_πp(π)(π) + ν*∑_πp(π)-1.By the KKT conditions, if we wish to show that p∈Δ(Π) is optimal for the objective in eq:igw_simplified, it suffices to find ν such that[If p∈Δ(Π), the KKT condition that d/dν(p,ν)=0 is already satisfied.]0∈∂_p(p,ν),where ∂_p denotes the subgradient with respect to p. Recall that for a convex function h(x)=max_yg(x,y), we have ∂_xh(x)=(*g(x,y)|g(x,y)=max_y'g(x,y')). As a result, ∂_p(p,ν) = ν1 -+ (_pg_(p)|g_(p)=max_π'g_π'(p)).Now, let p=_4γ(). We will argue that 0∈∂_p(p,ν) for an appropriate choice of ν. By eq:igw_equalizing, we know that g_π(p)=g_π'(p) for all π,π' (p is equalizing), so the expression above simplifies to∂_p(p,ν) = ν1 -+ (_pg_(p)_∈Π).Noting that _pg_(p)=-1/4γp^2()e_, wecomputeδ∑_πp(π)g_π(p) = *-1/4γp(π)_π∈Π = *-λ/4γ - () + (π)_π∈Π,which has δ∈(_pg_(p)_∈Π). By choosing ν=λ/4γ+(), we haveν1 -+ δ = 0,so eq:igw_subgradient is satisfied. §.§ : Examples We now show how to bound the for a number of examples beyond finite-armed bandits—some familiar and others new—and show how this leads to bounds on regret via . Approximately solving theBefore proceeding, let us mention that to apply , it is not necessary to exactly solve the minimax problem eq:dec_structured. Instead, let us say that a distribution p=p(,γ) certifies an upper bound on the if, givenand γ>0, it ensures thatsup_f∈_π∼p* f() - f(π) - γ·(f(π)-(π))^2 ≤(,)for some known upper bound (,)≥(,). In this case, letting ()sup_(,), it is simple to see that if we use this distribution pt=p(t,γ) within , we have≤()·T + γ·(,T,δ).§.§.§ Cheating Code For a first example, we show that the leads to regret bounds that scale with log(A) for the cheating code example in <ref>; that is, unlike UCB and posterior sampling, the correctly adapts to the structured of this problem.Consider the cheating code in ex:cheating_code. For this class , we have() log_2(A)/γ. Note that while the strategy p in prop:cheating certifies a bound on the , it is not necessarily the exact minimizer, and hence the distributions p1,…,pT played by may be different. Nonetheless, since the regret of is bounded by the , this result (via prop:dec_bandit) implies that its regret is bounded by √(log_2(A)Tlog). Using a slightly more refined version of the algorithm <cit.>, one can improve this to match the log(T) regret bound given in ex:cheating_code. To simplify exposition, we present a bound on (,) for this example only for ∈, not for ∈(). A similar approach (albeit with a slightly different choice for p) leads to the same bound on ().Let ∈ and γ>0 be given, and define p = (1-) + ·().We will show that if we choose =2log_2(A)/γ, this strategy certifies that(,) log_2(A)/γ.Let f∈ be fixed, and consider the value_∼p*f()-f(π) -γ·(f(π)-(π))^2 .We consider two cases. First the first, if π_f=, then we can upper bound_∼p*f()-f(π) -γ·(f(π)-(π))^2 ≤_∼p*f()-f(π) =_∼p*f()-f(π)≤2,since f∈-1,1. For the second case, suppose that π_f≠π_. We begin by bounding _∼p*f()-f(π) -γ·(f(π)-(π))^2 ≤ 2 -γ·_∼p*(f(π)-(π))^2,using that f∈-1,1. To proceed, we want to argue that the negative offset term above is sufficiently large; informally, this means that we are exploring “enough”. Observe that since π_f ≠π_, if we let b_1,…,b_log_2(A) and b'_1,…,b'_log_2(A) denote the binary representations for π_f and π_, there exists i such that b_i≠b'_i. 
As a result, we have_∼p*(f(π)-(π))^2≥/log_2(A)(f(c_i)-(c_i))^2 = /log_2(A)(b_i-b'_i)^2 = /log_2(A).We conclude that in the second case,_∼p*f()-f(π) -γ·(f(π)-(π))^2 ≤ 2- γ/log_2(A). Putting the cases together, we have_∼p*f()-f(π) -γ·(f(π)-(π))^2 ≤max*2, 2- γ/log_2(A).To balance these terms, we set= 2log_2(A)/γ,which leads to the result. §.§.§ Linear Bandits We next consider the problem of linear bandits linear bandit <cit.>, which is a special case of the linear contextual bandit problem we saw in sec:cb. We letbe arbitrary, and define =*↦θ,ϕ()|θ∈Θ, where Θ⊆_2^d(1) is a parameter set and ϕ:Π→_2^d(1) is a fixed feature map that is known to the learner. To prove bounds on the for this setting, we make use of a primitive from convex analysis and experimental design known as the G-optimal design.For any compact set ⊆^d with dim span()=d, there exists a distribution p∈Δ(), called the G-optimal design, which hassup_z∈[]Σ_p^-1z,z≤d,where Σ_p_z∼p*zz^. The G-optimal design ensures coverage in every direction of the decision space, generalizing the notion of uniform exploration for finite action spaces. In this sense, it can be thought of as a “universal” exploratory distribution for linearly structured action spaces. Special cases include: * When =Δ(A), we can take p=(e_1,…,e_A) as an optimal design* When =_2^d(1), we can again take p=(e_1,…,e_A) as an optimal design.* For any positive definite matrix A0, the set =*z∈^d|Az,z≤1 is an ellipsoid. Letting λ_1,…,λ_d and v_1,…,v_d denote the eigenvalues and eigenvectors for A, respectively, the distribution p=(λ_1^-1/2v_1,…,λ_d^-1/2v_d) is an optimal design. To see how the G-optimal design can be used for exploration, consider the following generalization of the -greedy algorithm. * Let q∈Δ(Π) be the G-optimal design for the set ϕ(π)_π∈Π.* At each step t, obtain t from a supervised estimation oracle. Play t=π_t with probability 1-, and sample πt∼q otherwise.It is straightforward to show that this strategy gives d^1/3T^2/3log for linear bandits. The basic idea is to replace eq:eps_greedy_min_probability in the proof of prop:eps_greedy_cb with the optimal design property eq:optimal_design, using that the reward functions under consideration are linear. The intuition is that even though we are no longer guaranteed to explore every single action with some minimum probability, by exploring with the optimal design, we ensure that some fraction of the data we collect covers every possible direction in action space to the greatest extent possible.The following result shows that by combining optimal design inverse gap weighting, we can obtain a d/γ bound on the , which leads to an improved √(dT) regret bound. Consider the linear bandit setting. Let a linear functionand γ>0 be given, consider the following distribution p: * Define (π) = ϕ(π)/√(1+γ/d[]()-(π)), where =_π∈Π(π).* Let ∈Δ(Π) be the G-optimal design for the set (π)_π∈Π, and define q=1/2 + 1/2_.* For each π∈Π, setp(π) = q(π)/λ+γ/d(()-(π)),where λ∈1/2,1 is chosen such that ∑_πp(π)=1.[The normalizing constant λ always exists because we have 1/2λ≤∑_πp(π)≤1/λ.]This strategy certifies that() d/γ. One can show that ()d/γ for this setting as well, so this is the best bound we can hope for. Combining this result with prop:dec_bandit and using the averaged exponential weights algorithm for estimation as in <ref> gives √(dTlog(/δ)).Fix f∈. Let us abbreviate η=γ/d. 
As in prop:igw_mab, we break the regret into three terms: _π∼p[f()-f(π)] = _π∼p[()-(π)]_(I) exploration bias + _π∼p[(π)-f(π)]_(II) est. error on policy + f()-()_(III) est. error at opt. The first term captures the loss in exploration that we would incur if the estimated model were the true reward function, and is equal to ∑_π q(π)(() - (π))/(λ + η(() - (π))) ≤ ∑_π q(π)/η ≤ 1/η, and the second term, as before, is at most √(_π∼p[]((π)-f(π))^2) ≤ 1/2γ + γ/2·_π∼p((π)-f(π))^2. The third term can be written as (III) = f()-() - (()-()) = []θ-,ϕ() - (()-()), where θ,∈Θ are parameters such that f(π)=θ,ϕ(π) and (π)=,ϕ(π). Defining Σ_p=_π∼p*ϕ(π)ϕ(π)^⊤, we can bound []θ-,ϕ() = []Σ_p^1/2(θ-),Σ_p^-1/2ϕ() ≤ Σ_p^1/2(θ-)_2·Σ_p^-1/2ϕ()_2 ≤ γ/2Σ_p^1/2(θ-)_2^2 + 1/2γΣ_p^-1/2ϕ()_2^2. Note that Σ_p^1/2(θ-)_2^2 = _π∼p((π)-f(π))^2 and Σ_p^-1/2ϕ()_2^2 = ϕ(),Σ_p^-1ϕ(), so we have (III) ≤ γ/2·_π∼p((π)-f(π))^2 + 1/2γϕ(),Σ_p^-1ϕ() - (()-()). To proceed, observe that Σ_p ⪰ 1/2∑_π (π)/(λ + η(()-(π)))·ϕ(π)ϕ(π)^⊤ ⪰ 1/2∑_π (π)/(1 + η(()-(π)))·ϕ(π)ϕ(π)^⊤ = 1/2∑_π (π)(π)(π)^⊤ = 1/2Σ_, where the first step uses that the mixture q places mass at least half of the design distribution on each π, and the second uses λ≤1. This means that we can bound ϕ(),Σ_p^-1ϕ() ≤ 2ϕ(),_^-1ϕ() = 2(1+η(()-()))·(),_^-1() ≤ 2d(1+η(()-())), where the last step uses that the design is the G-optimal design for {(π)}_π∈Π. We conclude that (III) ≤ 2d/2γ + 2dη/2γ·(()-()) - (()-()) = d/γ + (()-()) - (()-()) = d/γ, where the final equality uses η=γ/d, so the middle terms cancel. In fact, it can be shown <cit.> that when Θ=^d, the exact minimizer of the DEC for linear bandits is given by p = _p∈Δ(){_∼p[]() + 1/4γ·logdet(_∼pϕ()ϕ()^⊤)}.
§.§.§ Nonparametric Bandits
For all of the examples so far, we have shown that () ≲ (,Π)/γ, where (,Π) is some quantity that (informally) reflects the amount of exploration required for the class under consideration (A for bandits, log_2(A) for the cheating code, and d for linear bandits). In general though, the DEC does not always shrink at a γ^-1 rate, and can have slower decay for problems where the optimal rate is worse than √(T). We now consider such a setting: a standard nonparametric bandit problem called Lipschitz bandits in metric spaces <cit.>. We take Π to be a metric space equipped with metric ρ, and define = {f:Π→[0,1] | f is 1-Lipschitz w.r.t. ρ}. We give a bound on the DEC which depends on the covering number for the space Π (with respect to the metric ρ). Let us say that Π'⊆Π is an ε-cover with respect to ρ if ∀π∈Π ∃π'∈Π' s.t. ρ(π,π')≤ε, and let (Π,ε) denote the size of the smallest such cover. Consider the Lipschitz bandit setting, and suppose that there exists d>0 such that (Π,ε)≤ε^-d for all ε>0. Let :Π→[0,1] and γ≥1 be given, and consider the following distribution: * Let Π'⊆Π witness the covering number (Π,ε) for a parameter ε>0. * Let p be the result of applying the inverse gap weighting strategy in eq:igw to , restricted to the (finite) decision space Π'. By setting ε∝γ^-1/(d+1), this strategy certifies that (,) ≲ γ^-1/(d+1). Ignoring dependence on (,T,δ), this result leads to regret bounds that scale as T^(d+1)/(d+2) (after tuning γ in <ref>), which cannot be improved. Let f∈ be fixed. Let Π' be the ε-cover for Π. Since f is 1-Lipschitz, for all π∈Π there exists a corresponding covering element (π)∈Π' such that ρ(π,(π))≤ε, and consequently for any distribution p, _∼p*f()-f() ≤ _∼p*f(())-f() + f()-f(()) ≤ _∼p*f(())-f() + ρ(,()) ≤ _∼p*f(())-f() + ε. At this point, since ()∈Π', prop:igw_mab ensures that if we choose p using inverse gap weighting over Π', we have _∼p*f(())-f() ≤ |Π'|/γ + γ·_∼p*(f()-())^2. From our assumption on the growth of (Π,ε), |Π'|≤ε^-d, so the value is at most ε + ε^-d/γ. We choose ε∝γ^-1/(d+1) to balance the terms, leading to the result.
§.§.§ Further Examples
We state the following additional upper bounds on the DEC without proof; details can be found in <cit.>. [subsumes Eluder Dimension] Consider any class with values in [0,1].
For allγ≥e, we have() inf_>0* + (-,)log^2(γ)/γ + γ^-1.As a special case, this example implies that enjoys a regret bound for generalized linear bandits similar to that of UCB. [Bandits with Concave Rewards] The concave (or convex, if one considers losses rather than rewards) bandit problem <cit.> is a generalization of the linear bandit. We take ⊆_2^d(1) and define=*f:→0,1|f is concave and 1-Lipschitz w.r.t _2.For this setting, whenever ⊆(Π→0,1), results of <cit.> imply that() d^4/γ·(d,γ)for all γ>0.For the function class=*f(π)=-(ϕ(π),θ)|θ∈Θ⊂_2^d(1),(<ref>) leads to a √((d)T) regret bound for . This highlights a case where the Eluder dimension is overly pessimistic, since we saw that it grows exponentially for this class. §.§ Relationship to Optimism and Posterior Sampling We close this section by highlighting some connections between the and and other techniques we have covered so far: Optimism (UCB) and Posterior Sampling. Additional connections to optimism can be found in <ref>.§.§.§ Connection to OptimismThe meta-algorithm and the can be combined with the idea of confidence sets that we used in the UCB algorithm. Consider the following variant of .This strategy is the same as the basic algorithm, except that at each step, we compute a confidence set t and modify the minimax problem so that the max player is restricted to choose f∈t.[Note that compared to the confidence sets used in UCB, a slight difference is that we compute t using the estimates 1,…,T produced by the online regression oracle (this is sometimes referred to as “online-to-confidence set conversion”) as opposed to using ERM; this difference is unimportant, and the later would work as well.] With this change, the distribution pt can be interpreted as the minimizer for (t,t). To analyze this algorithm, we show that as long as ∈t for all t, the same per-step analysis as in prop:dec_bandit goes through, withreplaced by t. This allows us to prove the following result.For any δ∈(0,1) and γ>0, if we set β=(,T,δ), then with confidence sets ensures that with probability at least 1-δ,≤∑_t=1^T(t) + γ·(,T,δ).This bound is never worse than the one in prop:dec_bandit, but it can be smaller if the confidence sets 1,…,T shrink quickly. For a proof, see ex:dec_structured_conf. In fact, the regret bound in (<ref>) can be shown to hold for any sequence of confidence sets 1,…,T, as long as ∈t∀t with probability at least 1-δ; the specific construction we use within the variant above is chosen only for concreteness. Relation to confidence width and UCB It turns out that the usual UCB algorithm, which selects πt=_π∈Πt(π) for t(π)=max_f∈tft(π), certifies a bound on (t) which is never worse than usual confidence width we use in the UCB analysis.The UCB strategy πt=_π∈Πt(π) certifies that[0](t) ≤t(πt) - (πt).By choosing πt=_π∈Πt(π), we have that for any ,[0](t,)= inf_p∈Δ(Π)sup_f∈t_π∼p*max_f() - f(π)≤sup_f∈t*max_f() - f(πt)≤sup_f∈t*max_t() - f(πt)= sup_f∈t*t(πt) - f(πt) = t(πt) - t(πt).As we saw in the analysis of UCB for multi-armed bandits with Π=1,…,A (sec:ucb_bandits), the confidence width in eq:dec_width0 might be large for a given round t, but by the pigeonhole argument (lem:confidence_width_potential), when we sum over all rounds we have∑_t=1^T[0](t) ≤∑_t=1^Tt(πt) - t(πt) ≤(√(AT)).Hence, even though UCB is not the optimal strategy to minimize the DEC, it can still lead to upper bounds on regret when the confidence width shrinks sufficiently quickly. 
Of course, as examples like the cheating code show, we should not expect this to happen in general.Interestingly, the bound on the in <ref> holdsfor γ=0, which only leads to meaningful bounds on regret because 1,…,T are shrinking. Indeed, prop:igw_exact shows that with =^A, we have() A/γ,so the unrestricted classhas ()→∞ as γ→0. By allowing for γ>0, we can prove the followingslightly stronger result, which replaces t by t. For any γ>0, the UCB strategy πt=_π∈Πt(π) certifies that(t,t) ≤t(πt) - t(πt) + 1/4γ.This is a slight generalization of the proof of <ref>. By choosing πt=_π∈Πt(π), we have (,t)= min_p∈Δ(Π)max_f∈_t _π∼p[max_ f()-f(π) -γ·(t(π) - f(π))^2 ] ≤max_f∈_t[max_f()-f(πt) -γ·(t(πt) - f(πt))^2 ] ≤max_f∈_t[t(πt)-f(πt) -γ·(t(πt) - f(πt))^2 ] = max_f∈_t[t(πt)-f(πt) -γ·(t(πt) - f(πt))^2 ]_≤1/4γ + t(πt)-t(πt).§.§.§ Connection to Posterior SamplingThe eq:dec_structured is a min-max optimization problem, which we have mentioned can be interpreted as a game in which the learner (the “min” player) aims to find a decision distribution p that optimally trades off regret and information acquisition in the face of an adversary (the “max” player) that selects a worst-case model in . We can define a natural dual (or, max-min) analogue of the via(,) = sup_μ∈Δ()inf_p∈Δ()_f∼μ_∼p*f()-f(π) -γ·(f(π)-(π))^2 .The dual has the following Bayesian interpretation. The adversary selects a prior distribution μ over models in , and the learner (with knowledge of the prior) finds a decision distribution p that balances the average tradeoff between regret and information acquisition when the underlying model is drawn from μ.Using the minimax theorem (lem:sion), one can show that the and its Bayesian counterpart coincide.Under mild regularity conditions, we have(,) = (,). Thus, any bound on the dual immediately yields a bound on the primal . This perspective is useful because it allows us to bring existing tools for Bayesian bandits and reinforcement learning to bear on the primal . As an example, we can adapt the posterior sampling/probability matching strategy introduced in sec:mab. When applied to the Bayesian DEC—this approach selects p to be the action distribution induced by sampling f∼μ and selecting . Using lem:mab_decoupling_basic, one can show that this strategy certifies that() /γfor the multi-armed bandit. In fact, existing analysis techniques for the Bayesian setting can be viewed as implicitly providing bounds on the dual <cit.>. Notably, the dual is always bounded by a Bayesian complexity measure known as the information ratio, which is used throughout the literature on Bayesian bandits and reinforcement learning <cit.>.Beyond the primal and dual , there are deeper connections between the and Bayesian algorithms, including a Bayesian counterpart to the algorithm itself <cit.>.§.§ Incorporating Contexts The and algorithm trivially extend to handle contextual structured bandits. This approach generalizes the method introduced in sec:cb from finite action spaces to general action spaces. Consider the following protocol.This is the same as the contextual bandit protocol in sec:cb, except that we allow Π to be large and potentially continuous. As in that section, we allow the contexts x1,…,xT to be generated in an arbitrary, potentially adversarial fashion, but assume thatrt∼(·|xt,πt),and define (x,π)=_r∼(·|x,π). 
We assume access to a function class such that ∈, and assume access to an estimation oracle for the class that ensures that with probability at least 1-δ, ∑_t=1^T _πt∼ pt(t(xt,πt)-(xt,πt))^2 ≤(, T, δ). For f∈, we define (x)=_π∈Πf(x,π). To extend the E2D algorithm to this setting, at each time t we solve the minimax problem corresponding to the DEC, but condition on the context xt. For x∈, define (x,·) = {f(x,·) | f∈} as the projection of the class onto x. The following result shows that whenever the DEC is bounded conditionally—that is, whenever it is bounded for (x,·) for all x—this strategy has low regret. The Contextual E2D algorithm with exploration parameter γ>0 guarantees that ≤ sup_x∈ ((x,·))·T + γ·(,T,δ). We omit the proof of this result, which is nearly identical to that of prop:dec_bandit. The basic idea is that for each round, once we condition on the context xt, the DEC allows us to link regret to estimation error in the same fashion as in the non-contextual setting. We showed in prop:igw_exact that the Inverse Gap Weighting distribution exactly solves the minimax problem when =^A. Hence, the SquareCB algorithm in sec:cb is precisely the special case of Contextual E2D in which =^A. Going beyond the finite-action setting, it is simplest to interpret prop:dec_contextual_structured when (x,·) has the same structure for all contexts. One example is contextual bandits with linearly structured action spaces. Here, we take = {f(x,a)=ϕ(x,a),g(x) | g∈}, where ϕ(x,a)∈^d is a fixed feature map and ⊂(→_2^d(1)) is an arbitrary function class. This setting generalizes the linear contextual bandit problem from sec:cb, which corresponds to the case where the class consists of constant functions. We can apply prop:dec_linear to conclude that sup_x∈ ((x,·)) ≲ d/γ, so that prop:dec_contextual_structured gives regret ≲ √(dT·(,T,δ)).
§.§ Additional Properties of the DEC
The following proposition indicates that the value of the Decision-Estimation Coefficient (,) cannot be increased by taking reference models outside the convex hull of the class: For any γ>0, sup_:Π→(,) = sup_∈()[γ](,).
§.§ Exercises
[Posterior Sampling for Multi-Armed Bandits] Prove that for the standard multi-armed bandit, () ≲ A/γ, by using the Posterior Sampling strategy (select p to be the action distribution induced by sampling f∼μ and selecting the corresponding greedy action), and applying the decoupling lemma (lem:mab_decoupling_basic). Recall that here, () is the “maxmin” version of the DEC (<ref>). Prove prop:dec_structured_conf. In this exercise, we will prove prop:dec_unconstrained as follows. First, show that the left-hand side is an upper bound on the right-hand side. For the other direction: * Prove that inf_∈()_f∼μ_π∼ p (f(π)-(π))^2 ≤ inf_:Π→_f∼μ_π∼ p (f(π)-(π))^2. * Use the Minimax Theorem (lem:sion in sec:minimax_appendix) to conclude prop:dec_unconstrained.
§ REINFORCEMENT LEARNING: BASICS
We now introduce the framework of reinforcement learning, which encompasses a rich set of dynamic, stateful decision making problems. Consider the task of repeated medical treatment assignment, depicted in fig:mab. To make the setting more realistic, it is natural to allow the decision-maker to apply multi-stage strategies rather than simple one-shot decisions such as “prescribe a painkiller.” In principle, in the language of structured bandits, nothing prevents us from having each decision π^t be a complex multi-stage treatment strategy that, at each stage, acts on the patient's dynamic state, which evolves as a function of the treatments at previous stages. As an example, intermediate actions of the type “if patient's blood pressure is above X then do Y” can form a decision tree that defines the complex strategy π^t.
Methods from the previous lectures provide guarantees for such a setting, as long as we have a succinct model of expected rewards. What sets RL apart from structured bandits is the additional information about the intermediate state transitions and intermediate rewards. This information facilitates credit assignment, the mechanism for recognizing which of the actions led to the overall (composite) decision to be good or bad. This extra information can reduce what would otherwise be exponential sample complexity in terms of the number of stages, states, and actions in multi-stage decision making. This section is structured as follows. We first present the formal reinforcement learning framework and present basic principles including Bellman optimality and dynamic programming, which facilitate efficiently computing optimal decisions when the environment is known. We then consider the case in which the environment is unknown, and give algorithms for perhaps the simplest reinforcement learning setting, tabular reinforcement learning, where the state and action spaces are finite. Algorithms for more complex reinforcement learning settings are given in <ref>.§.§ Finite-Horizon Episodic MDP FormulationWe consider an episodic finite-horizon reinforcement learning framework. With H denoting the horizon, a Markov Decision Process (MDP) M takes the formM=*, , _h_h=1^H, _h_h=1^H, d_1,whereis the state space,is the action space,_h:×→Δ()is the probability transition kernel at step h,_h:×→Δ()is the reward distribution, and d_1∈Δ() is the initial state distribution. We allowthe reward distribution and transition kernel to vary across MDPs, but assume for simplicity that the initial state distribution is fixed and known.For a fixed MDP M, an episode proceeds under the following protocol. At the beginning of the episode, the learner selects a randomized, non-stationary policyπ=(π_1,…,π_H),where π_h:→Δ(); we letfor “randomized, non-stationary” denote the set of all such policies. The episode then evolves through the following process, beginning from s_1∼d_1. For h=1,…,H: * a_h∼π_h(s_h).* r_h∼_h(s_h,a_h) and s_h+1∼PM_h(s_h,a_h).For notational convenience, we take s_H+1 to be a deterministic terminal state.The Markov property refers to the fact that under this evolution, ℙ^M(s_h+1=s'| s_h, a_h) = ℙ^M(s_h+1=s'| s_h, a_h, s_h-1, a_h-1,…, s_1, a_1). The value for a policy π under M is given by(π)Mπ*∑_h=1^Hr_h,where Mπ· denotes expectation under the process above. We define an optimal policy for model M as∈_π∈(π).Value functions Maximization in (<ref>) is a daunting task, since each policy π is a complex multi-stage object. It is useful to define intermediate “reward-to-go” functions to start breaking this complex task into smaller sub-tasks. Specifically, for a given model M and policy π, we define the state-action value function and state value function viaQ_h^M,π(s,a)=^M,π*∑_h'=h^Hr_h'|s_h=s, a_h=a,V_h^M,π(s)=^M,π*∑_h'=h^Hr_h'|s_h=s.Hence, the definition in (<ref>) reads(π) = _s∼d_1, a∼π_1(s)*Q_1^M,π(s,a) = _s∼d_1*V_1^M,π(s) Online RL For reinforcement learning, our main focus will be on what is called the online reinforcement learning problem, in which we interact with an unknown MDPfor T episodes. For each episode t=1,…,T, the learner selects a policy πt∈. The policy is executed in the MDP , and the learner observes the resulting trajectoryτt=(s_1t,a_1t,r_1t),…,(s_Ht,a_Ht,r_Ht).The goal is to minimize the total regret∑_t=1^T_t∼pt*(π) - (πt)against the optimal policy π for . 
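A minimal sketch of this episodic protocol for a tabular MDP: rolling out a randomized, nonstationary policy and estimating f^M(π) by Monte Carlo. The array conventions and the Gaussian reward noise are our assumptions.

```python
import numpy as np

def rollout(P, R, d1, policy, H, rng):
    """One episode. P[h][s][a]: next-state distribution; R[h][s][a]: mean reward;
    policy[h][s]: action distribution. Rewards get N(0, 1) noise (assumption)."""
    s = rng.choice(len(d1), p=d1)                             # s_1 ~ d_1
    total = 0.0
    for h in range(H):
        a = rng.choice(len(policy[h][s]), p=policy[h][s])     # a_h ~ pi_h(s_h)
        total += R[h][s][a] + rng.normal()                    # r_h ~ R_h(s_h, a_h)
        s = rng.choice(len(d1), p=P[h][s][a])                 # s_{h+1} ~ P_h(s_h, a_h)
    return total

def mc_value(P, R, d1, policy, H, n=1000, seed=0):
    """Monte Carlo estimate of f^M(pi) = E[sum of rewards over the episode]."""
    rng = np.random.default_rng(seed)
    return float(np.mean([rollout(P, R, d1, policy, H, rng) for _ in range(n)]))
```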
The online RL framework is a strict generalization of (structured) bandits and contextual bandits (with i.i.d. contexts). Indeed, if ={s_0} and H=1, each episode amounts to choosing an action at∈ and observing a reward rt with mean fM(at), which is precisely a bandit problem. On the other hand, taking = and H=1 puts us in the setting of contextual bandits, with d_1 being the distribution of contexts. In both cases, the notion of regret (<ref>) coincides with the notion of regret in the respective setting.

We mention in passing that many alternative formulations for Markov decision processes and for the reinforcement learning problem appear throughout the literature. For example, MDPs can be studied with infinite horizon (with or without discounting), and an alternative to minimizing regret is to consider PAC-RL, which aims to minimize the sub-optimality of a final output policy produced after exploring for T rounds.

§.§ Planning via Dynamic Programming

In some reinforcement learning problems, it is natural to assume that the true MDP is known. This may be the case with games, such as chess or backgammon, where transition probabilities are postulated by the game itself. In other settings, such as robotics or medical treatment, the agent interacts with an unknown environment and needs to learn at least some aspects of this environment. The online reinforcement learning problem described above falls in the latter category. Before attacking the learning problem, we need to understand the structure of solutions to (<ref>) in the case where is known to the decision-maker. In this section, we show that the problem of maximizing (π) over π∈Π in a known model M (known as planning) can be solved efficiently via the principle of dynamic programming. Dynamic programming can be viewed as solving the problem of credit assignment by breaking down a complex multi-stage decision (policy) into a sequence of small decisions.

We start by observing that the optimal policy in (<ref>) may not be uniquely defined. For instance, if d_1 assigns zero probability to some state s_1, the behavior of on this state is immaterial. In what follows, we introduce a fundamental result, prop:bellman, which guarantees existence of an optimal policy =(πM,1,…,πM,H) that maximizes [π]_1(s) over π∈ for all states s∈ simultaneously (rather than just on average, as in (<ref>)). The fact that such a policy exists may seem magical at first, but it is rather straightforward. Indeed, if πM,h(s) is defined for all s∈ and h=2,…,H, then defining the optimal πM,1(s) at any s is a matter of greedily choosing an action that maximizes the sum of the expected immediate reward and the remaining expected reward under the optimal policy. Indeed, this observation is Bellman's principle of optimality, stated more generally as follows <cit.>:

To state the result formally, we introduce the optimal value functions as _h(s,a)=max_π∈^M, π*∑_h'=h^H r_h'| s_h=s, a_h=a, _h(s)= max_a_h(s,a) for all s∈, a∈, and h∈[H]; we adopt the convention that _H+1(s)=_H+1(s,a)=0. Since these optimal values are separate maximizations for each s,a,h, it is reasonable to ask whether there exists a single policy that maximizes all these value functions simultaneously. Indeed, the following lemma shows that there exists such that for all s,a,h, _h(s,a) = []_h(s,a), _h(s)=[]_h(s).
The optimal value function (<ref>) for MDP M can be computed via []_H+1(s) 0, and for each s∈, []_h(s) = max_a∈^M*r_h + []_h+1(s_h+1) | s_h=s, a_h=a. The optimal policy is given by πM,h(s) ∈_a∈^M*r_h + []_h+1(s_h+1) | s_h=s, a_h=a. Equivalently, for all s∈, a∈, []_h(s, a) = ^M*r_h + max_a'∈[]_h+1(s_h+1, a') | s_h=s, a_h=a, and the optimal policy is given by πM,h(s) ∈_a∈[]_h(s, a).

The update in (<ref>) is referred to as value iteration (VI). It is useful to introduce a more succinct notation for this update. For an MDP M, define the Bellman Operators M_1,…,M_H via [_hM Q](s, a) = _s_h+1∼P_hM(s,a), r_h∼ RM_h(s,a)*r_h(s,a) + max_a'∈ Q(s_h+1, a') for any Q:×→. Going forward, we will write the expectation above more succinctly as [_h^M Q](s, a) = ^M*r_h(s_h, a_h) + max_a'∈ Q(s_h+1, a') | s_h=s, a_h=a. In the language of Bellman operators, (<ref>) can be written as []_h = M_h []_h+1.

§.§ Failure of Uniform Exploration

The task of planning using dynamic programming—which requires knowledge of the MDP—is fairly straightforward, at least if we disregard the computational concerns. In this course, however, we are interested in the problem of learning to make decisions in the face of an unknown environment. Minimizing regret in an unknown MDP requires exploration. As the next example shows, exploration in MDPs is a more delicate issue than in bandits. Recall that ε-Greedy, a simple algorithm, is a reasonable solution for bandits and contextual bandits, albeit with a suboptimal rate (T^2/3 as opposed to √(T)). The next (classical) example, a so-called “combination lock,” shows that such a strategy can be disastrous in reinforcement learning, as it leads to exponential (in the horizon H) regret.

Consider the MDP depicted in fig:graphics_combination_lock, with H+2 states, two actions a_g and a_b, and starting state 1. The “good” action a_g deterministically leads to the next state in the chain, while the “bad” action deterministically leads to a terminal state. The only place where a non-zero reward can be received is the last state H, if the good action is chosen. Since the starting state is 1, the only way to receive non-zero reward is to select a_g for all the H time steps within the episode. Since the length of the episode is also H, selecting actions uniformly brings no information about the optimal sequence of actions, unless by chance all of the actions sampled happen to be good; the probability that this occurs is exponentially small in H. This means that T needs to be at least O(2^H) to achieve nontrivial regret, which highlights the need for more strategic exploration.

Given the failure of ε-Greedy for this example, one can ask whether other algorithmic principles also fail. As we will show now, the principle of optimism succeeds, and an analogue of the UCB method yields a regret bound that is polynomial in the parameters , , and H. Before diving into the details, we present a collection of standard tools for analysis in MDPs, which will find use throughout the remainder of the lecture notes.

§.§ Analysis Tools

One of the most basic tools employed in the analysis of reinforcement learning algorithms is the performance difference lemma, which expresses the difference in values for two policies in terms of differences in single-step decisions made by the two policies. The simple proof, stated below, proceeds by successively changing one policy into the other and keeping track of the ensuing differences in expected rewards.
One may also interpret this lemma as a version of the credit assignment mechanism. Henceforth, we adopt the following simplified notation. When a policy π is applied to the random variable s_h, we drop the subscript h and write π(s_h) instead of π_h(s_h), whenever this does not cause confusion.

For any s∈, and π,π'∈, V_1^M,π'(s) - V_1^M,π(s)= ∑_h=1^H ^M,π*Q_h^M,π'(s_h,π'(s_h)) - Q_h^M,π'(s_h,a_h) | s_1=s.

Fix a pair of policies π,π' and define π^h =(π_1,…,π_h-1, π'_h,…, π'_H), with π^1 = π' and π^H+1 = π. By telescoping, we can write V_1^M,π'(s)- V_1^M,π(s)= ∑_h=1^H V_1^M,π^h(s)- V_1^M,π^h+1(s). Observe that for each h, we have V_1^M,π^h(s)-V_1^M,π^h+1(s) =^M,π^h*∑_t=1^Hr_t|s_1=s - ^M,π^h+1*∑_t=1^Hr_t|s_1=s. Here, one process evolves according to (M,π^h) and the other evolves according to (M,π^h+1). The processes only differ in the action taken once the state s_h is reached. In the former, the action π'(s_h) is taken, whereas in the latter it is π(s_h). Hence, (<ref>) is equal to _s_h∼ (M,π)^M,π* Q_h^M,π'(s_h,π'(s_h))- Q_h^M,π'(s_h,π(s_h))|s_1=s, which can be written as _(s_h,a_h)∼ (M,π)^M,π* Q_h^M,π'(s_h,π'(s_h)) -Q_h^M,π'(s_h,a_h) |s_1=s.

In contrast to the performance difference lemma, which relates the values of two policies under the same MDP, the next result relates the performance of the same policy under two different MDPs. Specifically, the difference in initial value for two MDPs is decomposed into a sum of errors between layer-wise value functions. For any pair of MDPs M=(,) and =(,), for any s∈, and policies π∈, _1(s)- (s) =∑_h=1^Hπ*_h(s_h,a_h) - r_h - _h+1(s_h+1) | s_1=s. Hence, for M, with the same initial state distribution, (π)- (π) =∑_h=1^Hπ*_h(s_h,a_h) - r_h - _h+1(s_h+1). In addition, for any MDP M and function Q=(Q_1,…,Q_H,Q_H+1) with Q_H+1≡ 0, letting (s)=_a∈Q_h(s,a), we have max_a∈Q_1(s,a)- []_1(s) =∑_h=1^HM[]Q_h(s_h,a_h) - *_h Q_h+1(s_h,a_h) | s_1=s, and, hence, _s_1∼d_1[]max_a∈Q_1(s_1,a)- () =∑_h=1^HM[]Q_h(s_h,a_h) - *_h Q_h+1(s_h,a_h).

Note that for the second part of lem:bellman_residual, Q=(Q_1,…,Q_H) can be any sequence of functions, and need not be a value function corresponding to a particular policy or MDP. It is worth noting that Q gives rise to the greedy policy , which, in turn, gives rise to [] (the value of in model M), but it may well be the case that []≠ Q.

We will prove eq:bellman_residual1, and omit the proof of <ref>, which is similar but more verbose. We have ∑_h=1^Hπ*_h(s_h,a_h) - r_h - _h+1(s_h+1) = ∑_h=1^Hπ*_h(s_h,a_h)- _h+1(s_h+1) - π*∑_h=1^Hr_h = ∑_h=1^Hπ*_h(s_h,a_h)- _h+1(s_h+1) - (π). On the other hand, since _h(s)=_a∼π_h(s)_h(s,a), a telescoping argument yields ∑_h=1^Hπ*_h(s_h,a_h)- _h+1(s_h+1) =∑_h=1^Hπ*_h(s_h)- _h+1(s_h+1)=π*_1(s_1)- π*_H+1(s_H+1)=(π), where we have used that _H+1=0, and that both MDPs have the same initial state distribution.

We prove eq:bellman_residual2 (omitting the proof of <ref>) using a similar argument. We have ∑_h=1^HM[]Q_h(s_h,a_h) - r_h - max_a∈Q_h+1(s_h+1,a)= ∑_h=1^HM[]Q_h(s_h,a_h)- max_a∈Q_h+1(s_h+1,a) - M*∑_h=1^Hr_h= ∑_h=1^HM[]Q_h(s_h,a_h)- max_a∈Q_h+1(s_h+1,a) - (). Since a_h+1=(s_h+1)=_a∈Q_h+1(s_h+1,a), we have M[]Q_h(s_h,a_h)- max_a∈Q_h+1(s_h+1,a)=M[]Q_h(s_h,a_h)- Q_h+1(s_h+1,a_h+1), and the result follows by telescoping.

Another similar analysis tool for MDPs, the simulation lemma, is deferred to <ref> (<ref>). This result can be proven as a consequence of <ref>. The backward recursions used to compute the value functions appearing in these lemmas are sketched below.
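Both the planning recursion from the previous subsection and the policy values Q^{M,π} appearing in these lemmas are computed by backward induction. The following is a minimal sketch under our own array conventions (P[h] of shape (S, A, S), r[h] of shape (S, A)); it is an illustration, not the notes' own code.

```python
import numpy as np

def optimal_q(P, r):
    """Value iteration: Q*_h = r_h + P_h max_{a'} Q*_{h+1}, backwards in h."""
    H = len(P); S, A = r[0].shape
    Q = [np.zeros((S, A)) for _ in range(H + 1)]       # Q[H] stands in for Q_{H+1} = 0
    for h in range(H - 1, -1, -1):
        Q[h] = r[h] + P[h] @ Q[h + 1].max(axis=1)      # Bellman backup [T^M_h Q_{h+1}]
    return Q

def policy_q(P, r, pi):
    """Policy evaluation: Q^{M,pi}_h for a deterministic policy pi[h][s]."""
    H = len(P); S, A = r[0].shape
    Q = [np.zeros((S, A)) for _ in range(H + 1)]
    for h in range(H - 1, -1, -1):
        if h + 1 < H:
            V_next = Q[h + 1][np.arange(S), pi[h + 1]]  # V_{h+1}(s) = Q_{h+1}(s, pi(s))
        else:
            V_next = np.zeros(S)                        # terminal layer
        Q[h] = r[h] + P[h] @ V_next
    return Q
```

On small random instances, one can use these routines to check the performance difference lemma numerically, comparing the difference in initial values of two policies against the sum of averaged advantages on the right-hand side.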
§.§ Optimism To develop algorithms for regret minimization in unknown MDPs, we turn to the principle of optimism, which we have seen is successful in tackling multi-armed bandits and linear bandits (in small dimension). Recall that for bandits, <ref> gave a way to decompose the regret of optimistic algorithms into width of confidence intervals. What is the analogue of lem:regret_optimistic for MDPs? Thinking of optimistic estimates at the level of expected rewards for policies π is unwieldy, and we need to dig into the structure of these multi-stage decisions. In particular, the approach we employ is to construct a sequence of optimistic value functions _1,…,_H which are guaranteed to over-estimate the optimal value function . For multi-armed bandits, implementing optimism amounted to adding “bonuses,” constructed from past data, to estimates for the reward function. We will construct optimistic value functions in a similar fashion. Before giving the construction, we introduce a technical lemma, which quantifies the error in using such optimistic estimates in terms of Bellman residuals; Bellman residuals measure self-consistency of the optimistic estimates under the application of the Bellman operator. Let {_1,…,_H} be a sequence of functions _h:×→ with the property thatfor all (s,a), _h(s,a)≤_h(s,a) and set _H+1≡ 0. Let =(_1,…,_H) be such that _h(s) = _a_h(s,a). Then for all s∈, _1(s) - []_1(s) ≤∑_h=1^H ^M, *(_h- _h^M_h+1)(s_h, (s_h)) | s_1=s. <ref> tells us that closeness of _h to the Bellman backup _h^M_h+1 implies closeness oftoin terms of the value. As a sanity check, if _h=_h, the right-hand side of (<ref>) is zero, since _h=_h^M_h+1. Crucially, errors do not accumulate too fast as a function of the horizon. This fact should not be taken for granted: in general, ifis not optimistic, it could have been the case that small changes in _h exponentially degrade the quality of the policy .Another important aspect of the decomposition (<ref>) is the on-policy nature of the terms in the sum. Observe that the law of s_h for each of the terms is given by executingin model M. The distribution of s_h is often referred to as the roll-in distribution; when this distribution is induced by the policy executed by the algorithm, we may have a better control of the error than in the off-policy case when the roll-in distribution is given byor another unknown policy. Let _h(s) max_a∈_h(s,a). Just as in the proof of lem:regret_optimistic, the assumption that _h is “optimistic” implies that _h(s_h,π_M(s_h)) ≤_h(s_h,π_M(s_h)) ≤_h(s_h,(s_h)) and, hence, _1(s)≤_1(s). Then, (<ref>) applied to Q= and = states that_1(s)- []_1(s) =∑_h=1^HM[]_h(s_h,a_h) - *_h _h+1(s_h,a_h) | s_1=s. In fact, the proof of <ref> only uses that the initial value _1 is optimistic. However, to construct a value function with this property, the algorithms we consider will proceed by backwards induction, producingoptimistic estimates _1,…,_H in the process. §.§ The Algorithm for Tabular MDPsWe now instantiate the principle of optimism to give regret bounds for online reinforcement learning in tabular MDPs. Tabular RL may be thought of as an analogue of finite-armed bandits: we assume no structure across states and actions, but require that the state and action spaces are small. The regret bounds we present will depend polynomially on S=|| and A=||, as well as the horizon H. PreliminariesFor simplicity, we assume that the reward function is known to the learner, so that only the transition probabilities are unknown. 
This does not change the difficulty of the problem in a meaningful way, but allows us to keep notation light. Rewards are deterministic, bounded, and known to the learner: R^M_h (s,a) = δ_r_h(s,a) for known r_h:×→ [0,1], for all M. In addition, assume for simplicity that _1(s)∈[0,1] for any s∈. Define, with a slight abuse of notation, nt_h(s,a) = ∑_i=1^t-1(s_h^i, a_h^i)=(s,a), nt_h(s,a,s') = ∑_i=1^t-1(s_h^i, a_h^i, s_h+1^i)=(s,a,s'), as the empirical state-action and state-action-next-state frequencies. We can estimate the transition probabilities via Pt_h (s'| s,a) = nt_h(s,a,s')/nt_h(s,a).

The algorithm. The following algorithm, UCB-VI (“Upper Confidence Bound Value Iteration”) <cit.>, combines the notion of optimism with dynamic programming. The algorithm will be analyzed using lem:reg_decomp_optimistic. In constructing the functions _h, we need to satisfy two goals: (1) ensure that with high probability (<ref>) is satisfied, i.e., the _h are optimistic; and (2) ensure that the _h are “self-consistent,” in the sense that the Bellman residuals in (<ref>) are small. The second requirement already suggests that we should define _h approximately as a Bellman backup _h^M_h+1, going backwards for h=H+1,…,1 as in dynamic programming, while ensuring the first requirement. In addition to these considerations, we will have to use a surrogate for the Bellman operator _h^M, since the model M is not known. This is achieved by estimating M using empirical transition frequencies. Putting these ideas together gives the update in <ref>. We apply the principle of value iteration, except that: * For each episode t, we augment the rewards r_h(s,a) with a “bonus” bt_h,δ(s,a) designed to ensure optimism. * The Bellman operator is approximated using the estimated transition probabilities in <ref>. The bonus functions play precisely the same role as the width of the confidence interval in (<ref>): these bonuses ensure that (<ref>) holds with high probability, as we will show below in lem:ucb_vi_optimism.

The following theorem shows that with an appropriate choice of bonus, this algorithm achieves a polynomial regret bound. For any δ>0, UCB-VI with bt_h,δ(s,a) = 2√(log(2SAHT/δ)/n_ht(s,a)) guarantees that with probability at least 1-δ, ≲ HS√(AT)·√(log(SAHT/δ)).

We mention that a slight variation on lem:ucbvi_conc below (using the Freedman inequality instead of the Azuma-Hoeffding inequality) yields an improved rate of O(H√(SAT)+(H,S,A)log T), and the optimal rate can be shown to be Θ(√(HSAT)); this is achieved through a more careful choice for the bonus bt_h,δ and a more refined analysis. We remark that care should be taken in comparing results in the literature, as scaling conventions for the individual and cumulative rewards (as in assm:ucbvi) can vary.

§.§.§ Analysis for a Single Episode

Our aim is to bound the regret = ∑_t=1^T() - (πt) for UCB-VI. To do so, we first prove several helper lemmas concerning the performance within each episode t. In what follows, we fix t and drop the superscript t. Given the estimated transitions []P_h(·|s,a)_h,s,a, define the estimated MDP =*, , _h_h=1^H, _h_h=1^H, d_1.
The associated Bellman operator is_h^ Q(s,a) =r_h(s,a) + _s'∼P_h(·| s,a)max_a Q(s',a)for Q:×→.Consider the sequence of functions _h:×→[0,1], _h:→[0,1], for h=1,…,H+1, with _H+1≡ 0 and _h(s,a)={_h^_h+1(s,a) + b_h,δ(s,a) }∧ 1, _h(s) = max_a _h(s,a)for bonus functions b_h,δ:×→ to be chosen later.Henceforth, we follow the usual notation that for functions f,g over the same domain, f≤ g indicates pointwise inequality over the domain.The first lemma we present shows that as long as the bonuses b_h,δ are large enough to bound the error between the estimated transition probabilities and true transition probabilities, the functions _1,…,_H constructed above will be optimistic. Suppose we have estimates []P_h(·|s,a)_h,s,a and a function b_h,δ:×→ with the property that for all s∈,a∈, *∑_s'P_h(s'|s,a) _h(s')-∑_s' PM_h(s'|s,a) _h(s')≤ b_h,δ(s,a).Then for all h∈H, we have _h≥_h, _h ≥_h for _h, _h defined in (<ref>). The proof proceeds by backward induction on the statement _h ≥_hwith h=H+1 down to h=1. We start with the base case h=H+1, which is trivial because _H+1 = _H+1≡ 0. Now, assume _h+1≥_h+1, and let us prove the induction step. Fix (s,a)∈×. If _h(s,a)=1, then, trivially, _h(s,a)≥_h(s,a). Otherwise, _h(s,a) = _h^_h+1(s,a) + b_h,δ(s,a), and thus _h(s,a) - _h(s,a)= b_h,δ(s,a) + _s'∼P_h(·| s,a)_h+1(s')- _s'∼ P^M_h(·| s,a)_h+1(s') ≥ b_h,δ(s,a) + _s'∼P_h(·| s,a)_h+1(s')- _s'∼ P^M_h(·| s,a)_h+1(s') ≥ 0. This, in turn, implies that _h(s) = max_a_h(s,a) ≥max_a_h(s,a) = _h(s), concluding the induction step. We now analyze the effect of using an estimated modelfor the Bellman operator rather than the true unknown _h^M. Suppose we have estimates []P_h(·|s,a)_h,s,a and b_h,δ'(s,a):×→ with the property that max_V∈{0,1}^S*∑_s'P_h(s'|s,a) V(s')-∑_s' PM_h(s'|s,a) V(s')≤ b_h,δ'(s,a)Then the Bellman residual satisfies _h-_h^M_h+1≤ (b_h,δ + b_h,δ')∧1. for _h, _h defined in (<ref>). That _h-_h^M_h+1≤1 is immediate. To prove the main result, observe that_h-_h^M_h+1={_h^_h+1 + b_h,δ}∧ 1 - _h^M_h+1≤ (_h^-_h^M) _h+1 + b_h,δFor any Q∈×→ [0,1], (_h^-_h^M) Q (s,a)= _s'∼P_h(·| s,a)max_a Q(s',a) - _s'∼ P^M_h(·| s,a)max_a Q(s',a) ≤max_V∈ [0,1]^S_s'∼P_h(·| s,a) V(s') - _s'∼ P^M_h(·| s,a) V(s').Since the maximum is achieved at a vertex of [0,1]^S, the statement follows.§.§.§ Regret AnalysisWe now bring back the time index t and show that the estimated transition probabilities in satisfy conditions of lem:ucb_vi_optimism and lem:ucb_vi_bellman_error, ensuring that the functions t_1,…,t_H are optimistic. Let []Pt_h_h∈[H],t∈[T] be defined as in (<ref>). Then with probability at least 1-δ, the functions bt_h,δ(s,a) = 2√(log(2SAHT/δ)/n_ht(s,a)), b't_h,δ(s,a) = 8√(Slog(2SAHT/δ)/n_ht(s,a)) satisfy the assumptions of lem:ucb_vi_optimism and lem:ucb_vi_bellman_error, respectively, for all s∈, a∈, h∈[H], and t∈[T] simultaneously. We leave the proof as an exercise.Putting everything together, we can now prove thm:ucb-vi. Under the event in lem:ucbvi_conc, the functions t_1,…,t_H are optimistic, which means that the conditions of lem:reg_decomp_optimistic hold, and the instantaneous regret on round t (conditionally on s_1∼ d_1) is at most∑_h=1^H ^M, t*(t_h- _h^Mt_h+1)(st_h, t_h(st_h)) | s_1=s≤∑_h=1^H ^M, t*(b_h, δ(st_h, t_h(st_h))+ b_h,δ'(st_h, t_h(st_h)))∧1,where the second inequality invokes <ref>. 
Summing over t=1,…,T, and applying the Azuma-Hoeffding inequality, we have that with probability at least 1-δ, the regret of UCB-VI is bounded by ∑_t=1^T∑_h=1^H ^M, t*(b_h, δ(st_h, t_h(st_h))+ b_h,δ'(st_h, t_h(st_h)))∧1∑_t=1^T∑_h=1^H(b_h, δ(st_h, t_h(st_h))+ b_h,δ'(st_h, t_h(st_h)))∧1 + √(HTlog(1/δ)). Using the bonus definition in <ref>, the bonus term above is bounded by ∑_t=1^T∑_h=1^H√(Slog(2SAHT/δ)/n_ht(st_h,t_h(st_h)))∧1 ≤√(Slog(2SAHT/δ))∑_t=1^T∑_h=1^H1/√(n_ht(st_h,t_h(st_h)))∧1. The double summation can be handled in the same fashion as lem:confidence_width_potential: ∑_t=1^T∑_h=1^H1/√(n_ht(st_h,t_h(st_h)))∧1= ∑_h=1^H∑_(s,a)∑_t=1^T (st_h, t_h(st_h))=(s,a)/√(n_ht(s,a))∧1∑_h=1^H∑_(s,a)√(n_hT(s,a))≤ H√(SAT).

§ GENERAL DECISION MAKING

So far, we have covered three general frameworks for interactive decision making: the contextual bandit problem, the structured bandit problem, and the episodic reinforcement learning problem; all of these frameworks generalize the classical multi-armed bandit problem in different directions. In the context of structured bandits, we introduced a complexity measure called the Decision-Estimation Coefficient (DEC), which gave a generic approach to algorithm design, and allowed us to reduce the problem of interactive decision making to that of supervised online estimation. In this section, we will build on this development on two fronts: First, we will introduce a unified framework for decision making, which subsumes all of the frameworks we have covered so far. Then, we will show that i) the DEC and its associated meta-algorithm extend to the general decision making framework, and ii) boundedness of the DEC is not just sufficient, but actually necessary for low regret, and thus constitutes a fundamental limit. As an application of the general tools we introduce, we will show how to use them to solve the problem of tabular reinforcement learning (<ref>), offering an alternative to the method we introduced in <ref>.

§.§ Setting

For the remainder of the course, we will focus on a framework called Decision Making with Structured Observations (), which subsumes all of the decision making frameworks we have encountered so far. The protocol proceeds in T rounds, where for each round t=1,…,T: * The decision-maker selects a decision t∈, where is the decision space. * Nature selects a reward rt∈ and observation t∈ based on the decision, where ⊆ is the reward space and is the observation space. The reward and observation are then observed by the learner.

We focus on a stochastic variant of the framework. [Stochastic Rewards and Observations] Rewards and observations are generated independently via (rt,ot) ∼(·|πt), where :Π→Δ(×) is the underlying model.

To facilitate the use of learning and function approximation, we assume the decision-maker has access to a model class that contains the model . Depending on the problem domain, the class might consist of linear models, neural networks, random forests, or other complex function approximators; this generalizes the role of the reward function class used in contextual/structured bandits. We make the following standard realizability assumption, which asserts that the class is flexible enough to express the true model. [Realizability] The model class contains the true model .

For a model M∈, let Mπ*· denote the expectation under (r,)∼M(π). Further, following the notation in sec:mdp, let (π)Mπ*r denote the mean reward function, and let _∈() denote the optimal decision with maximal expected reward. Finally, define *|M∈ as the induced class of mean reward functions.
We evaluate the 's performance in terms of regret to the optimal decision for :∑_t=1^T_t∼pt*() - (t),where pt∈Δ() is the learner's distribution over decisions at round t. Going forward, we abbreviate = and =,.The framework is general enough to capture most online decision making problems. Let us first see how it subsumes the structured bandit and contextual bandit problems.[Structured bandits]When there are no observations (i.e., =*∅), the framework is equivalent tostructured bandits studied earlier in sec:structured.Therein, we defined a structured bandit instance by specifying a classof mean reward functions and a general class of reward distributions, such as sub-Gaussian or bounded. In the framework, we may equivalently start with a set of modelsand letbe the induced class (<ref>). By changing the class , this encompasses all of the concrete examples of structured bandit problems we studied in <ref>, including linear bandits, nonparametric bandits, and concave/convex bandits.[Contextual bandits] The framework readily captures contextual bandits (sec:cb) with stochastic contexts (see asm:stochastic_rewards_CB). To make this precise, we will slightly abuse the notation and think of πt as functions mapping the context xt to an action in Π=[A]. To this end, on round t, the decision-maker selects a mapping πt:→[A] from contexts to actions, and the context ot=xt is observed at the end of the round. This is equivalent to first observing xt and selecting πt(xt)∈[A]. Formally, let = be the space of contexts, Π=[A] be the set of actions, and Π:→[A] be the space of decisions. The distribution (r,x)∼ M(π) then has the following structure: x∼M and r∼ RM(·|x, π(x)) for some context distribution M and reward distribution RM. In other words, the distribution M for the context x (treated as an observation) is part of the model M. We mention in passing that the framework also naturally extends to the case when contexts are adversarial rather than i.i.d., as in sec:structured_contexts; see <cit.>.[Online reinforcement learning]The online reinforcement learning framework we introduced in <ref> immediately falls into the framework by taking =, rt=∑_h=1^Hr_ht, and t=τt.While we have only covered tabular reinforcement learning so far, the literature on online reinforcement learning contains algorithms and sample complexity bounds for a rich and extensive collection of different MDP structures (e.g., <cit.>). All of these settings correspond to specific choices for the model classin the framework, and we will cover this topic in detail in sec:rl. We adopt the framework because it gives simple, yet unified approach to describing and understanding what is—at first glance—a very general and seemingly complicated problem. Other examples that are covered by the framework include: * Partially Observed Markov Decision Processes (POMDPs)* Bandits with graph-structured feedback* Partial monitoring^⋆§.§ Refresher: Information-Theoretic DivergencesTo develop algorithms and complexity measures for , we need a way to measure the distance between distributions over abstract observations (this was not a concern for the structured and contextual bandit settings, where we only needed to consider the mean reward function). To do this, we will introduce the notion of the Csiszar f-divergence, which generalizes a number of familiar divergences including the Kullback-Leibler (KL) divergence, total variation distance, and Hellinger distance. Letandbe probability distributions over a measurable space (Ω,). 
We say that is absolutely continuous with respect to if for all events A∈, (A)=0(A)=0; we denote this by ≪. For a convex function f:(0,∞)→ with f(1)=0, the associated f-divergence for and is given by _*f*d/d whenever ≪. More generally, defining p=d/dν and q=d/dν for a common dominating measure ν, we have ∫_q>0qf*p/qdν + (q=0)·f'(∞), where f'(∞)lim_x→0^+xf(1/x).

We will make use of the following f-divergences, all of which have unique properties that make them useful in different contexts. * Choosing f(t)=1/2t-1 gives the total variation (TV) distance = 1/2∫*d/dν-d/dνdν, which can also be written as =sup_A∈(A)-(A). * Choosing f(t) = (1-√(t))^2 gives squared Hellinger distance =∫*√(d/dν)-√(d/dν)^2dν. * Choosing f(t)=tlog(t) gives the Kullback-Leibler divergence: ={[ ∫log[]d/dd,≪,; +∞,otherwise. ].

Note that for TV distance and Hellinger distance, we use the notation D(·,·) rather than D(··) to emphasize that the divergence is symmetric. Other standard examples include the χ^2-divergence.

For all distributions and , ≤≤. It is known that = 1 if and only if =2, and = 0 if and only if =0 (more generally, ≤2). Moreover, the two induce the same topology, i.e., a sequence converges in one distance if and only if it converges in the other. KL divergence cannot be bounded by TV distance or Hellinger distance in general, but the following lemma shows that it is possible to relate these quantities if the density ratios under consideration are bounded. Let and be probability distributions over a measurable space (Ω,). If sup_F∈(F)/(F)≤V, then ≤(2+log(V)).

Other properties we will use include: * Boundedness of TV (by 1) and Hellinger (by 2). * The triangle inequality for TV and Hellinger distance. * The data-processing inequality, which is satisfied by all f-divergences. * Chain rule and subadditivity properties for KL and Hellinger divergence (see lem:kl_chain_rule). * A variational representation for TV distance: =sup_g:Ω→0,1*_g - _g. See <cit.> for further background.

§.§ The for General Decision Making

Developing algorithms for the general decision making framework poses a number of additional challenges compared to the basic bandit frameworks we have studied so far. The problem of understanding how to optimally explore and make decisions for a given model class is deeply connected to the problem of understanding the optimal statistical complexity (i.e., minimax regret) for . Any notion of problem complexity needs to capture both i) simple problems like the multi-armed bandit, where the mean rewards serve as a sufficient statistic, and ii) problems with rich, structured feedback (e.g., reinforcement learning), where observations, or even structure in the noise itself, can provide non-trivial information about the underlying problem instance. In spite of these apparent difficulties, we will show that by incorporating an appropriate information-theoretic divergence, we can use the to address these challenges, in a similar fashion to <ref>.

For a model class , reference model ∈, and scale parameter γ>0, the for <cit.> is defined via (,) = inf_p∈Δ()sup_M∈_∼p[()-(π)_regret of decision -γ·M()()_information gain for obs.]. We further define ()=sup_∈()(,). The in eq:dec should look familiar to the definition we used for structured bandits in sec:structured (Eq. (<ref>)). The main difference is that instead of being defined over a class of reward functions, the general is defined over the class of models , and the notion of estimation error/information gain has changed to account for this.
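Since the information gain term in the definition above is measured by an f-divergence (squared Hellinger distance), it may help to compute these quantities concretely. The sketch below, under our own naming, evaluates the three divergences from the refresher for discrete distributions and checks one standard form of the ordering lemma, TV² ≤ Hellinger² ≤ KL.

```python
import numpy as np

def tv(p, q):
    return 0.5 * np.abs(p - q).sum()

def hellinger_sq(p, q):
    return ((np.sqrt(p) - np.sqrt(q)) ** 2).sum()

def kl(p, q):
    support = p > 0
    if np.any(support & (q == 0)):
        return np.inf                     # P not absolutely continuous w.r.t. Q
    return (p[support] * np.log(p[support] / q[support])).sum()

rng = np.random.default_rng(0)
for _ in range(1000):
    p, q = rng.dirichlet(np.ones(8)), rng.dirichlet(np.ones(8))
    assert tv(p, q) ** 2 <= hellinger_sq(p, q) <= kl(p, q)
```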
In particular, rather than measuring information gain via the distance between mean reward functions, we now consider the information gain_π∼p*M(π)(π),which measures the distance between the distributions over rewards and observations under the models M and(for the learner's decision π). This is a stronger notion of distance since i) it incorporates observations (e.g., trajectories for reinforcement learning), and ii) even for bandit problems, we consider distance between distributions as opposed to distance between means; the latter feature means that this notion of information gain can capture fine-grained properties of the models under consideration, such as noise in the reward distribution. §.§.§ Basic ExamplesTo build intuition as to how the general adapts to the structure of the model class , let us review a few examples—some familiar, and some new.[Multi-armed bandit with Gaussian rewards]Let Π=[A], =, ={∅}. We define= *M: M(π) = (f(π), 1), f:Π→[0,1].We claim that () ∝A/γ.To prove this, consider the case where ∈ for simplicity. Recall that we have previously shown that this behavior holds for the squared error version of the defined in (<ref>). Thus, it is sufficient to argue that the squared Hellinger divergence for Gaussian distributions reduces to square difference between the means:M()()∝ (fM(π) - f(π))^2.The claim will then follow from prop:igw_exact. To prove this, first note thatM()()≤M()() = 1/2(fM(π) - f(π))^2.In the other direction, one can directly computeM()() = 1 - exp*-1/8(fM(π) - f(π))^2and using that 1-exp{-x}≥ (1-e^-1)x for x∈[0,1], we establishM()()≥ c·(fM(π) - f(π))^2for c=1-1/e/8. In fact, one can show that the general DEC in <ref> coincides with the basic squared error version fromsec:structured for general structured bandit problems, not just multi-armed bandits; see <ref>.Let us next consider a twist on the bandit problem that is more information-theoretic in nature, and highlights the need to work with information-theoretic divergences if we want to handle general decision making problems. [Bandits with structured noise] Let Π=[A], =, ={∅}. We define= *M_1,…,M_A∪*where M_i(π)(1/2, 1) for π≠ i and M_i(π)(3/4) for π=i; we further define (π)(1/2, 1) for all π∈Π. Before proceeding with the calculations, observe that we can solve the general decision making problem when the underlying model is ∈ with a simple algorithm. It is sufficient to select every action in [A] only once: all suboptimal actions have Bernoulli rewards and give r∈0,1 almost surely, while the optimal action has Gaussian rewards, and gives r∉0,1 almost surely. Thus, if we select an action and observe a reward r∉0,1, we know that we have identified the optimal action.The valuable information contained in the reward distribution is reflected in the Hellinger divergence, which attains its maximum value when comparing a continuous distribution to a discrete one:M_i()() = 2=i.To use this property to derive the upper bound on (,), first note that the maximum over M in the definition of (,) is not attained at M=, since in that case both the divergence and regret terms are zero, irrespective of p. Now, take p=[A]. Then for any M∈{M_1,…,M_A}, _π∼p*()-(π) = (1-1/A)(3/4-1/2),and (,)≲ (1-1/A)(3/4-1/2) - γ2/A≲γ≤A/4. This leads to an upper bound(,) ≲γ≤A/4which can also be shown to be tight. [Bandits with Full Information]Consider a “full-information” learning setting. 
We have =A and =0,1, and for a given decisionwe observe a reward r as in the standard multi-armed bandit, but also receive an observation o = (r(π'))_π'∈A consisting of (counterfactual) rewards for every action.For a given model M, let (π) denote the distribution over the reward r for π, and let (π) denote the distribution of o. Then for any decision π, since all rewards are observed, the data processing inequality implies that for all M,∈ and π'∈Π,M(π)(π) ≥(π)(π)= (π')(π')≥(π')(π'). Using this property, we will show that for any ∈, (,)≤1/γ.Comparing to the finite-armed bandit, we see that the for this example is independent of A, which reflects the extra information contained in the observation o.To prove <ref>, for a given model ∈ we choose p=𝕀_ (i.e. the decision maker selectsdeterministically), and bound _∼p*()-() by()-()≤()-() + ()-()≤ 2·max_∈,()-()≤2·max_∈,()(). We then use the AM-GM inequality, which implies that for any γ>0,max_∈,()() γ·max_∈,()() + 1/γ≤γ·M()() + 1/γ,where the final inequality uses eq:hellinger_full_info. This certifies that for all M∈, the choice for p above satisfies_∼p*()-(π)-γ·M()()1/γ,so we have (,)1/γ.In what follows, we will show that the different behavior for the for these examples reflects the fact that the optimal regret is fundamentally different. §.§ Algorithm for General Decision Making (), the meta-algorithm based on the that we gave for structured bandits in sec:structured, readily extends to <cit.>. The general version of the meta-algorithm is given above. Compared to structured bandits, the main difference is that rather than trying to estimate the reward function , we now estimate the underlying model . To do so, we appeal once again to the notion of an online estimation oracle, but this time for model estimation.At each timestep t, the algorithm calls invokes an online estimation oracle to obtain an estimate t forusing the data t-1=(π1,r1,o1),…,(πt-1,rt-1,ot-1) observed so far. Using this estimate, proceeds by computing the distribution pt that achieves the value (,t) for the . That is, we setpt=_p∈Δ()sup_M∈_∼p[()-(π) -γ·M()t()].then samples the decision t from this distribution and moves on to the next round. Like structured bandits, one can show that by running in the setting, the regret for decision making is bounded in terms of the DEC and a notion of estimation error for the estimation oracle. The main difference is that for , the notion of estimation error we need to control is the sum of Hellinger distances between the estimates from the supervised estimation oracle , which we define via∑_t=1^T_t∼pt*(t)t(t).With this definition, we can show that enjoys the following bound on regret, analogous to prop:dec_bandit. with exploration parameter γ>0 guarantees that≤sup_∈(,)·T + γ·,almost surely, whereis any set such that t∈ for all t∈[T].Note that we can optimize over the parameter γ in the result above, which yields≤inf_γ>0[]sup_∈(,)·T + γ·≤2·inf_γ>0max[]sup_∈(,)·T, γ·.We will show in the sequel that for any finite class , the averaged exponential weights algorithm with the logarithmic loss achieves log(/δ) with probability at least 1-δ. For this algorithm, and most others we will consider, one can take =(). In fact, one can show (via an analogue ofprop:dec_unconstrained) that for any , even if ∉(), we have (,)≤sup_∈()[cγ](,)≤[cγ]() for any absolute constant c>0. This means we can restrict our attention to the convex hull without loss of generality. 
Putting these facts together, we see that for any finite class, it is possible to achieve ≤()·T + γ·log(/δ) with probability at least 1-δ.

We write =∑_t=1^T_t∼pt*()-(t)=∑_t=1^T_t∼pt*()-(t)- γ·_t∼pt*(t)t(t)+ γ·. For each t, since ∈, we have _t∼pt*()-(t)- γ·_t∼pt*(t)t(t)≤sup_M∈_t∼pt*()-(t)- γ·_t∼pt*M(t)t(t)= inf_p∈Δ()sup_M∈_∼p*()-()- γ·M()t()= (,t). Summing over all rounds t, we conclude that ≤sup_∈(,)·T + γ·.

Examples for the upper bound. We now revisit the examples from <ref> and use and <ref> to derive regret bounds for them. ex:bandit_gaussian For the Gaussian bandit problem from ex:bandit_gaussian, plugging the bound ()A/γ into prop:upper_main yields AT/γ + γ·. Choosing γ=√(AT/) balances the terms above and gives √(AT·). ex:bandit_structured_noise For the bandit-type problem with structured noise from ex:bandit_structured_noise, the bound ()γ≤A/4 yields γ≤A/4·T + γ·. We can choose γ=A, which gives A·.

§.§.§ Online Estimation with Hellinger Distance

Let us now give some more detail as to how to perform the online model estimation required by <ref>. Model estimation is a more challenging problem than regression, since we are estimating the underlying conditional distribution rather than just the conditional mean. In spite of this difficulty, estimating the model with respect to Hellinger distance is a classical problem that we can solve using the online learning tools introduced in sec:ol; in particular, online conditional density estimation with the log loss. This generalizes the method of online regression employed in sec:cb,sec:structured.

Instead of directly performing estimation with respect to Hellinger distance, the simplest way to develop conditional density estimation algorithms is to work with the logarithmic loss. Given a tuple (t, rt, t), define the logarithmic loss for a model M as t(M) = log*1/(rt, t|t), where we define (·,·|) as the conditional density for (r,) under M. We define regret under the logarithmic loss as: = ∑_t=1^Tt(t) - inf_M∈∑_t=1^Tt(M).

The following result shows that a bound on the log-loss regret immediately yields a bound on the Hellinger estimation error. For any online estimation algorithm, whenever ass:realizability holds, we have *≥*∑_t=1^T(πt)t(πt), so that *≤*. Furthermore, for any δ∈(0,1), with probability at least 1-δ, ≤ + 2log(δ^-1).

This result is desirable because regret minimization with the logarithmic loss is a well-studied problem in online learning. Efficient algorithms are known for model classes of interest <cit.>, and this is complemented by theory which provides minimax rates for generic model classes <cit.>. One example we have already seen (sec:intro) is the averaged exponential weights method, which guarantees ≤log for finite classes ; a sketch of this method appears at the end of this subsection. Another example is linear models, where (r,o|)=*ϕ(r,o,),θ for a fixed feature map ϕ∈^d; for such classes, algorithms with =(dlog(T)) are known <cit.>. All of these algorithms satisfy =(). We refer the reader to Chapter 9 of <cit.> for further examples and discussion.

While (<ref>) is straightforward, (<ref>) is rather remarkable, as the remainder term does not scale with T. Indeed, a naive attempt at applying concentration inequalities to control the deviations of the random quantities and would require boundedness of the loss function, which is problematic because the logarithmic loss can be unbounded. The proof exploits unique properties of the moment generating function for the log loss.
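A minimal sketch of the averaged exponential weights update for the log loss over a finite model class follows; the learning-rate-one convention and all names are our own illustrative assumptions.

```python
import numpy as np

def exp_weights_posterior(cum_log_losses):
    """Weights mu_t(M) proportional to exp(-sum_{i<t} loss_i(M))."""
    w = np.exp(-(cum_log_losses - cum_log_losses.min()))  # shift for numerical stability
    return w / w.sum()

# cum_log_losses[M] = sum_{i<t} -log p_M(r_i, o_i | pi_i) for each model M; the
# estimate Mhat_t(pi) = sum_M mu_t(M) * M(pi) is then the corresponding mixture.
mu = exp_weights_posterior(np.array([4.1, 2.3, 7.0]))
```

Note that the resulting estimate is a mixture of models, and hence lies in the convex hull of the class, consistent with the earlier remark that one can take the estimates to lie in the convex hull.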
§.§ : Lower Bound on Regret

Up to this point, we have been focused on developing algorithms that lead to upper bounds on regret for specific model classes. We now turn our focus to lower bounds, and the question of optimality: That is, for a given class of models , what is the best regret that can be achieved by any algorithm? We will show that in addition to upper bounds, the actually leads to lower bounds on the optimal regret.

Background: minimax regret. What does it mean to say that an algorithm is optimal for a model class ? There are many notions of optimality, but in this course we will focus on minimax optimality, which is one of the most basic and well-studied notions. For a model class , we define the minimax regret via[Here, for any algorithm p=p1,…,pT, p denotes the expectation with respect to the observation process (rt,ot)∼(πt) and any randomization used by the algorithm, when is the true model.] = inf_p1,…,pTsup_∈p*(T), where pt=pt(·|t-1) is the algorithm's strategy for step t (a function of the history t-1), and where we write regret as (T) to make the dependence on T explicit. Intuitively, minimax regret asks how well the best algorithm can perform on a worst-case model (in ), possibly chosen with the algorithm in mind. Another way to say this is: For any algorithm, there exists a model in for which *(T)≥. We will say that an algorithm is minimax optimal if it achieves eq:minimax_regret up to absolute constants that do not depend on or T.

§.§.§ The Constrained

We now show how to lower bound the minimax regret for any model class in terms of the for . Instead of working with the quantity () appearing in prop:upper_main directly, it will be more convenient to work with a related quantity called the constrained , which we define for a parameter >0 as[We adopt the convention that the value of (,) is zero if there exists p such that the set of M∈ with _π∼p*M(π)(π)≤^2 is empty.] (,)= inf_p∈Δ(Π)sup_M∈*_π∼p*() - (π)|_π∼p*M(π)(π)≤^2, with ()sup_∈()(∪,).

This is similar to the definition for the we have been working with so far—which we will call the offset going forward—except that it places a hard constraint on the information gain as opposed to subtracting the information gain. Both quantities have a similar interpretation, since subtracting the information gain implicitly biases the max player towards models where the gain is small. Indeed, the offset can be thought of as a Lagrangian relaxation of the constrained , and always upper bounds it via (,)= inf_p∈Δ(Π)sup_M∈*_π∼p*() -(π)|*M(π)(π)≤^2 = inf_p∈Δ(Π)sup_M∈inf_γ≥0*_π∼p*() -(π) - γ**M(π)(π)-^2∨0 ≤inf_γ≥0inf_p∈Δ(Π)sup_M∈*_π∼p*() -(π) - γ**M(π)(π)-^2∨0= inf_γ≥0*(,) + γ^2 ∨0.

For the opposite direction, it is straightforward to show that () [γ^-1/2](). This inequality is lossy, but cannot be improved in general. That is, there are some classes for which the constrained DEC is meaningfully smaller than the offset . However, it is possible to relate the two quantities if we restrict to a “localized” sub-class of models that are not “too far” from the reference model .
Given a model and parameter α, define the localized subclass around via [α]() = * M∈: () ≥() - α. For all >0 and γ≥c_1·^-1, we have ()≤ c_3·sup_γ≥c_1^-1sup_∈()([](),), where c_2·γ^2, and c_1,c_2,c_3>0 are absolute constants.

For many “well-behaved” classes one can consider (e.g., multi-armed bandits and linear bandits), one has ([](),)≈(,) whenever (,)≈γ^2 (that is, localization does not change the complexity), so that lower bounds in terms of the constrained immediately imply lower bounds in terms of the offset . In general, this is not the case, and it turns out that it is possible to obtain tighter upper bounds that depend on the constrained by using a refined version of the algorithm. We refer to <cit.> for details and further background on the constrained .

§.§.§ Lower Bound

The main lower bound based on the constrained is as follows. Let c·1/√(T), where c>0 is a sufficiently small numerical constant. For all T such that the condition[The numerical constant here is not important.] []() ≥ 10 is satisfied, it holds that for any algorithm, there exists a model M∈ for which *(T)[]()·T.

prop:lower_main_expectation shows that for any algorithm and model class , the optimal regret must scale with the constrained in the worst case. As a concrete example, we will show in the sequel that for the multi-armed bandit with A actions, ()∝√(A), which leads to *√(AT). We mention in passing that by combining <ref> with prop:constrained_to_offset, we obtain the following lower bound based on the (localized) offset . Fix T∈. Then for any algorithm, there exists a model M∈ for which *(T)sup_γ√(T)sup_∈()([α(T,γ)](),), where α(T,γ)c·γ/T for an absolute constant c>0.

The is necessary and sufficient. To understand the significance of <ref> more broadly, we state but do not prove the following upper bound on regret based on the constrained , which is based on a refined variant of . Let be a finite class, and set c·√(log(/δ)/T), where c>0 is a sufficiently large numerical constant. Under appropriate technical conditions, there exists an algorithm that achieves *(T)[]()·T with probability at least 1-δ.

This matches the lower bound in <ref> up to a difference in the radius: we have ∝√(1/T) for the lower bound, and ∝√(log(/δ)/T) for the upper bound. This implies that for any class where log<∞, the constrained is necessary and sufficient for low regret. By the discussion in the prequel, a similar conclusion holds for the offset DEC (albeit, with a polynomial loss in rate). The interpretation of the log gap between the upper and lower bounds is that the captures the complexity of exploring the decision space, while the statistical capacity required to estimate the underlying model is a separate issue that it does not capture.

§.§.§ Proof of Proposition <ref>

Before proving prop:lower_main_expectation, let us give some background on a typical approach to proving lower bounds on the minimax regret for a decision making problem.

Anatomy of a lower bound. How should one go about proving a lower bound on the minimax regret in eq:minimax_regret? We will follow a general recipe which can be found throughout statistics, information theory, and decision making <cit.>. The approach will be to find a pair of models M and that satisfy the following properties: * Any algorithm with regret much smaller than the DEC must query substantially different decisions in Π depending on whether the underlying model is M or .
Intuitively, this means that any algorithm that achieves low regret must be able to distinguish between the two models. * M and are “close” in a statistical sense (typically via total variation distance or another f-divergence), which implies via change-of-measure arguments that the decisions played by any algorithm which interacts with the models only via observations (in our case, (πt,rt,ot)) will be similar for both models. In other words, the models are difficult to distinguish. One then concludes that the algorithm must have large regret on either M or .

To make this approach concrete, classical results in statistical estimation and supervised learning choose the models M and in a way that is oblivious to the algorithm under consideration <cit.>. However, due to the interactive nature of the decision making problem, the lower bound proof we present now will choose the models in an adaptive fashion.

Simplifications. Rather than proving the full result in <ref>, we will make the following simplifying assumptions: * There exists a constant C such that M(π)M'(π)≤C·M(π)M'(π) for all M, M'∈ and π∈Π. * Rather than proving a lower bound that scales with ()=sup_∈()(∪,), we will prove a weaker lower bound that scales with sup_∈(,). We refer to <cit.> for a full proof that removes these restrictions.

Preliminaries. We use the following technical lemma for the proof of prop:lower_main_expectation. Let (_1,_1),…,(_n,_n) be a sequence of measurable spaces, and let i=∏_t=1^i_t and i=⊗_t=1^i_t. For each i, let i(·|·) and i(·|·) be probability kernels from (i-1,i-1) to (_i,_i). Let and be the laws of X_1,…,X_n under X_i∼i(·|X_1:i-1) and X_i∼i(·|X_1:i-1) respectively. Then it holds that = _*∑_i=1^ni(·|X_1:i-1)i(·|X_1:i-1).

Fix T∈ and consider any fixed algorithm, which we recall is defined by a sequence of mappings p1,…,pT, where pt=pt(·|t-1). Let M denote the distribution over T for this algorithm when M is the true model, and let M denote the corresponding expectation. Viewed as a function of the history t-1, each pt is a random variable, and we can consider its expected value under the model M. To this end, for any model M∈, let ±M*1/T∑_t=1^Tpt∈Δ(Π) be the algorithm's average action distribution when M is the true model. Our aim is to show that we can find a model in for which the algorithm's regret is at least as large as the lower bound in eq:lower_main.

Let T∈, and fix a value >0 to be chosen momentarily. Fix an arbitrary model ∈ and set M = _M∈*_π∼*() - (π)|_π∼*M(π)(π)≤^2. The model M should be thought of as a “worst-case alternative” to , but only for the specific algorithm under consideration. We will show that the algorithm needs to have large regret on either M or . To this end, we establish some basic properties; let us abbreviate (π)=()-(π) going forward:

* For all models M, we have 1/TM*(T)=_π∼±*(π). So, to prove the desired lower bound, we need to show that either _π∼±*(π) or _π∼*(π) is large.

* By the definition of the constrained , we have _π∼*(π)≥(,) Δ, since by (<ref>), the model M is the best response to a potentially suboptimal choice . This is almost what we want, but there is a mismatch in models, since considers the model M while considers the model .

* Using the chain rule for KL divergence, we have = *∑_t=1^T_πt∼pt(πt)M(πt)≤ C·*∑_t=1^T_πt∼pt(πt)M(πt) = CT·_π∼*(π)M(π). To see why the first equality holds, we apply the chain rule to the sequence π1, z1, …, πT, zT with zt=(rt, ot). Let us use the bold notation t to refer to a random variable under consideration, and let zt refer to its realization.
Then we have= *∑_t=1^T(t|t-1, πt)(t|t-1, πt) + (πt|t-1(πt|t-1)= *∑_t=1^T(πt)M(πt)since conditionally on t-1, the law of πt does not depend on the model.We can now choose = c_1·1/√(CT), where c_1>0 is a sufficiently small numerical constant, to ensure that≤≤1/100.In other words, with constant probability, the algorithm can fail to distinguish M and . Finally, we will make use of the fact that since rewards are in *0,1, we have_π∼*(π)-(π)≤_π∼*M(π)(π)≤√(_π∼*M(π)(π))≤. Step 1 Define =*π∈Π|(π)≤Δ/10. Observe that_π∼±*(π)≥Δ/10·±(π∉)≥Δ/10·((π∉)-±) ≥Δ/10·((π∉)-1/10),since ±≤≤1/10 by the data-processing inequality and eq:lb_prelim2. Going forward, let us assume that_π∼[](π)≤Δ/10,or else we are done, by eq:lb_prelim0. Our aim is to show that under this assumption, (π∉)≥1/2, which will imply that _π∼±*(π)Δ via eq:stepone.Step 2 By adding the inequalities eq:lb_assn and eq:lb_prelim1, we have that() - ()≥_π∼*(π) - (π) - _π∼*(π)-(π)≥9/10Δ - _π∼*(π)-(π).In addition, by eq:lb_prelim3, we have _π∼*(π)-(π)≤, so that() - () ≥9/10Δ - .Hence, as long as ≤1/10Δ, which is implied by eq:lower_regularity, we have() - () ≥4/5Δ. Step 3 Observe that if π∈, then(π)-(π)_+≥()-(π)-Δ/10_+≥()-()-Δ/10_+≥7/10Δ,where we have used eq:steptwo. As a result, using eq:lb_prelim3, ≥_π∼*(π)-(π)_+≥7/10Δ·(π∈).Hence, since ≤Δ/10 by eq:lower_regularity, we haveΔ/10≥7/10Δ·(π∈),or (π∈)≤1/7. Combining this with eq:stepone gives1/TM*(T) = _π∼±*(π)≥Δ/10·(1 - 1/7 -1/10)≥Δ/20.Finishing up Note that since the choice of ∈ for this lower bound was arbitrary, we are free to chooseto maximize (,). §.§.§ Examples for the Lower Bound We now instantiate the lower bound in prop:lower_main_expectation for concrete model classes of interest. We begin by revisiting the examples at the beginning of the section. ex:bandit_gaussian Let us lower bound the constrained for the Gaussian bandit problem from ex:bandit_gaussian. Set (π)=(1/2,1), and let M_1,…,M_A⊆ be a sub-family of models with M_i(π)=((π),1), where (π)1/2+Δπ=i for a parameter Δ whose value will be chosen in a moment. Observe that for all i, _π∼p*M_i(π)(π)≤1/2Δ^2p(i) by eq:hellinger_gaussian_ub, and _π∼p*()-(π)=(1-p(i))Δ, so we can lower bound(,)= inf_p∈Δ(Π)sup_M∈*_π∼p*() - (π)|_π∼p*M(π)(π)≤^2 ≥inf_p∈Δ(Π)max_i* (1-p(i))Δ| p(i)Δ^2/2≤^2For any p, there exists i such that p(i)≤1/A. If we choose Δ=·√(2A), this choice for i will satisfy the constraint p(i)Δ^2/2≤^2, and we will be left with(,) ≥ (1-p(i))Δ≥√(A/2),since 1-p(i)≥1/2.Plugging this lower bound on the constrained into prop:lower_main_expectation yields*≥(√(AT)).Generalizing the argument above, we can prove a lower bound on the for any model classthat “embeds” the multi-armed bandit problem in a certain sense.Let a reference modelbe given, and suppose that a classcontains a sub-class M_1,…,M_N and collection of decisions π_1,…,π_N with the property that for all i: * M_i(π)(π)≤β^2·π=π_i.* ()-(π)≥α·π≠π_i.Then(,)α·≥β/√(N). The examples that follow can be obtained by applying this result with an appropriate sub-family.ex:bandit_structured_noise Recall the bandit-type problem with structured noise from ex:bandit_structured_noise, where we have =M_1,…,M_A, with M_i(π)=(1/2,1)π≠i+(3/4)π=i. If we set (π)=(1/2,1), then this family satisfies the conditions of prop:hard_family with α=1/4 and β^2=2. As a result, we have ()≥√(2/A), which yields*(A)if we apply <ref>. ex:full_info_lower Consider the full-information variant of the bandit setting in ex:full_info_lower. 
By adapting the argument in ex:bandit_gaussian, one can show that (), which leads to a lower bound of the form *√(T). Next, we revisit some of the structured bandit classes considered in sec:structured.

Consider the linear bandit setting in sec:linear, with =*↦θ,ϕ()|θ∈Θ, where Θ⊆_2^d(1) is a parameter set and ϕ:Π→^d is a fixed feature map that is known to the learner. Let be the set of all reward distributions with ∈ and 1-noise. Then ()√(d), which gives *√(dT).

Consider the Lipschitz bandit setting in sec:nonparametric, where is a metric space with metric , and = *f:→0,1|f is 1-Lipschitz w.r.t. . Let be the set of all reward distributions with ∈ and 1-noise. Let d>0 be such that the covering number for Π satisfies (,) ≥^-d. Then ()^2/d+2, which leads to * T^d+1/d+2. See <cit.> for further details.

§.§ and : Application to Tabular RL

In this section, we use the and meta-algorithm to provide regret bounds for tabular reinforcement learning. This will be the most complex example we consider in this section, and it showcases the full power of the approach for general decision making. In particular, the example will show how the can take advantage of the observations ot, in the form of trajectories. This will provide an alternative to the optimistic algorithm (UCB-VI) we introduced in sec:mdp, and we will build on this approach to give guarantees for reinforcement learning with function approximation in sec:rl.

Tabular reinforcement learning. When we view tabular reinforcement learning as a special case of the general decision making framework, is the collection of all non-stationary MDPs M=*, , _h_h=1^H, _h_h=1^H, d_1 (cf. <ref>), with state space =S, action space =A, and horizon H. The decision space = is the collection of all randomized, non-stationary Markov policies (cf. ex:rl). We assume that rewards are normalized such that ∑_h=1^Hr_h∈*0,1 almost surely (so that =*0,1). Recall that for each M∈, _h_h=1^H and _h_h=1^H denote the associated transition kernels and reward distributions, and d_1 is the initial state distribution.

Occupancy measures. The results we present make use of the notion of occupancy measures for an MDP M. Let ^M,π· denote the law of a trajectory evolving under MDP M and policy π. We define state occupancy measures via d^M,π_h(s)=^M,π(s_h=s) and state-action occupancy measures via d^M,π_h(s,a)=^M,π(s_h=s,a_h=a). Note that we have d^M,π_1(s)=d_1(s) for all M and π.

Bounding the for tabular RL. Recall that to certify a bound on the , we need to exhibit—given any parameter γ>0 and estimator —a distribution (or “strategy”) p such that sup_M∈_∼p*()-(π) -γ·M()()≤(,) for some upper bound (,). For tabular RL, we will choose p using an algorithm called Policy Cover Inverse Gap Weighting. As the name suggests, the approach combines the inverse gap weighting technique introduced in the multi-armed bandit setting with the notion of a policy cover—that is, a collection of policies that ensures good coverage on every state <cit.>. The algorithm consists of two steps. First, in eq:pc_igw1, we compute the collection of policies =π_h,s,a_h∈H,s∈S,a∈A that constitutes the policy cover. The basic idea here is that each policy in the policy cover should balance (i) regret and (ii) coverage, that is, ensure that all states are sufficiently reached, which means we are exploring. We accomplish this by using policies of the form π_h,s,a_π∈π_h(s,a)/2HSA + η(()-(π)) which—for each (s,a,h) tuple—maximize the ratio of the occupancy measure for (s,a) at layer h to the regret gap under . (The occupancy measures appearing in this ratio can be computed by the forward recursion sketched below.)
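A minimal sketch of that forward recursion, under our own array conventions, follows.

```python
import numpy as np

def state_action_occupancies(P, d1, policy):
    """d^{M,pi}_h(s, a) = Pr^{M,pi}(s_h = s, a_h = a), computed forward in h.

    P[h]: (S, A, S) transition kernel; policy[h]: (S, A) row-stochastic.
    """
    H = len(P)
    d_state = d1.copy()
    occupancies = []
    for h in range(H):
        d_sa = d_state[:, None] * policy[h]           # d_h(s, a) = d_h(s) * pi_h(a|s)
        occupancies.append(d_sa)
        d_state = np.einsum('sa,sat->t', d_sa, P[h])  # d_{h+1}(s') = sum_{s,a} d_h(s,a) P_h(s'|s,a)
    return occupancies
```

Maximizing the resulting ratio of occupancy to regret gap over policies, as in eq:pc_igw1, is what the occupancy-measure (dual) linear programming implementation discussed next carries out.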
This inverse gap weighted policy cover balances exploration and exploitation by trading off coverage with suboptimality. With the policy cover in hand, the second step of the algorithm computes the exploratory distribution p by simply applying inverse gap weighting to the elements of the cover and the greedy policy . The bound on the for the algorithm is as follows. Consider the tabular reinforcement learning setting with ∑_h=1^Hr_h∈0,1. For any γ>0 and ∈, the strategy with η=γ/21H^2 ensures that sup_M∈_∼p*()-(π) -γ·M()()H^3SA/γ, and consequently certifies that (,)H^3SA/γ. We remark that it is also possible to prove this bound non-constructively, by moving to the Bayesian and adapting the posterior sampling approach described in sec:dec_posterior. The strategy can be implemented in a computationally efficient fashion. Briefly, the idea is to solve eq:pc_igw1 by taking a dual approach and optimizing over occupancy measures rather than policies. With this parameterization, eq:pc_igw1 becomes a linear-fractional program, which can then be transformed into a standard linear program using classical techniques. How to estimate the model The bound on the we proved using the algorithm assumes that ∈, but in general, estimators from online learning algorithms such as exponential weights will produce t∈(). While it is possible to show that the same bound on the holds for ∈(), a slightly more complex version of the algorithm is required to certify such a bound. To run the algorithm as-is, we can use a simple approach to obtain a proper estimator ∈. Assume for simplicity that rewards are known, i.e., _h(s,a)=R_h(s,a) for all M∈. Instead of directly working with an estimator for the entire model M, we work with layer-wise estimators [1],…,[H]. At each round t, given the history *(i, ri,i)_i=1^t-1, the layer-h estimator [h] produces an estimate t_h for the true transition kernel _h. We measure performance of the estimator via layer-wise Hellinger error: ∑_t=1^T_t∼ptπt*_h(s_h,a_h)t_h(s_h,a_h). We obtain an estimation algorithm for the full model by taking t as the MDP that has _ht as the transition kernel for each layer h. This algorithm has the following guarantee. The estimator described above has ≤(log(H))·∑_h=1^H[h]. In addition, t∈. For each layer, we can obtain [h]≤(S^2A) using the averaged exponential weights algorithm, by applying the approach described in sec:online_estimation_hellinger to each layer. That is, for each layer, we obtain _ht by running averaged exponential weights with the loss t(P_h)=-log(P_h(s_h+1|s_h,a_h)). We obtain [h]≤(S^2A) with this approach because there are S^2A parameters for the transition distribution at each layer. A lower bound on the We state, but do not prove, a complementary lower bound on the for tabular RL. Let be the class of tabular MDPs with S≥2 states, A≥2 actions, and ∑_h=1^Hr_h∈*0,1. If H≥2log_2(S/2), then () √(HSA). Using prop:lower_main_expectation, this gives √(HSAT). §.§.§ Proof of prop:igw_tabular Toward proving prop:igw_tabular, we provide some general-purpose technical lemmas which will find further use in sec:rl. First, we provide a simulation lemma, which allows us to decompose the difference in value functions for two MDPs into errors between their per-layer reward functions and transition probabilities.
For any pair of MDPs M=(,) and =(,) with the same initial state distribution and ∑_h=1^Hr_h∈*0,1, we have *( π)-(π) ≤M()()≤M()()≤1/2η + η/2M()()∀η>0, and (π)- (π) = ∑_h=1^Hπ**(_h-_h)_h+1(s_h,a_h)+ ∑_h=1^Hπ*_r_h∼_h(s_h,a_h)r_h - _r_h∼_h(s_h,a_h)r_h≤∑_h=1^Hπ*_h(s_h,a_h)_h(s_h,a_h) + _h(s_h,a_h)_h(s_h,a_h). Next, we provide a “change-of-measure” lemma, which allows one to move between quantities involving an estimator and those involving another model M. Consider any MDP M and reference MDP which satisfy ∑_h=1^Hr_h∈*0,1. For all p∈Δ(Π) and η>0 we have _π∼p*() - (π)≤_π∼p*() - (π) + η_π∼p*M()() + 1/4η. and _π∼pπ*∑_h=1^H(s_h,a_h)(s_h,a_h) + (s_h,a_h)(s_h,a_h)≤ 8H_π∼p*M()(). Let M∈ be fixed. The main effort in the proof will be to bound the quantity _π∼p*() - () in terms of the quantity on the right-hand side of eq:com1, then apply change of measure (lem:change_of_measure). We begin with the decomposition _π∼p*() - (π) = _π∼p*() - (π)_(I) + () - ()_(II). For the first term (I), which may be thought of as exploration bias, we have _π∼p*() - (π) =∑_∈∪() - ()/λ + η(() - ())≤2HSA/η, where we have used that λ≥0. We next bound the second term (II), which entails showing that the distribution explores enough. We have () - () = () - () - (() - ()). We use the simulation lemma to bound ()- ()≤∑_h=1^H*_h(s_h,a_h)_h(s_h,a_h)+ _h(s_h,a_h)_h(s_h,a_h)=∑_h=1^H∑_s,a_h(s,a)_h(s,a), where _h(s,a) (s,a)(s,a) + (s,a)(s,a). Define _h(s,a) = _∼p[]π_h(s,a). Then, using the AM-GM inequality, we have that for any >0, ∑_h=1^H∑_s,a_h(s,a)*_h(s,a) = ∑_h=1^H∑_s,a_h(s,a)*_h(s,a)/_h(s,a)^1/2(_h(s,a))^2≤1/2∑_h=1^H∑_s,a(_h(s,a))^2/_h(s,a) + η'/2∑_h=1^H∑_s,a_h(s,a) (_h(s,a))^2= 1/2∑_h=1^H∑_s,a(_h(s,a))^2/_h(s,a) +η'/2∑_h=1^H_π∼pπ*(_h(s_h,a_h))^2. The second term is exactly the upper bound we want, so it remains to bound the ratio of occupancy measures in the first term. Observe that for each (h,s,a), we have _h(s,a)/_h(s,a)≤_h(s,a)/_h(s,a)·1/p()≤_h(s,a)/_h(s,a)*2HSA + η(() -(), where the second inequality follows from the definition of p and the fact that λ≤2HSA. Furthermore, since = _π∈π_h(s,a)/2HSA +η(()-(π)), and since ∈, we can upper bound by _h(s,a)/_h(s,a)*2HSA + η(() -() = 2HSA + η(() -(). As a result, we have ∑_h=1^H∑_s,a(_h(s,a))^2/_h(s,a) ≤∑_h=1^H∑_s,a_h(s,a)(2HSA + η(() -()) = 2H^2SA +ηH(() -()). Putting everything together and returning to eq:igw_tabular1, this establishes that () - () ≤H^2SA/η' +η'/2∑_h=1^H_π∼pπ*(_h(s_h,a_h))^2 +ηH/2η'(() -())-(() - ()). We set η' = ηH/2 so that the latter terms cancel and we are left with () - () ≤2HSA/η + ηH/4∑_h=1^H_π∼pπ*(_h(s_h,a_h))^2. Combining this with eq:igw_tabular0 and eq:igw_tabular0.5 gives _π∼p*() - (π)≤4HSA/η + ηH/4∑_h=1^H_π∼pπ*(_h(s_h,a_h))^2≤4HSA/η + ηH/2∑_h=1^H_π∼pπ*(s_h,a_h)(s_h,a_h)+(s_h,a_h)(s_h,a_h). We conclude by applying the change-of-measure lemma (lem:change_of_measure), which implies that for any η'>0, _π∼p*() - ()≤4HSA/η + (4η')^-1 + (4H^2η+η')·_π∼p*M()(). The result follows by choosing η=η'=γ/21H^2 (we have made no effort to optimize the constants here).§.§ Tighter Regret Bounds for the To close this section, we provide a number of refined regret bounds based on the , which improve upon <ref> in various situations. §.§.§ Guarantees Based on Decision Space Complexity In general, low estimation complexity (i.e., a small bound on or log) is not required to achieve low regret for decision making.
This is because our end goal is to make good decisions, so we can give up on accurately estimating the model in regions of the decision space that do not help to distinguish the relative quality of decisions. The following result provides a tighter bound that scales only with logΠ, at the cost of depending on the for a larger model class: () rather than . There exists an algorithm that, for any δ>0, ensures that with probability at least 1-δ, inf_γ>0*[γ](())·T + γ·log(Π/δ). Compared to <ref>, this replaces the estimation term log with the smaller quantity logΠ, and replaces () with the potentially larger quantity (()). Whether or not this leads to an improvement depends on the class . For multi-armed bandits, linear bandits, and convex bandits, is already convex, so this offers a strict improvement. For MDPs though, is not convex: Even for the simple tabular MDP setting where =S and =A, grows exponentially (()) in either H or S, whereas () is polynomial in all parameters. We mention in passing that this result is proven using a different algorithm from ; see <cit.> for more background.§.§.§ General Divergences and Randomized Estimators In this section we give a generalization of the algorithm that incorporates two extra features: general divergences and randomized estimators. General divergences The measures estimation error via the Hellinger distance M()(), which is fundamental in the sense that it leads to lower bounds on the optimal regret (<ref>). Nonetheless, for specific applications and model classes, it can be useful to work with alternative distance measures and divergences. For a non-negative function (“divergence”) ··, we define (,) = inf_p∈Δ()sup_M∈_∼p[()-(π) -γ·M]. This variant of the naturally leads to regret bounds in terms of estimation error under ··. Note that we use the notation M instead of, say, (π)M(π), to reflect the fact that the divergence may depend on M (resp. ) and π through properties other than M(π) (resp. (π)). Randomized estimators The basic version of assumes that at each round, the online estimation oracle provides a point estimate t. In some settings, it is useful to consider randomized estimators that, at each round, produce a distribution νt∈Δ() over models. For this setting, we further generalize the by defining (,ν) = inf_p∈Δ()sup_M∈_∼p[()-(π) -γ·_∼ν*M] for distributions ν∈Δ(). We additionally define ()=sup_ν∈Δ()(,ν). Algorithm A generalization of that incorporates general divergences and randomized estimators is given above on page alg:main_generalized. The algorithm is identical to with , with the only differences being that i) we play the distribution that solves the minimax problem eq:comp_general_randomized with the user-specified divergence ·· rather than squared Hellinger distance, and ii) we use the randomized estimate νt rather than a point estimate. Our performance guarantee for this algorithm depends on the estimation performance of the oracle's randomized estimates ν1,…,νT∈Δ() with respect to the given divergence ··, which we define as ∑_t=1^T_t∼pt_t∼νt*[πt]t. We have the following guarantee. The algorithm for General Divergences and Randomized Estimators with exploration parameter γ>0 guarantees that ≤()·T + γ· almost surely. Sufficient statistics and benefits of general divergences Many divergences of interest have the useful property that they depend on the estimated model only through a “sufficient statistic” for the model class under consideration.
Formally, there exists a sufficient statistic space and sufficient statistic :→Ψ with the property that we can write (overloading notation) MM' = (M)M', (π)=f^(M)(π), =π_(M) for all models M,M'. In this case, it suffices for the online estimation oracle to directly estimate the sufficient statistic by producing a randomized estimator νt∈Δ(), and we can write the estimation error as ∑_t=1^T_t∼pt_t∼νt*[πt]t. The benefit of this perspective is that for many examples of interest, since the divergence depends on the estimate only through ψ, we can derive bounds on that scale with logΨ instead of log. For example, in structured bandit problems, one can work with the divergence ()M()(()-())^2 which uses the mean reward function as a sufficient statistic, i.e., (M)=. Here, it is clear that one can achieve log, which improves upon the rate log for Hellinger distance, and recovers the specialized version of the algorithm we considered in sec:structured. Analogously, for reinforcement learning, one can consider value functions as a sufficient statistic, and use an appropriate divergence based on Bellman residuals to derive estimation guarantees that scale with the complexity log of a given value function class ; see <ref> for details. Does randomized estimation help? Note that whenever D is convex in the first argument, we have ()≤sup_∈()(,)=() (that is, the randomized DEC is never larger than the vanilla DEC), but it is not immediately apparent whether the opposite direction of this inequality holds, and one might hope that working with the randomized in eq:comp_general_randomized would lead to improvements over the non-randomized counterpart. The next result shows that this is not the case: Under mild assumptions on the divergence D, randomization offers no improvement. Let be any bounded divergence with the property that for all models M,M', and π∈Π, MM'≤ C*M + M'. Then for all γ>0, sup_(,) ≤_γ/(2C)(). Squared Hellinger distance is symmetric and satisfies Condition eq:triangle with C=2. Hence, writing () as shorthand for () with D=··, we obtain the following corollary. Suppose that ⊆*0,1. Then for all γ>0, () ≤sup_∈()(,) ≤sup_(,) ≤[γ/4](). This shows that for Hellinger distance—at least from a statistical perspective—there is no benefit to using the randomized compared to the original version. In some cases, however, strategies p that minimize (,ν) can be simpler to compute than strategies that minimize (,) for ∈(). §.§.§ Optimistic Estimation To derive stronger regret bounds that allow for estimation with general divergences, we can combine with a specialized estimation approach introduced by <cit.> (see also <cit.>), which we refer to as optimistic estimation. The results we present here are based on <cit.>. Let a divergence ·· be fixed. An optimistic estimation oracle is an algorithm which, at each step t, given t-1=(π1,r1,o1),…,(πt-1,rt-1,ot-1), produces a randomized estimator νt∈Δ(). Compared to the previous section, the only change is that for a parameter γ>0, we will measure the performance of the oracle via the optimistic estimation error, defined as ∑_t=1^T_t∼pt_t∼νt*t + γ^-1(()-(πt). This quantity is similar to eq:general_error, but incorporates a bonus term γ^-1(()-(πt)), which encourages the estimation algorithm to over-estimate the optimal value () for the underlying model, leading to a form of optimism.[Structured bandits] Consider any structured bandit problem with decision space Π, function class ⊆(Π→0,1), and =.
Let be the class =*M|∈, M(π) is 1- ∀π. To derive bounds on the optimistic estimation error, we can appeal to an augmented version of the (randomized) exponential weights algorithm which, for a learning rate parameter η>0, sets νt() ∝exp* -η*∑_i<t((πi)-ri)^2 - γ^-1(). For an appropriate choice of η, this method achieves []log + √(Tlog)/γ for D=·· <cit.>. Optimistic is an optimistic variant of , which we refer to as . At each timestep t, the algorithm calls the estimation oracle to obtain a randomized estimator νt using the data (π1,r1,o1),…,(πt-1,rt-1,ot-1) collected so far. The algorithm then uses the estimator to compute a distribution pt∈Δ(Π) and samples πt from this distribution. The main change relative to the version of on page alg:main_generalized is that the minimax problem in is derived from an “optimistic” variant of the tailored to the optimistic estimation error in <ref>. This quantity, which we refer to as the Optimistic , is defined for ν∈Δ() as (,ν) = inf_p∈Δ(Π)sup_M∈_π∼p_∼ν*() - (π)- γ·M. and () = sup_ν∈Δ()(,ν). The Optimistic is the same as the generalized in <ref>, except that the optimal value () in <ref> is replaced by the optimal value () for the (randomized) reference model ∼ν. This seemingly small change is the main advantage of incorporating optimistic estimation, and makes it possible to bound the Optimistic for certain divergences D for which the value of the generalized in <ref> would otherwise be unbounded. When the divergence D admits a sufficient statistic :→Ψ, for any distribution ν∈Δ(), if we define ν∈Δ(Ψ) via ν(ψ) = ν(M∈: (M)=ψ), we have (,ν) = inf_p∈Δ(Π)sup_M∈_π∼p_∼ν*() - (π) - γ·M. In this case, by overloading notation slightly, we may simplify the definition in eq:optimistic_max to () = sup_ν∈Δ(Ψ)(,ν). Regret bound for optimistic The following result shows that the regret of is controlled by the Optimistic and the optimistic estimation error for the oracle. ensures that ≤()·T + γ· almost surely. This regret bound has the same structure as that of thm:upper_general_distance, but the and estimation error are replaced by their optimistic counterparts. When does optimistic estimation help? When does the regret bound in <ref> improve upon its non-optimistic counterpart in <ref>? It turns out that for asymmetric divergences such as those found in the context of reinforcement learning, the regret bound in <ref> can be much smaller than the corresponding bound in <ref>; see <ref> for an example. However, for symmetric divergences such as Hellinger distance, we will now show that the result never improves upon <ref>. Given a divergence D, we define the flipped divergence, which swaps the first and second arguments, by MM. Assume that for all pairs of models M,∈(), we have ((π) - (π))^2 ≤^2·M for a constant >0. Then for all γ>0, []_3γ/2() - ^2/2γ≤() ≤[]_γ/2() + ^2/2γ. This result shows that the optimistic DEC with divergence D is equivalent to the generalized DEC in <ref>, but with the arguments to the divergence flipped. Thus, for symmetric divergences, the quantities are equivalent. In particular, we can combine <ref> with <ref> to derive the following corollary for Hellinger distance. Suppose that rewards are bounded in *0,1. Then for all γ>0, [2γ]() - 1/γ≤sup_(,) ≤[γ/6]() + 3/γ.
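Returning to the structured bandit example above, the augmented exponential weights update is straightforward to implement when the function class is finite. The sketch below is ours—the names, array shapes, and log-sum-exp stabilization are illustrative assumptions rather than part of the formal development:

```python
import numpy as np

def optimistic_exp_weights(F, played, rewards, eta, gamma):
    """Randomized estimator nu^t over a finite class of reward functions:
    nu(f) propto exp(-eta * [squared error on the data so far - gamma^{-1} * max_pi f(pi)]).

    F       : shape (N, K), F[j, k] = f_j(pi_k) for N functions, K decisions.
    played  : indices of the decisions pi^1, ..., pi^{t-1} played so far.
    rewards : observed rewards r^1, ..., r^{t-1}.
    """
    sq_err = ((F[:, played] - np.asarray(rewards)) ** 2).sum(axis=1)
    logits = -eta * (sq_err - F.max(axis=1) / gamma)  # optimism bonus favors large optimal value
    logits -= logits.max()                            # stabilize before exponentiating
    nu = np.exp(logits)
    return nu / nu.sum()

# Toy usage: three functions over four decisions, after two rounds of data.
F = np.array([[0.2, 0.8, 0.5, 0.1], [0.6, 0.4, 0.3, 0.9], [0.5, 0.5, 0.5, 0.5]])
print(optimistic_exp_weights(F, played=[1, 3], rewards=[0.7, 0.2], eta=1.0, gamma=10.0))
```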
For asymmetric divergences, in settings where there exists an estimation oracle for which the flipped estimation error^=∑_t=1^T_π∼pt_t∼νt*[πt]tis controlled, <ref> shows that to match the guarantee in thm:optimistic, optimism is not required, and it suffices to run the non-optimistic algorithm on pagealg:main_generalized. However, we show in <ref> that for certain divergences found in the context of reinforcement learning, estimation with respect to the flipped divergence is not feasible, yet working with the optimistic leads to meaningful guarantees. §.§ : Structural Properties In what follows, we state some structural properties of the , which are useful for calculating the value for specific model classes of interest.Consider any structured bandit problem with decision space Π, function class ⊆(Π→0,1), and =. Letbe the class=*M|∈, M(π) is 1- ∀π.Then, letting(,) =inf_p∈Δ(Π)sup_f∈_π∼p*f()-f(π) - γ(f(π)-(π))^2,we have[c_1γ]() ≤() ≤[c_2γ](),where c_1,c_2≥0 are numerical constants.Adding observations that are unrelated to the model under consideration never changes the value of the . In more detail, consider a model classwith observation space _1, and consider a class of conditional distributionsover a secondary observation space _2, where each D∈ has the form D(π)∈Δ(_2). For M∈ and D∈, let (M⊗D)(π) be the model that, given ∈, samples (r,o_1)∼M(π) and o_2∼D(π), then emits (r,(o_1,o_2)). Set⊗=*M⊗D|M∈,D∈.Then for all ∈ and D∈,(⊗,⊗D) = (,).This can be seen to hold by restricting the supremum in eq:dec to range over models of the form M⊗D.Passing observations through a channel never decreases the . Consider a class of modelswith observation space . Let ρ:→' be given, and define ρ∘M to be the model that, given decision π, samples (r,o)∼M(π), then emits (r,ρ(o)). Let ρ∘*ρ∘M|M∈. Then for all ∈, we have(,) ≤(ρ∘,ρ∘).This is an immediate consequence of the data processing inequality for Hellinger distance, which implies that []ρ∘M()[]ρ∘()≤M()().§.§ Deferred Proofs We first prove the in-expectation bound. By assumption, we have that∑_t=1^Tt(t) - ∑_t=1^Tt()≤.Taking expectations, ass:realizability implies that∑_t=1^T*(πt)t(πt)≤*.The bound now follows from lem:divergence_inequality.We now prove the high-probability bound using Lemma <ref>. Define Z_t=1/2(t(t) - t()). Applying lem:martingale_chernoff with the sequence (-Z_t)_t≤T, we are guaranteed that with probability at least 1-δ,∑_t=1^T-log*_t-1*e^-Z_t≤∑_t=1^TZ_t + log(δ^-1) = 1/2∑_t=1^T*t(t) - t() + log(δ^-1).Let t be fixed, and define abbreviate zt=(rt,ot). Let ν(·|π) be any (conditional) dominating measure for [t] and [], and observe that _t-1*e^-Z_t|πt = _t-1*√([t](zt|πt)/[](zt|πt))|πt= ∫[](z|πt)√([t](z|πt)/[](z|πt))(dz|πt)= ∫√([](z|πt)[t](z|πt))(dz|πt) = 1 - 1/2(πt)t(πt).Hence,_t-1*e^-Z_t = 1 - 1/2_t-1*(πt)t(πt)and, since -log(1-x)≥x for x∈*0,1, we conclude that1/2∑_t=1^T_t-1*(πt)t(πt)≤1/2∑_t=1^T*t(t) - t() + log(δ^-1). We first prove eq:simulation. Let X=∑_h=1^Hr_h. Since X∈*0,1 almost surely, we have*(π)-(π) = *MX-X≤M()()≤M()(). The final result now follows from the AM-GM inequality.We now prove eq:simulation_basic1. Fromlem:bellman_residual, we have(π)- (π)=∑_h=1^Hπ*_h(s_h,a_h) - r_h - _h+1(s_h+1)= ∑_h=1^Hπ**_h_h+1(s_h,a_h)- _h+1(s_h+1) +_r_h∼_h(s_h,a_h)r_h - _r_h∼_h(s_h,a_h)r_h= ∑_h=1^Hπ**(_h-_h)_h+1(s_h,a_h)+ ∑_h=1^Hπ*_r_h∼_h(s_h,a_h)r_h - _r_h∼_h(s_h,a_h)r_h≤∑_h=1^Hπ*_h(s_h,a_h)_h(s_h,a_h) + _h(s_h,a_h)_h(s_h,a_h),where we have used that _h+1(s)∈*0,1. We first prove eq:com0. 
For all η>0, we have_M∼μ_π∼p*() - (π)≤_M∼μ_π∼p*() - (π) + η_M∼μ_π∼p*M()() + 1/4η.We now prove eq:com1.Using lem:hellinger_pair, we have that for all h,*(s_h,a_h)(s_h,a_h) + *(s_h,a_h)(s_h,a_h)≤ 8M()().As a result,π*∑_h=1^H(s_h,a_h)(s_h,a_h) + (s_h,a_h)(s_h,a_h) ≤ 8HM()().Since this holds uniformly for all π, we conclude that_π∼pπ*∑_h=1^H(s_h,a_h)(s_h,a_h) + (s_h,a_h)(s_h,a_h)≤ 8H_π∼p*M()().§.§ Exercises Prove lem:divergence_inequality.In this exercise, we will prove prop:hellinger_randomized as follows: * Prove the first two inequalities.* Use properties of the Hellinger distance to show that for any π∈Π, μ∈Δ(), and ,_M∼μM()()≥1/4_M, M'∼μM()M'().Hint: start with the right-hand side and use symmetry and triangle inequality for Hellinger distance. * With the help of Part 2, show that for any ,(,) ≤sup_μ∈Δ()inf_p∈Δ()_∼p, M∼μ[()-(π) -γ/4_M'∼μM()M'()].* Argue that(,) ≤sup_ν∈Δ()sup_μ∈Δ()inf_p∈Δ()_∼p, M∼μ[()-(π) -γ/4_M'∼νM()M'()]. and conclude the third inequality in prop:hellinger_randomized.* Show that sup_(,) ≤sup_∈()[γ/4](,).In other words, the estimation oracle cannot significantly increase the value of the DEC by selecting models outside (). [Lower Bound on DEC for Tabular RL] We showed that for Gaussian bandits, (,) ≥√(A/2),for all 1/√(A) by considering a small sub-family models and explicitly computing the DEC for this sub-family. Show that ifis the set of all tabular MDPs with =S, =A, and ∑_h=1^Hr_h∈0,1,(,) ≳√(SA)for all 1/√(SA), as long as Hlog_A(S). [Structured Bandits with ReLU Rewards]We will show that structured bandits with ReLU rewards suffer from the curse of dimensionality. Let (x)=maxx,0 and take =_2^d(1) = *∈^d|_2≤1.Consider the class of value functions of the formf_θ() = (θ,-b),where θ∈Θ = ^d-1, is an unknown parameter vector and b∈[0,1] is a known bias parameter. Here ^d-1*v∈^d|*v=1 denotes the unit sphere. Let ={M_θ}_θ∈Θ, where for all π, M_θ(π) (f_θ(π), 1). We will prove that for all d≥16, there exists ∈ such that for all γ>0,(,) ≳e^d/8/γ∧ 1,for an appropriate choice of bias b. By slightly strengthening this result and appealing to eq:lower_main, it is possible to show that any algorithm must have * e^d/8. To prove eq:relu_lb, we will use the fact that for large d, arandom vector v chosen uniformly from the unit sphere is nearly orthogonal to any direction π. This fact is quantified as follows (see Ball '97):_v∼unif(^d-1)(π,v>α)≤exp*-α^2/2d.for any π with π=1. * Prove that for all π∈Π, v∈Θ, and any choice of b,max_π'∈f_v(π') - f_v(π)≥ (1-b)v,π≤bIn other words, instantaneous regret is at least (1-b) whenever the decision π does not align well with v.* Let (π)=(0,1). Show that for all ∈, v∈Θ, and for any choice of b,M_v()()≤1/2f_v^2()≤(1-b)^2/2v,>b,i.e. information is obtained by the decision-maker only if the decision π aligns well with v in the model M_v. * Show that(,)≥inf_p∈Δ()_v∼unif(^d-1)_∼p* (1-b) - (1-b)v,>b - γ(1-b)^2/2v,>b. * Set 1-b. Use (<ref>) and Part 3 above to argue that(',)≥ - exp(-d/8) - γ^2/2exp(-d/8). Conclude that for d≥ 8,(',)≥/2 - γ^2/2exp(-d/8) * Show that by choosing = e^d/8/6γ∧1/2 and recalling that b=1-, we get (<ref>).§ REINFORCEMENT LEARNING: FUNCTION APPROXIMATION AND LARGE STATE SPACES In this section, we consider the problem of online reinforcement learning with function approximation. The framework is the same as that of <ref> but, in developing algorithms, we no longer assume that the state and action spaces are finite/tabular, and in particular we will aim for regret bounds that are independent of the number of states. 
To do this, we will make use of function approximation—either directly modeling the transition probabilities for the underlying MDP, or modeling quantities such as value functions—and our goal will be to design algorithms that are capable of generalizing across the state space as they explore. This will pose challenges similar to that of the structured and contextual bandit settings, but we now face the additional challenge of credit assignment. Note that the online reinforcement learning framework is a special case of the general decision making setting in <ref>, but the algorithms we develop in this section will be tailored to the MDP structure.Recall (<ref>) that for reinforcement learning, each MDP M takes the formM=*, , _h_h=1^H, _h_h=1^H, d_1,whereis the state space,is the action space, _h:×→Δ() is the probability transition kernel at step h, _h:×→Δ() is the reward distribution, and d_1∈Δ(_1) is the initial state distribution. All of the results in this section will take =, and we will assume that ∑_h=1^Hr_h∈0,1 unless otherwise specified. §.§ Is Realizability Sufficient? For the frameworks we have considered so far (contextual and structured bandits, general decision making), all of the algorithms we analyzed leveraged the assumption of realizability, which asserts that we have a function class that is capable of modeling the underlying environment well. For reinforcement learning, there are various realizability assumptions one can consider: * Model realizability: We have a model classof MDPs that contains the true MDP .* Value function realizability: We have a classof state-action value functions (Q-functions) that contains the optimal functionfor the underlying MDP.* Policy realizability: We have a class Π of policies that contains the optimal policy .Note that model realizability implies value function realizability, which in turn implies policy realizability. Ideally, we would like to be able to say that whenever one of these assumptions holds, we can obtain regret bounds that scale with the complexity of the function class (e.g., log for model realizability, or log for value function realizability), but do not depend on the number of statesor other properties of the underlying MDP, analogous to the situation for statistical learning. Unfortunately, the following result shows that this is too much to ask for.For any S∈ and H∈, there exists a class of horizon-H MDPswith =S, =2, and log=log(S), yet any algorithm must have *√(minS,2^H·T). The interpretation of this result is that even if model realizability holds, any algorithm needs regret that scales with min,,2^H. This means additional structural assumptions on the underlying MDP —beyond realizability—are required if we want to obtain sample-efficient learning guarantees. Note that since this construction satisfies model realizability, the strongest form of realizability, it also rules out sample-efficient results for value function and policy realizability.In what follows, we will explore different structural assumptions that facilitate low regret for reinforcement learning with function approximation. Briefly, the idea will be to make assumptions that either i) allow for extrapolation across the state space, or ii) control the number of “effective” state distributions the algorithm can encounter. 
We will begin by investigating reinforcement learning with linear models, then explore a general structural property known as Bellman rank. <ref> is analogous to the impossibility result we proved for structured bandits (ex:structured_realizability), which is subsumed by the RL framework. That result required a large number of actions, while <ref> holds even when =2. There are many notions of realizability beyond those we consider above. For example, for value function approximation, one can assume that π∈ for all π, or assume that the class obeys certain notions of consistency with respect to the Bellman operator for . §.§ Linear Function Approximation Toward understanding the complexity of RL with function approximation, let us consider perhaps the simplest possible modeling approach: linear function approximation. A natural idea here is to assume linearity of the underlying Q-function corresponding to the true model M, generalizing the linear bandit setting in sec:structured: _h(s,a) = *ϕ(s,a),_h, ∀h∈H, where ϕ(s,a)∈_2^d(1) is a feature map that is known to the learner and _h∈ is an unknown parameter vector. Equivalently, we can define = * Q_h(s,a) = *ϕ(s,a),θ_h|θ_h∈_2^d(1) ∀h, and assume that ∈. This is called the model. Linearity is a strong assumption, and it is reasonable to imagine that this would be sufficient for low regret. Indeed, one might hope that using linearity, we can extrapolate the value of once we estimate it for a small number of states. Unfortunately, even for this very simple class of functions, it turns out that realizability is still insufficient. For any d∈ and H∈ sufficiently large, any algorithm for the model must have *min*2^(d), 2^(H). This contrasts with the situation for contextual bandits and linear bandits, where linear rewards were sufficient for low regret. The intuition is that, even though is linear, it might take a very long time to estimate the value for even a small number of states. That is, linearity of the optimal value function is not a useful assumption unless there is some kind of additional structure that can guide us toward the optimal value function to begin with. We mention in passing that <ref> can be proven by lower bounding the <cit.>. The Low-Rank MDP model prop:realizability_insufficient_linear implies that linearity of the optimal Q-function alone is not sufficient for sample-efficient RL. To proceed, we will make a stronger assumption, which asserts that the transition probabilities themselves have linear structure: For all s∈, a∈, and h∈H, we have _h(s'|s,a) = *ϕ(s,a),_h(s'), *r_h|s,a=*ϕ(s,a),_h. Here, ϕ(s,a)∈ is a feature map that is known to the learner, _h(s')∈^d is another feature map which is unknown to the learner, and _h∈_2^d(√(d)) is an unknown parameter vector. Additionally, for simplicity, we assume that ∑_s'∈ |_h(s')|≤√(d), which in particular holds in the tabular example below. As before, assume that both cumulative and individual-step rewards are in [0,1]. For the remainder of the subsection, we let denote the set of MDPs with these properties. The linear structure in <ref> implies that the transition matrix has rank at most d, thus facilitating (as we shall see shortly) information sharing and generalization across states, even when the cardinality of and is large or infinite.
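The rank property just mentioned is easy to verify numerically. The snippet below is a toy illustration of our own: the mixture construction (features and factors drawn as probability vectors) is merely one convenient way to make ⟨ϕ(s,a), μ_h(·)⟩ a valid transition kernel, not the general case.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 30, 5, 3
phi = rng.dirichlet(np.ones(d), size=S * A)  # phi(s, a): a distribution over d latent factors
mu = rng.dirichlet(np.ones(S), size=d)       # mu_j: a distribution over next states
P = phi @ mu                                 # (S*A) x S kernel; each row is a convex mixture
assert np.allclose(P.sum(axis=1), 1.0)       # rows are valid probability distributions
print(np.linalg.matrix_rank(P))              # at most d (prints 3 for this seed)
```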
For this reason, we refer to MDPs with this structure as low-rank MDPs <cit.>. Just as linear bandits generalize unstructured multi-armed bandits, the low-rank MDP model <ref> generalizes tabular RL, which corresponds to the special case in which d=||· ||, ϕ(s,a)=e_s,a, and (μ_h(s'))_s,a=_h(s'|s,a). Properties of low-rank MDPs The linear structure of the transition probabilities and mean rewards is a significantly more stringent assumption than linearity of _h(s,a) in (<ref>). Notably, it implies that Bellman backups of arbitrary functions are linear. For any low-rank MDP M∈ and any Q:×→ and any h∈[H], the Bellman operator is linear in ϕ: *_h Q(s,a) = *ϕ(s,a), θQM for some θQM∈^d. In particular, this implies that for any policy π=(π1,…,πH), the functions QM,π_h are linear in ϕ for every h. Finally, for Q:×→[0,1], it holds that θQM≤ 2√(d). As a special case, this lemma implies that for low-rank MDPs, _h is linear for all π. We have *_h Q(s,a)= *ϕ(s,a),_h + ∑_s'_h(s'|s,a)max_a'Q (s', a') = *ϕ(s,a),_h + ∑_s'*ϕ(s,a),_h(s')max_a' Q(s', a') = *ϕ(s,a), _h + ∑_s'_h(s')max_a'Q(s', a'). The second statement follows since QM,π_h=*_h QM,π_h+1. For the last statement, θQM≤_h + ∑_s'_h(s')Q(s')≤ 2√(d), since _h is a vector of distributions on . §.§.§ The Algorithm To provide regret bounds for the low-rank MDP model, we analyze an algorithm called (“Least Squares Value Iteration UCB”), which was introduced and analyzed in the influential paper of <cit.>. Similar to the algorithm we analyzed for tabular RL, the main idea behind the algorithm is to compute a state-action value function t with the optimistic property that _ht(s,a) ≥_h(s,a) for all s,a,h. This is achieved by combining the principle of dynamic programming with an appropriate choice of bonus to ensure optimism. However, unlike , the algorithm does not directly estimate transition probabilities (which is not feasible when is unknown), and instead implements approximate value iteration by solving a certain least squares objective. In more detail, for each episode t, the algorithm computes t_1,…,t_H through approximate dynamic programming. At layer h, given t_h+1, the algorithm computes a linear Q-function _ht(s,a)[]ϕ(s,a),_ht, by solving a least squares problem in which X=ϕ(s_h,a_h) is the feature vector and Y=r_h+max_a_h+1t(s_h+1,a) is the target/outcome. This is motivated by <ref>, which asserts that the Bellman backup *_h _h+1t(s,a) is linear. Given _ht, the algorithm forms the optimistic estimate _ht via th(s,a) = *_ht(s,a) + bth,δ(s,a) ∧1, where bh,δt(s,a) = √(R)*ϕ(s,a)_(Σ_ht)^-1, with Σ_ht=∑_i<tϕ(s_hi,a_hi) ϕ(s_hi,a_hi)^ + I, is an elliptic bonus analogous to the bonus used within LinUCB. With this, the algorithm proceeds to the next layer h-1. Once t is computed for every layer, the algorithm executes the optimistic policy t given by th(s)=_a∈th(s,a). The algorithm enjoys the following regret bound. For any δ>0, if we set R=c·d^2log(HT/δ) for a sufficiently large numerical constant c and ρ=2√(d), then the algorithm ensures that, with probability at least 1-δ, H√(d^3·Tlog(HT/δ)). §.§.§ Proof of prop:lsvi_ucb The starting point of our analysis for was lem:reg_decomp_optimistic, which states that it is sufficient to construct optimistic estimates {_1,…,_H} (i.e., _h ≤_h) such that the Bellman residuals ^M, *(_h- _h^M_h+1)(s_h, a_h) are small under the greedy (with respect to 's) policy . In order to control these residuals, we constructed an estimated model and defined empirical Bellman operators _h^ in terms of estimated transition kernels.
We then set _h to be the empirical Bellman backup _h^_h+1, plus an optimistic bonus term. In contrast, does not directly estimate the model. Instead, it performs regression with a target that is an empirical Bellman backup. As we shall see shortly, subtleties arise in the analysis of this regression step due to lack of independence. Technical lemmas for regressionRecall from <ref> that for any fixed Q:×→, M*r_hi + max_aQ(s_h+1i, a) | si_h,ai_h = *_h Q(s_hi,a_hi).However, for layer h, the regression problem within concerns a data-dependent function Q=_h+1t (with i<t), which is chosen as a function of all the trajectories τ1,…,τt-1. This dependence implies that the regression problem solved by is not of the type studied, say, in prop:iid_finite_class. Instead, in the language of sec:sl, the mean of the outcome variable is itself a function that depends on all the data. The saving grace here is that this dependence does not result in arbitrarily complex functions, which will allow us to appeal to uniform convergence arguments. In particular, for every h and t, t_h belongs to the class* (s,a)↦*θ, ϕ(s,a) + √(R)*ϕ(s,a)_(Σ)^-1∧1 : θ≤ 2√(d), σ_(Σ)≥ 1.To make use of this fact, we first state an abstract result concerning regression with dependent outcomes. Letbe an abstract set with <∞. Let x_1,…,x_T∈ be fixed, and for each g∈, let y_1(g),…,y_T(g)∈ be 1-subGaussian outcomes satisfying*y_i(g) | x_i = f_g(x_i)for f_g ∈⊆{f:→}.[The random variables {y_i(g)}_g∈ may be correlated.] In addition, assume that y_1(g),…,y_T(g) are conditionally independent given x_1,…,x_T. For any latent g∈, define the least-squares solution _g ∈_f∈∑_i=1^T (y_i(g) - f(x_i))^2. With probability at least 1-δ, simultaneously for all g∈,∑_i=1^T (_g(x_i) - f_g(x_i))^2 ≲log(/δ). Fix g∈. To shorten the notation, it is useful to introduce empirical norms f^2_T = 1/T∑_i=1^T f(x_i)^2 and empirical inner product f,f'_T= ∑_i=1^T f(x_i)f'(x_i) for f,f'∈. Optimality of _g implies that ∑_i=1^T (y_i(g) - _g(x_i))^2 ≤∑_i=1^T (y_i(g) - f_g(x_i))^2 which can be written succinctly (with a slight abuse of notation) as []Y_g - _g_T^2 ≤Y_g-f_g_T^2 for Y_g=(y_1(g),…,y_T(g)). This implies []_g-f_g_T^2 ≤ 2[]Y_g - f_g, _g-f_g_T. Dividing both sides by []_g-f_g_T and taking supremum over _g∈ leads to []_g-f_g_T≤ 2max_f∈[]Y_g - f_g, f-f_g/[]f-f_g_T_T. The random vector Y_g-f_g has independent zero-mean 1-subGaussian entries by assumption, while the multiplier f-f_g/[]f-f_g_T is simply a T-dimensional vector of Euclidean length √(T), for each f∈. Hence, each inner product in (<ref>) is a sub-Gaussian vector with variance proxy 1/T (see def:subGaussian). Thus, with probability at least 1-δ, the maximum on the right-hand side does not exceed C√(log (*/δ)/T) for an appropriate constant C. Taking the union bound over g and squaring both sides of (<ref>) yields the desired bound. We may now apply lem:uniform_regression to analyze the regression step of . With probability at least 1-δ, we have that for all t and h,∑_i<t*t_h(s_hi,a_hi)-*_h_h+1t(s_hi,a_hi)^2d^2log(HT/δ). Let t∈T and h∈H be fixed. To make the correspondence with lem:uniform_regression explicit, for the data (shi, ahi, sh+1i, rhi), we define x_i = ϕ(shi, ahi) and y_i(Q) = rhi + max_a Q(sh+1i, a), with Q∈ playing the role of the index g∈. With this, we have *y_i(Q) | x_i = ^M*rhi + max_a Q(sh+1i, a) | s_hi,a_hi = *_h Q(s_hi,a_hi) = ϕ(s_hi,a_hi), θQMwhich is linear in xi=ϕ(s_hi,a_hi), with the vector of coefficients θQM depending on Q. 
The regression problem is well-specified as long as we choose =*ϕ(s,a)↦ϕ(s,a), θ: θ≤ 2√(d) and as in (<ref>). While both of these sets are infinite, we can appeal to a standard covering number argument for an appropriate scale . The cardinalities of -discretized classes can be shown to be of size O(d) and O(d^2), respectively, up to factors logarithmic in 1/ and d. The statement follows after checking that discretization incurs a small price due to Lipschitzness with respect to parameters. Finally, we union bound over t and h. Establishing optimism The next lemma shows that closeness of the regression estimate to the Bellman backup on the data *(shi, ahi)_i<t translates into closeness at an arbitrary (s,a) pair as long as ϕ(s,a) is sufficiently covered by the data collected so far. This, in turn, implies that t_1,…,t_H are optimistic. Whenever the event in lem:lsvi_ucb_confidence occurs, we have that for all (s,a,h) and t∈[T], *_ht(s,a)-*_h_h+1t(s,a)√(d^2log(HT/δ))·*ϕ(s,a)_(Σ_ht)^-1 bh,δt(s,a). and _ht(s,a) ≥_h(s,a). Writing the Bellman backup, via lem:lsvi_bellman_linear, as *_h_h+1t(s,a) = *ϕ(s,a), θ_ht for some θ_ht∈^d with θ_ht_2≤2√(d), we have that *_ht(s,a)-*_h_ht(s,a) = *ϕ(s,a),_ht-θ_ht= *(Σ_ht)^-1/2ϕ(s,a),(Σ_ht)^1/2(_ht-θ_ht)≤[]ϕ(s,a)_(Σ_ht)^-1·[]_ht-θ_ht_Σ_ht. lem:lsvi_ucb_confidence then implies (<ref>), since []_ht-θ_ht_Σ_ht^2= (_ht-θ_ht)^*∑_i<tϕ(s_hi,a_hi) ϕ(s_hi,a_hi)^ + I(_ht-θ_ht) = ∑_i<t*t_h(s_hi,a_hi)-*_h_h+1t(s_hi,a_hi)^2 +[]_ht-θ_ht^2 and []_ht-θ_ht^2≲ d by (<ref>). To show (<ref>), we proceed by induction on t_h ≥_h, as in the proof of lem:ucb_vi_optimism. We start with the base case h=H+1, which has t_H+1 = _H+1≡ 0. Assuming t_h+1≥_h+1, we first observe that _h is monotone and _h_h+1t≥_h_h+1 = _h. Hence, _ht =_ht - _h_h+1 + _h_h+1≥_ht - _h_h+1 + _h≥ -bh,δt + _h and thus _ht+ bh,δt≥_h. Since _h≤ 1, the clipped version _ht also satisfies _ht≥_h. This, in turn, implies _ht≥_h. Finishing the proof With the technical results above established, the proof of <ref> follows fairly quickly. Let M be the true model. Condition on the event in lem:lsvi_ucb_confidence. Then, since is optimistic by lem:lsvi_ucb_optimism, we have that for each timestep t, () - (t)≤_s_1∼d_1*_1t(s_1) - (t)=∑_h=1^HMt*t_h(s_h,a_h) - *_h_h+1t(s_h,a_h) by lem:bellman_residual. Using the definition of t and lem:lsvi_ucb_optimism, we have ∑_h=1^HMt*t_h(s_h,a_h) - *_h_h+1t(s_h,a_h)√(R)∑_h=1^HMt**ϕ(s_h,a_h)_(Σ_ht)^-1. Summing over all timesteps t gives ≤√(R)∑_t=1^T∑_h=1^HMt**ϕ(s_h,a_h)_(Σ_ht)^-1. By Hoeffding's inequality, we have that with probability at least 1-δ, this is at most √(R)∑_t=1^T∑_h=1^H*ϕ(st_h,at_h)_(Σ_ht)^-1 + √(RHTlog(1/δ)). The elliptic potential lemma (lem:elliptic_potential) now allows us to bound ∑_t=1^T*ϕ(st_h,at_h)_(Σ_ht)^-1√(dTlog(T/d)) for each h, which gives the result.§.§ Bellman Rank In this section, we continue our study of value-based methods, which assume access to a class of state-action value functions such that ∈. In the prequel, we saw that the Low-Rank MDP assumption facilitates sample-efficient reinforcement learning when is a class of linear functions, but what if we want to learn with nonlinear functions such as neural networks? To this end, we will introduce a new structural property, Bellman rank <cit.>, which allows for sample-efficient learning with general classes , and subsumes a number of well-studied MDP families, including: * Low-Rank MDPs <cit.>* Block MDPs and reactive POMDPs <cit.>.
* MDPs with Linear Q^⋆ and V^⋆ <cit.>.* MDPs with low occupancy complexity <cit.>.* Linear mixture MDPs <cit.>.* Linear dynamical systems (LQR) <cit.>. We will learn about these examples in <ref>.Building intuition Bellman rank is a property of the underlying MDPwhich gives a way of controlling distribution shift—that is, how many times a deliberate algorithm can be surprised by a substantially new state distribution d^M,π when it updates its policy. To motivate the property, let us revisit the low-rank MDP model. Let M be a low-rank MDP with feature map ϕ(s,a)∈^d, and let Q_h(s,a)=*ϕ(s,a),_h be an arbitrary linear value function. Observe that since M is a Low-Rank MDP, we have *_hQ(s,a)=*ϕ(x,a),θ̃_hM,Q, where θ̃_hM,Q_h+∫_h(s')max_a'Q_h+1(s',a')ds'. As a result, for any policy π, we can write the Bellman residual for Q asMπ*Q_h(s_h,a_h)-r_h-max_aQ_h+1(s_h+1,a) = *Mπ*ϕ(s_h,a_h),_h-_h-θ̃_hM,Q= *_h(π),_h(Q),where _h(π)Mπ*ϕ(s_h,a_h)∈^d is an “embedding” that depends on π but not Q, and _h(Q)_h-_h-θ̃_hM,Q∈^d is an embedding that depends on Q but not π (both embeddings depend on M). Notably, if we view the Bellman residual as a huge Π× matrix _h(·,·)∈^Π× with _h(π,Q) Mπ*Q_h(s_h,a_h)-*r_h+max_aQ_h+1(s_h+1,a),then the property <ref> implies that (_h(·,·))≤d. Bellman rank is an abstraction of this property.[Bellman rank was originally introduced in the pioneering work of <cit.>. The definition of Bellman rank we present, which is slightly different from the original definition, is taken from the later work of <cit.>, and is often referred to as Q-type Bellman rank. ]For an MDP M with value function classand policy class Π, the Bellman rank is defined as(M)=max_h∈H(_h(π,Q)_π∈Π,Q∈).Equivalently, Bellman rank is the smallest dimension d such that for all h, there exist embeddings _h(π),_h(Q)∈^d such that_h(π,Q)=*_h(π),_h(Q)for all π∈Π and Q∈.The utility of Bellman rank is that the factorization in eq:bellman_rank_factorization gives a way of controlling distribution shift in the MDP M, which facilitates the application of standard generalization guarantees for supervised learning/estimation. Informally, there are only d effective directions in which we can be “surprised” by the state distribution induced by a policy π, to the extent that this matters for the classunder consideration; this property was used implicitly in the proof of the regret bound for .As we will see, low Bellman rank is satisfied in many settings that go beyond the Low-Rank MDP model.§.§.§ The AlgorithmWe now present an algorithm, <cit.>, which attains low regret for MDPs with low Bellman rank under the realizability assumption that∈.Like many of the algorithms we have covered, is based on confidence sets and optimism, though the way we will construct the confidence sets and implement optimism is new. PAC versus regret For technical reasons, we will not directly give a regret bound for . Instead, we will prove a PAC (“probably approximately correct”) guarantee. For PAC, the algorithm plays for T episodes, then outputs a final policy , and its performance is measured via() - ().That is, instead of considering cumulative performance as with regret, we are only concerned with final performance. For PAC, we want to ensure that () - ()≤ for some ≪ 1 using a number of episodes that is polynomial in ^-1 and other problem parameters. 
This is an easier task than achieving low regret: If we have an algorithm that ensures that *√(CT) for some problem-dependent constant C, we can turn this into an algorithm that achieves PAC error using *C/^2 episodes via online-to-batch conversion. In the other direction, if we have an algorithm that achieves PAC error using *C/^2 episodes, one can use this to achieve *C^1/3T^2/3 using a simple explore-then-commit approach; this is lossy, but is the best one can hope for in general. Algorithm overview proceeds in K iterations, each of which consists of n episodes. The algorithm maintains a confidence set k⊆ of value functions (generalizing the confidence sets we constructed for structured bandits in <ref>), with the property that ∈ with high probability. Each iteration k consists of two parts: * Given the current confidence set k, the algorithm computes a value function Qk and corresponding policy πkπQk that is optimistic on average: Qk=_Q∈k_s_1∼d_1*Q_1(s_1,(s_1)). The main novelty here is that we are only aiming for optimism with respect to the initial state distribution. * Using the new policy πk, the algorithm gathers n episodes and uses these to compute estimators _hk(Q)_h∈H which approximate the Bellman residual _h(πk,Q) for all Q∈. Then, in eq:bilinucb_confidence_set, the algorithm computes the new confidence set k+1 by restricting to value functions for which the estimated Bellman residual is small for π1,…,πk. Eliminating value functions with large Bellman residual is a natural idea, because we know from the Bellman equation that has zero Bellman residual. Main guarantee The main result for this section is the following PAC guarantee for . Suppose that has Bellman rank d and ∈. For any >0 and δ>0, if we set nH^3dlog(/δ)/^2, KHdlog(1+n/d), and β∝ c·Klog+log(HK/δ)/n, then learns a policy such that ()-() ≤ with probability at least 1-δ, and does so using *H^4d^2log(/δ)/^2 episodes. This result shows that low Bellman rank suffices to learn a near-optimal policy, with sample complexity that only depends on the rank d, the horizon H, and the capacity log for the value function class; this reflects that the algorithm is able to generalize across the state space, with d and log controlling the degree of generalization. The basic principles at play are: * By choosing Qk optimistically, we ensure that the suboptimality of the algorithm is controlled by the Bellman residual for Qk, on-policy, similar to what we saw for and . An important difference compared to the algorithm we covered in the previous section is that is only optimistic “on average” with respect to the initial state distribution, i.e., _s_1∼d_1*Qk_1(s_1,πQk(s_1))≥(), while aims to find a value function that is uniformly optimistic for all states and actions. * The confidence set construction eq:bilinucb_confidence_set explicitly removes value functions that have large Bellman residual on the policies encountered so far. The key role of the Bellman rank property is to ensure that there are only (d) “effective” state distributions that lead to substantially different values for the Bellman residual, which means that eventually, only value functions with low residual will remain. Interestingly, the Bellman rank property is only used for analysis, and the algorithm does not explicitly compute or estimate the factorization. Regret bounds The algorithm can be lifted to provide a regret guarantee via an explore-then-commit strategy: Run the algorithm for T_0 episodes to learn a -optimal policy, then commit to this policy for the remaining rounds.
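This explore-then-commit lift is simple enough to state in code. The sketch below is schematic only: `pac_learn` and `env` are hypothetical interfaces of our own, and the split T_0 ≈ C^(1/3) T^(2/3) balances the two regret contributions.

```python
def explore_then_commit(pac_learn, env, T, C):
    """Convert a PAC learner into a regret algorithm (schematic sketch).

    With eps(T0) ~ sqrt(C / T0), total regret is roughly
    T0 * 1 + (T - T0) * sqrt(C / T0); the two terms balance at
    T0 ~ C^(1/3) * T^(2/3), up to constants.
    """
    T0 = min(T, int(round(C ** (1 / 3) * T ** (2 / 3))))
    policy = pac_learn(env, num_episodes=T0)                      # exploration phase
    total = sum(env.run_episode(policy) for _ in range(T - T0))   # commit phase
    return policy, total
```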
It is a simple exercise to show that by choosing T_0 appropriately, this approach gives ≤* (H^4d^2log(/δ))^1/3·T^2/3. Under an additional assumption known as Bellman completeness, it is possible to attain √(T) with a variant of this algorithm that uses a slightly different confidence set construction <cit.>.§.§.§ Proof of prop:bilinucb_regret Recall from the definition of Bellman rank that there exist embeddings _h(π),_h(Q)∈^d such that for all π∈Π and Q∈, _h(π,Q) = *_h(π),_h(Q). We assume throughout this proof that *_h(π),*_h(Q)_2≤1 for simplicity. Technical lemmas Before proceeding, we state two technical lemmas. The first lemma establishes validity for the confidence set k constructed by . For any δ>0, if we set β = c·Klog+log(HK/δ)/n, where c>0 is a sufficiently large absolute constant, then with probability at least 1-δ, for all k∈K: * All Q∈k have ∑_i<k(_h(πi,Q))^2 β∀h∈H. * ∈k. Using Hoeffding's inequality and a union bound (lem:hoeffding), we have that with probability at least 1-δ, for all k∈K, h∈H, and Q∈, *_hk(Q)-_h(πk,Q)≤ C·√(log(HK/δ)/n), where C is an absolute constant. To prove Part 1, we observe that for all k, using the AM-GM inequality, we have that for all Q∈, ∑_i<k(_h(πi, Q))^2≤ 2∑_i<k(i_h(Q))^2 + 2∑_i<k(_h(πi,Q) - i_h(Q))^2. For Q∈k, the definition of k implies that ∑_i<k(i_h(Q))^2≤β, while eq:bilinucb_conv implies that ∑_i<k(_h(πi,Q) - i_h(Q))^2β, which gives the result. For Part 2, we similarly observe that for all k, h and Q∈, ∑_i<k(i_h(Q))^2≤ 2∑_i<k(_h(πi,Q))^2 + 2∑_i<k(_h(πi,Q) - i_h(Q))^2. Since has _h(π,)=0 ∀π by Bellman optimality, we have ∑_i<k(i_h())^2 ≤ 2∑_i<k(_h(πi,) - i_h())^2 ≤ 2C^2log(HK/δ)/n, where the last inequality uses eq:bilinucb_conv. It follows that as long as β≥2C^2log(HK/δ)/n, we will have ∈k for all k. The next result shows that whenever the event in the previous lemma holds, the value functions constructed by are optimistic. Whenever the event in lem:bilinucb_confidence occurs, the following properties hold: * Define Σ_hk=∑_i<k_h(πi)_h(πi)^. For all k∈K, all Q∈k satisfy *_h(Q)_Σ_hk^2β. * For all k, Qk is optimistic in the sense that _s_1∼d_1*Qk_1(s_1,(s_1))≥_s_1∼d_1[]_1(s_1,(s_1)) = (). For Part 1, recall that by the Bilinear class property, we can write _h(πk,Q)=*(πk),(Q), so that eq:bilinucb_confidence1 implies that *(Q)_Σ_hk = ∑_i<k*(πi),(Q)^2 = ∑_i<k(_h(πi,Q))^2 β. For Part 2, we observe that for all k, since ∈k, we have _s_1∼d_1*Qk_1(s_1,(s_1)) = sup_Q∈_s_1∼d_1[]Q_1(s_1,(s_1))≥_s_1∼d_1[]_1(s_1,(s_1))= (). Proving the main result Equipped with the lemmas above, we prove <ref>. We first prove a generic bound on the suboptimality of each policy πk for k∈K. Let us condition on the event in lem:bilinucb_confidence, which occurs with probability at least 1-δ. Whenever this event occurs, lem:bilinucb_good_event implies that Qk is optimistic, so we can bound () - (πk)≤_s_1∼d_1*Q_1k(s_1,πQk(s_1) - (πk) = ∑_h=1^Hπk*Q_hk(s_h,a_h) - r_h - max_a∈Q_h+1k(s_h+1,a)= ∑_h=1^H*_h(πk),_h(Qk), where the first equality uses the Bellman residual decomposition (lem:bellman_residual), and the second inequality uses the Bellman rank assumption. For any λ≥0, using Cauchy-Schwarz, we can bound ∑_h=1^H*_h(πk),_h(Qk) ≤∑_h=1^H*_h(πk)_(λI+Σ_hk)^-1*_h(Qk)_λI+Σ_hk. For each h∈H, applying the bound in eq:bilinucb_elliptic_bound gives *_h(Qk)_λI+Σ_hk≤√(λ*_h(Qk)_2^2 + β)≤λ^1/2+β^1/2, where we have used that *_h(Qk)_2≤1 by assumption.
This allows us to bound ∑_h=1^H*_h(πk),_h(Qk)(λ^1/2+β^1/2)·∑_h=1^H*_h(πk)_(λI+Σ_hk)^-1. If we can find a policy πk for which the right-hand side of eq:bilinucb_intermediate is small, this policy will be guaranteed to have low regret. The following lemma shows that such a policy is guaranteed to exist. For any λ>0, as long as K≥Hd log*1+λ^-1K/d, there exists k∈K such that *_h(πk)_(λI+Σ_hk)^-1^2 Hd log*1+λ^-1K/d/K∀h∈H. We choose λ=β, which implies that it suffices to take KHdlog(1+n/d) to satisfy the condition in lem:bilinucb_potential. By choosing k to satisfy eq:bilinucb_inverse and plugging this bound into eq:bilinucb_intermediate, we conclude that the policy πk has () - (πk) H√(β·Hdlog(1+β^-1K/d)/K)*H^3/2√(dlog(/δ)/n) as desired. Finally, we need to argue that the policy returned by the algorithm is at least as good as πk. This is straightforward and we only sketch the argument: By Hoeffding's inequality and a union bound, we have that with probability at least 1-δ, for all k, *(πk)-k√(log(K/δ)/n), which implies that ()(πk) - √(log(K/δ)/n). The error term here is of lower order than eq:bilinucb_final. Deferred proofs To finish up, we prove <ref>. To prove the result, we need a variant of the elliptic potential lemma (lem:elliptic_potential). Let a_1,…,a_T∈^d satisfy a_t_2≤ 1 for all t∈[T]. Fix λ>0, and let V_t = λI + ∑_s<t a_s a_s^. Then ∑_t=1^T log(1 + a_t^2_V_t^-1) ≤ d log*1+λ^-1T/d. For any λ>0, applying this result for each h∈H and summing gives ∑_k=1^K∑_h=1^Hlog[]1 + _h(πk)^2_(λI+Σ_hk)^-1≤ Hd log*1+λ^-1K/d. This implies that there exists k such that ∑_h=1^Hlog[]1 + _h(πk)^2_(λI+Σ_hk)^-1≤Hd log*1+λ^-1K/d/K, which means that for all h∈H, log[]1 + _h(πk)^2_(λI+Σ_hk)^-1≤Hd log*1+λ^-1K/d/K, or equivalently: _h(πk)^2_(λI+Σ_hk)^-1≤exp*Hd log*1+λ^-1K/d/K-1. As long as K≥Hd log*1+λ^-1K/d, using that e^x≤1+2x for 0≤x≤1, we have _h(πk)^2_(λI+Σ_hk)^-1≤ 2Hd log*1+λ^-1K/d/K.§.§.§ Bellman Rank: Examples We now highlight concrete examples of models with low Bellman rank. We start with familiar examples, then introduce new models that allow for nonlinear function approximation. [Tabular MDPs] If M is a tabular MDP with ≤S and ≤A, we can write the Bellman residual for any function Q and policy π as _h(π,Q)= Mπ*Q_h(s_h,a_h)-*r_h+max_aQ_h+1(s_h+1,a)= ∑_s,a_h(s,a)M*Q_h(s,a)-*r_h+max_a'Q_h+1(s_h+1,a')|s_h=s,a_h=a. It follows that if we define _h(π)=*_h(s,a)_s∈,a∈∈^SA and _h(Q)=*M*Q_h(s,a)-*r_h+max_a'Q_h+1(s_h+1,a')|s_h=s,a_h=a_s∈,a∈∈^SA, we have _h(π,Q) = *_h(π),_h(Q). This shows that (M)≤SA. [Low-Rank MDPs] The calculation in eq:low_rank_mdp_bellman shows that by choosing _h(π)Mπ*ϕ(s_h,a_h)∈^d and _h(Q)_h-_h-θ̃_hM,Q∈^d, any Low-Rank MDP M has (M)≤d. When specialized to this setting, the regret of is worse than that of (though still polynomial in all of the problem parameters). This is because is a more general algorithm, and does not take advantage of an additional feature of the Low-Rank MDP model known as Bellman completeness: If M is a Low-Rank MDP, then for all Q∈, we have _hQ_h+1∈_h+1. By using a more specialized relative of that incorporates a modified confidence set construction to exploit completeness, it is possible to match and actually improve upon the regret of <cit.>. We now explore Bellman rank for some MDP families that have not already been covered. [Low Occupancy Complexity] An MDP M is said to have low occupancy complexity if there exists a feature map (s,a)∈^d such that for all π, there exists _h∈^d such that _h(s,a) = *(s,a),_h. Note that neither nor is assumed to be known to the learner.
If M has low occupancy complexity, then for any value function Q and policy π, we have _h(π,Q)= Mπ*Q_h(s_h,a_h)-*r_h+max_aQ_h+1(s_h+1,a)= ∑_s,a_h(s,a)M*Q_h(s,a)-*r_h+max_a'Q_h+1(s_h+1,a')|s_h=s,a_h=a=∑_s,a*(s,a),_hM*Q_h(s,a)-*r_h+max_a'Q_h+1(s_h+1,a')|s_h=s,a_h=a= *_h,∑_s,a(s,a)M*Q_h(s,a)-*r_h+max_a'Q_h+1(s_h+1,a')|s_h=s,a_h=a. It follows that if we define _h(π)=_h and _h(Q)=∑_s,a(s,a)M*Q_h(s,a)-*r_h+max_a'Q_h+1(s_h+1,a')|s_h=s,a_h=a, then _h(π,Q) = *_h(π),_h(Q), which shows that (M)≤d. This setting subsumes tabular MDPs and low-rank MDPs, but is substantially more general. Notably, low occupancy complexity allows for nonlinear function approximation: As long as the occupancies satisfy <ref>, the Bellman rank is at most d for any class , which might consist of neural networks or other nonlinear models. We close with two more examples. [LQR] A classical problem in continuous control is the Linear Quadratic Regulator, or LQR. Here, we have ==^d, and states evolve via s_h+1=s_h + a_h + ζ_h, where ζ_h∼(0,I), and s_1∼(0,I). We assume that rewards have the form[LQR is typically stated in terms of losses; we negate because we consider rewards.] r_h = -s_h^QMs_h - a_h^RMa_h for matrices QM,RM0. A classical result, dating back to Kalman, is that the optimal controller for this system is a linear mapping of the form πM,h(s) = KM_hs, and that the value function (s,a) = (s,a)^P_hM(s,a) is quadratic. Hence, it suffices to take to be the set of all quadratic functions in (s,a). With this choice, it can be shown that (M)≤d^2+1. The basic idea is to choose _h(π)=(vec(Mπ*s_hs_h^), 1) and use the quadratic structure of the value functions. [Linear /] In prop:realizability_insufficient_linear, we showed that for RL with linear function approximation, assuming only that is linear is not enough to achieve low regret. It turns out that if we assume in addition that is linear, the situation improves. Consider an MDP M. Assume that there are known feature maps ϕ(s,a)∈^d and ψ(s')∈^d such that _h(s,a)=*ϕ(s,a),_h, _h(s)=*ψ(s),_h. Let =*Q| Q_h(s,a)=*ϕ(s,a),θ_h: θ_h∈^d ,∃ w max_a∈*ϕ(s,a),θ_h=*ψ(s),w ∀s. Then (M) ≤ 2d. We will not prove this result, but the basic idea is to choose _h(π) = Mπ*(ϕ(s_h,a_h),ψ(s_h+1)). See <cit.> for further examples. §.§.§ Generalizations of Bellman Rank While we gave eq:bellman_rank_factorization as the definition for Bellman rank, there are many variations on the assumption that also lead to low regret. One well-known variant is V-type Bellman rank <cit.>, which asserts that for all π∈Π and Q∈, ^M,πM_s_h+1|s_h,a_h∼(s_h)* Q_h(s_h,a_h) -r_h - max_a∈Q_h+1(s_h+1,a)=*_h(π),_h(Q). This is the same as the definition eq:bellman_rank_factorization (which is typically referred to as Q-type Bellman rank), except that we take a_h=(s_h) instead of a_h=π(s_h).[The name “V-type” refers to the fact that eq:v_type only depends on Q through the induced V-function s↦Q_h(s,(s)), while eq:bellman_rank_factorization depends on the full Q-function, hence “Q-type”.] With an appropriate modification, can be shown to give sample complexity guarantees that scale with V-type Bellman rank instead of Q-type. This definition captures meaningful classes of tractable RL models that are not captured by the Q-type definition eq:bellman_rank_factorization, with a canonical example being Block MDPs. [Block MDP] The Block MDP <cit.> is a model in which the (“observed”) state space is large/high-dimensional, but the dynamics are governed by a (small) latent state space .
Formally, a Block MDP M=(,,P,R,H,d_1)is defined based on an (unobserved) latent state space , with z_h denoting the latent state at layer h. We first describe the dynamics for the latent space. Given initial latent state z_1, the latent states evolve viaz_h+1∼_h(z_h,a_h).The latent state z_h is not observed. Instead, we observes_h∼_h(z_h),where _h:→Δ() is an emission distribution with the property that (q_h(z))∩(q_h(z'))=∅ if z≠ z'. This property (decodability) ensures that there exists a unique mapping _h:→ that maps the observed state s_h to the corresponding latent state z_h. We assume that _h(s,a)=_h(_h(s),a), which implies that optimal policydepends only on the endogenous latent state, i.e. πM,h(s)=πM,h(_h(s)).The main challenge of learning in Block MDPs is that the decoderis not known to the learner in advance. Indeed, given access to the decoder, one can obtain regret (H,,)·√(T) by applying tabular reinforcement learning algorithms to the latent state space. In light of this, the aim of the Block MDP setting is to obtain sample complexity guarantees that are independent of the size of the observed state space , and scale as (, , H,log), whereis an appropriate class of function approximators (typically either a value function classcontainingor a class of decoders Φ that attempts to modeldirectly).We now show that the Block MDP setting admits low V-type Bellman rank. Observe that we can write^M,πM_s_h+1|a_h,s_h∼(s_h)* Q_h(s_h,(s_h)) -r_h - max_a∈Q_h+1(s_h+1,(s_h+1)) =∑_z∈_h(z)_s∼_h(z)M_s_h+1|s_h,a_h∼(s_h)* Q_h(s_h,(s_h)) -r_h - max_a∈Q_h+1(s_h+1,(s_h+1)) .This implies that we can take_h(π) = *_h(z)_z∈and_h(Q) = *_s∼_h(z)M_s_h+1|s_h,a_h∼(s_h)* Q_h(s_h,(s_h)) -r_h - max_a∈Q_h+1(s_h+1,(s_h+1)) _z∈so that the V-type Bellman rank is at most . This means that as long ascontains , we can obtain sample complexity guarantees that scale withrather than , as desired.There are a number of variants of Bellman rank, including Bilinear rank <cit.> and Bellman-Eluder dimension <cit.>, which subsume and slightly generalize both Bellman rank definitions.§.§.§ for Bellman Rank An alternative to the method is to appeal to the meta-algorithm and the . The following result <cit.> shows that the is always bounded for classes with low Bellman rank. For any class of MDPsfor which all M∈ have Bellman rank at most d, we have() H^2d/γ. This implies that that meta-algorithm has *H√(dT·) whenever we have access to a realizable model class with low Bellman rank. As a special case, for any finite class , using averaged exponential weights as an estimation oracle gives*H√(dTlog).We will not prove prop:dec_bellman_rank, but interested readers can refer to <cit.>. The result can be proven using two approaches, both of which build on the techniques we have already covered. The first approach is to apply a more general version of the algorithm from sec:dec_tabular, which incorporates optimal design in the space of policies. The second approach is to move to the Bayesian and appeal to posterior sampling, as in sec:dec_posterior.Value-based guarantees via optimistic estimation In general, the model estimation complexity log in <ref> can be arbitrarily large compared to the complexity log for a realizable value function class (consider the low-rank MDP—since μ is unknown, it is not possible to construct a small model class ). 
To derive value-based guarantees along the lines of what achieves in <ref>, a natural approach is to replace the Hellinger distance appearing in the with a divergence tailored to value function iteration, following the development in <ref>. Once such choice is the divergenceQM = ∑_h=1^H*Mπ*Q_h(s_h,a_h)-*r_h+max_aQ_h+1(s_h+1,a)^2,which measures the squared bellman residual for an estimated value function under M. With this choice, we appeal to the optimistic algorithm () from <ref>. One can show that the optimistic for this divergence is bounded as () H·d/γ.This implies that , with an appropriate choice of estimation algorithm tailored to ··, achieves* (H^2dlog)^1/2T^3/4.Note that due to the asymmetric nature of ··, it is critical to appeal to optimistic estimation to derive this result. Indeed, the non-optimistic generalized DEC [] does not enjoy a polynomial bound. See <cit.> for details. plain § TECHNICAL TOOLS§.§ Probabilistic Inequalities §.§.§ Tail Bounds with Stopping Times For random variables Z_1,…,Z_T taking values in [a,b] almost surely, with probability at least 1-δ,1/T'∑_i=1^T' Z_i - *Z≤ (b-a)√(log(T/δ)/2T')∀1≤T'≤T.As a consequence, for any random variable τ∈T with the property that for all t∈T, τ≤t is a measurable function of Z_1,…,Z_t-1 (τ is called a stopping time), we have that with probability at least 1-δ,1/τ∑_i=1^τ Z_i - *Z≤ (b-a)√(log(T/δ)/2τ).lem:hoeffding states that for any fixed T'∈T, with probability at least 1-δ,1/T'∑_i=1^T' Z_i - *Z≤ (b-a)√(log(T/δ)/2T').eq:hoeffding_adaptive1 follows by applying this result with δ'=δ/T and taking a union bound over all T choices for T'∈T. For eq:hoeffding_adaptive2, we observe that1/τ∑_i=1^τ* Z_i - *Z -(b-a)√(log(T/δ)/2τ)≤max_T'∈T*1/T'∑_i=1^T'*Z_i - *Z -(b-a)√(log(T/δ)/2T').The result now follows from eq:hoeffding_adaptive1. §.§.§ Tail Bounds for Martingales For any sequence of real-valued random variables X_t_t≤T adapted to a filtration _t_t≤T, it holds that with probability at least 1-δ, for all T'≤T,∑_t=1^T'X_t ≤∑_t=1^T'log*_t-1*e^X_t + log(δ^-1).We claim that the sequenceZ_τexp*∑_t=1^τX_t- log*_t-1*e^X_t is a nonnegative supermartingale with respect to the filtration (_τ)_τ≤T. Indeed, for any choice of τ, we have_τ-1*Z_τ =_τ-1*exp*∑_t=1^τX_t- log*_t-1*e^X_t= exp*∑_t=1^τ-1X_t- log*_t-1*e^X_t·_τ-1*exp*X_τ- log*_τ-1*e^X_τ= exp*∑_t=1^τ-1X_t-log*_t-1*e^X_t= Z_τ.Since Z_0=1, Ville's inequality (e.g., <cit.>) implies that for all λ>0,_0(∃τ : Z_τ>λ)≤1/λ.The result now follows by the Chernoff method. The next result is a martingale counterpart to Bernstein's inequality (<ref>). Let (X_t)_t≤T be a real-valued martingale difference sequence adapted to a filtration _t_t≤T. If *X_t≤R almost surely, then for any η∈(0,1/R), with probability at least 1-δ, for all T'≤T,∑_t=1^T' X_t ≤η∑_t=1^T'_t-1*X_t^2 + log(δ^-1)/η. Without loss of generality, let R=1, and fix η∈(0,1). The result follows by invoking lem:martingale_chernoff with η X_t in place of X_t, and by the facts that e^a ≤ 1+a+(e-2)a^2 for a≤ 1 and 1+b≤ e^b for all b∈. The following result is an immediate consequence of lem:freedman.Let (X_t)_t≤T be a sequence of random variables adapted to a filtration _t_t≤T. If 0≤X_t≤R almost surely, then with probability at least 1-δ,∑_t=1^TX_t ≤3/2∑_t=1^T_t-1*X_t + 4Rlog(2δ^-1), and ∑_t=1^T_t-1*X_t≤ 2∑_t=1^TX_t + 8Rlog(2δ^-1). 
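To build intuition for the martingale bounds above, the following minimal Python sketch empirically checks the time-uniform Bernstein-type inequality of the preceding lemma on a simple bounded martingale difference sequence. The horizon, range, and confidence parameters are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    T, R, delta, eta = 2000, 1.0, 0.05, 0.2            # eta must lie in (0, 1/R)
    trials, violations = 5000, 0

    for _ in range(trials):
        # Bounded martingale differences: X_t = sigma_t * eps_t with eps_t Rademacher,
        # so E_{t-1}[X_t] = 0, |X_t| <= R, and E_{t-1}[X_t^2] = sigma_t**2.
        sigma = rng.uniform(0, R, size=T)
        X = sigma * rng.choice([-1.0, 1.0], size=T)
        lhs = np.cumsum(X)
        rhs = eta * np.cumsum(sigma**2) + np.log(1 / delta) / eta
        if np.any(lhs > rhs):                          # bound must hold for ALL T' <= T
            violations += 1

    print(violations / trials, "<= delta =", delta)    # empirical failure frequency

The empirical failure frequency should be at most delta (in fact much smaller here, since the bound is not tight for this particular sequence).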
§.§ Information Theory §.§.§ Properties of Hellinger Distance For any distributionsandover a pair of random variables (X,Y),_X∼_X*_Y|X_Y|X≤4_X,Y_X,Y.Since squared Hellinger distance is an f-divergence, we have_X∼_X*_Y|X_Y|X= _Y|X_X_Y|X_X.Next, using that Hellinger distance satisfies the triangle inequality, along with the elementary inequality (a+b)^2≤2(a^2+b^2), we have,_X∼_X*_Y|X_Y|X ≤ 2_Y|X_X_Y|X_X + 2_Y|X_X_Y|X_X = 2_X,Y_X,Y + 2_X_X≤ 4_X,Y_X,Y,where the final line follows from the data processing inequality. Let (_1,_1),…,(_n,_n) be a sequence of measurable spaces, and let i=∏_i=t^i_t and i=⊗_t=1^i_t. For each i, let i(·|·) and i(·|·) be probability kernels from (i-1,i-1) to (_i,_i). Letandbe the laws of X_1,…,X_n under X_i∼i(·|X_1:i-1) and X_i∼i(·|X_1:i-1) respectively. Then it holds that≤ 10^2log(n)·_*∑_i=1^ni(·|X_1:i-1)i(·|X_1:i-1).§.§.§ Change-of-Measure Inequalities Suppose that X∼ and Y∼ are both σ^2-. Then_*X-_*Y≤√(2σ^2·).lemmampminLetandbe probability measures on (,). For all h:→ with 0≤h(X)≤R almost surely underand , we have*_*h(X) - _*h(X)≤√(2R(_*h(X) + _*h(X))·).In particular, _*h(X) ≤ 3_*h(X) + 4R·.Let a measurable event A be fixed. Let p = (A) and q=A. Then we have(p-q)^2/2(p+q)≤(√(p)-√(q))^2≤(p,1-p)(q,1-q)≤,where the third inequality is the data-processing inequality. It follows that*p-q≤√(2(p+q) ),To deduce the final result for R=1, we observe that _*h(X)=∫_0^1h(X)>tdt and likewise for _*h(X), then apply Jensen's inequality. The result for general R follows by rescaling.The inequality in eq:mp_min follows by applying the AM-GM inequality to eq:mp_min_sqrt and rearranging. §.§ Minimax TheoremLetandbe convex sets in linear topological spaces, and assumeis compact. Let f:×→ be such that (i) f(x, ·) is concave and upper semicontinuous overfor all x∈ and (ii) f(·,y) is convex and lower semicontinuous overfor all y∈. Theninf_x∈sup_y∈f(x,y) = sup_y∈inf_x∈f(x,y).
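As a concrete finite-dimensional instance of the minimax theorem, consider a zero-sum matrix game, where both the inner and outer optimizations are linear programs over the probability simplex. The sketch below (assuming SciPy's linprog with its default solver; the payoff matrix and sizes are arbitrary) computes the row player's maximin value and the column player's minimax value and checks that they coincide.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 5))   # payoff to the maximizing (row) player

    def game_value(A):
        m, n = A.shape
        # Variables z = (p, v): row mixed strategy p in R^m and guaranteed value v.
        # Maximize v subject to A^T p >= v * 1, p in the simplex.
        c = np.zeros(m + 1); c[-1] = -1.0              # linprog minimizes, so use -v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])      # encodes v - (A^T p)_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
        b_eq = np.ones(1)
        bounds = [(0, None)] * m + [(None, None)]      # p >= 0, v free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1], res.x[:m]

    v_row, p = game_value(A)          # sup_p inf_q  p^T A q
    v_col, q = game_value(-A.T)       # column player's guarantee in the negated game
    print(v_row, -v_col)              # the two values agree, as the theorem asserts

Here -v_col equals inf_q sup_p p^T A q, so the printed numbers matching (up to solver tolerance) is exactly the conclusion of the minimax theorem in this special case, which also follows from LP duality.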
http://arxiv.org/abs/2312.16730v1
{ "authors": [ "Dylan J. Foster", "Alexander Rakhlin" ], "categories": [ "cs.LG", "math.OC", "math.ST", "stat.ML", "stat.TH" ], "primary_category": "cs.LG", "published": "20231227215845", "title": "Foundations of Reinforcement Learning and Interactive Decision Making" }
6D Radar Sensing and Tracking in Monostatic Integrated Sensing and Communications System

Hongliang Luo, Feifei Gao, Fan Liu and Shi Jin

H. Luo and F. Gao are with the Department of Automation, Tsinghua University, Beijing 100084, China (email: [email protected]; [email protected]). F. Liu is with the Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen 518055, China (e-mail: [email protected]). S. Jin is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China (e-mail: [email protected]).

January 14, 2024
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

In this paper, we propose a novel scheme for six-dimensional (6D) radar sensing and tracking of a dynamic target based on a multiple input and multiple output (MIMO) array for a monostatic integrated sensing and communications (ISAC) system. Unlike most existing ISAC studies, which hold that only the radial velocity of a far-field dynamic target can be measured with one single base station (BS), we find that the sensing echo channel of a MIMO-ISAC system actually includes the distance, horizontal angle, pitch angle, radial velocity, horizontal angular velocity, and pitch angular velocity of the dynamic target. Thus we may fully rely on one single BS to estimate the dynamic target's 6D motion parameters from the sensing echo signals. Specifically, we first propose the long-term motion and short-term motion models of the dynamic target, in which the short-term motion model serves the single-shot sensing of the dynamic target, while the long-term motion model serves the multiple-shots tracking of the dynamic target. As a step further, we derive the sensing channel model corresponding to the short-term motion. Next, for single-shot sensing, we employ array signal processing methods to estimate the dynamic target's horizontal angle, pitch angle, distance, and virtual velocity. By realizing that the virtual velocities observed by different antennas are different, we adopt plane fitting to estimate the radial velocity, horizontal angular velocity, and pitch angular velocity of the dynamic target. Furthermore, we implement the multiple-shots tracking of the dynamic target based on each single-shot sensing result and Kalman filtering. Simulation results demonstrate the effectiveness of the proposed 6D radar sensing and tracking scheme.

6D MIMO radar, angular velocity estimation, integrated sensing and communications, dynamic target sensing, dynamic target tracking.

§ INTRODUCTION

In the past decade, the integration of wireless communications and radar sensing has promoted research on dual-function radar communications (DFRC) systems <cit.>. With the further expansion of the connotation and extension of sensing, integrated sensing and communications (ISAC), which incorporates more diverse sensing technologies on top of DFRC, has been recognized as a promising air-interface technology for next-generation wireless networks <cit.>.
Since ISACallows sensing systems and communications systems to share spectrum resources, and can serve variousintelligent applications, it has also been officially approvedby ITU-R IMT 2030 as one of the six key usage scenarios for the sixth generation (6G) mobile communications<cit.>.The ultimate functionalityof sensing is to construct the mapping relationship fromreal physical world todigital twin world, where the former includesstatic environment (such as roads and buildings) and dynamic targets (such as pedestrians and vehicles).Therefore, realizing static environment reconstructionand dynamic target sensing is becomingone consensus among researchers. Specifically,dynamic target sensing,as a research focus, refers to the discovery, detection, parameters estimation, tracking, and recognition oftarget based on the radar sensing function ofISAC system.On the other side,the base stations (BSs) in ISAC systems are usually stationary and are configured with massive multiple input andmultiple output (MIMO) arrays. Depending on the number and location of BSs,the ISAC system can be divided into:1) monostatic ISAC system (only one BS in the system);2) bistatic ISAC system (two BSs in the system);and3) multistatic ISAC system (multiple BSs in the system)<cit.>. Among different ISAC architectures, monostatic ISAC system has received tremendous research attention due to low implementation complexity, as it does not require high-precision synchronization among BSs. More relevant to this work, estimating the motion parameters of dynamic target with monostatic ISAC system has been well-investigated in the past few years. For example,X. Chen et. al.proposed a multiple signal classificationbased monostatic ISAC system that canattain highaccuracy for target's angle, distance, and radial velocity estimation<cit.>. W. Jiang et. al.proposed a model-driven ISAC scheme, which simultaneously accomplished tasks ofdemodulatinguplink communications signals and estimatingdistance and radial velocity of dynamic target<cit.>. In order to trackdynamic target, F. Liu et. al. investigated asensing assisted predictive beamforming design for vehicle communicationsby exploitingISAC technique<cit.>. On top of that,Z. Du et. al.proposed a tracking scheme for extendedtarget based on extended Kalman filtering andbeamwidth adjustment, whichleveraged matched filtering and maximum likelihood estimation to obtain the angle, distance, and radial velocity of target, and then tracks the target<cit.>. It should be noted thatall of these works adopt the traditional view in the field of radar sensing that only the radial velocity of dynamic target can be measured based on one single BS, while the angular velocitycannot be directlymeasured. Even in the field of radar sensing, the most advanced researches currently available suggest that the monostatic MIMO radarcan and only can realize the 4D sensing of dynamic target, namely,measuring the dynamic target's horizontal angle, pitch angle, distance, and radial velocity<cit.>. However, by re-examiningthe relationship between themotion parameters of dynamic targetand the sensing echo channel of MIMOsystem,one may realizethat the sensingchannelalready encompassesthe distance, horizontal angle, pitch angle, radial velocity, horizontal angular velocity, and pitch angular velocity of the dynamic target.As a consequence, it becomes possible toestimate the dynamic target's 6D motion parameters from theecho signals based on one single BS. 
Evidently, there are alreadysome preliminary studies focusing on measuring the angular velocity based on monostatic radar system.J. A. Nanzer et. al.first proposed the theoretical method for measuring the angular velocity of moving object based on spatial interferometry using one single station radar system,which was later verified through hardware experiments<cit.>. X. Wang et. al.extended this work to multiple targets angular velocitiesmeasurement scenarios through conceiving sophisticated algorithms<cit.>. However, all of these studies only considered the spatial interference effect between two or three antennas, resulting in severe sensing performance loss.To the best knowledge of the authors, estimating theangular velocity based on singlestation massive MIMO-OFDM system still remains widely unexplored. In this paper, we attempt to fill in this research gap by proposing a novel scheme for 6D radar single-shot sensing and multiple-shotstracking of dynamic target based on massive MIMO array for monostatic ISAC system. The contributions of this paper are summarized as follows. * Based onthe working pipeline ofradar sensinginISAC system, weconstruct the long-term motionand short-term motion model for dynamic target in3D space,which correspond to multiple-shots tracking and single-shot sensing of dynamic target, respectively.* We re-examinethe relationship between the 6D motion parameters of dynamic targetand the sensing echo channel ofMIMO-ISAC system, and reveal that the sensingchannelactually includes the distance, horizontal angle, pitch angle, radial velocity, horizontal angular velocity, and pitch angular velocity of target, which enables us toestimate the dynamic target's 6D motion parameters from echoes by solely relying on one single ISAC BS. * For single-shot sensing, we employthe array signal processing methods to estimate the dynamic target's distance, horizontal angle, pitch angle, and virtual velocity. Then we showthat the virtual velocities observed at different antennas are distinct from each other,allowing us to utilize planefitting to estimate the radial velocity, horizontal angular velocity, and pitch angular velocity of the dynamic target. * Based on the single-shot 6D parameters sensing results, we further propose a multiple-shots tracking approach for dynamic target through Kalman filtering. The remainder of this paper is organized as follows. In Section 2, we propose the 6D motion model of dynamic target, and derive thecorresponding sensing channel model. In Section 3, we propose a novel 6D sensing and tracking scheme for dynamic target sensing. Simulation results and conclusions are given in Section 4 and Section 5, respectively.Notation: Lower-case and upper-case boldface letters 𝐚 and 𝐀 denote a vector and a matrix; 𝐚^T and 𝐚^H denote the transpose and the conjugate transpose of vector 𝐚, respectively; [𝐚]_ndenotes the n-th element of the vector 𝐚; [𝐀]_i,j denotes the (i,j)-th element of the matrix 𝐀; 𝐀[i_1:i_2,:] is the submatrix composed of all columns elements in rows i_1 to i_2 of matrix 𝐀; 𝐀[:,j_1:j_2] is the submatrix composed of all rows elements in columns j_1 to j_2 of matrix 𝐀; eig(·) represents the matrix eigenvalue decomposition function.§ SYSTEM MODEL AND PROPOSED ISAC FRAMEWORK In this section, we provide the generic model for massive MIMO based monostatic ISAC system, and propose the 6D motion model of dynamic target, as well as derive the sensing channel model ofISAC system. 
§.§ ISAC BS Model A massive MIMO based monostatic ISAC system operating inmmWave orTerahertz frequency bands with OFDM modulation is depicted in Fig. 1, which employs only one dual-functional BS for wireless communications and radar sensing at the same time. Generally, we consider that the BS consists of one hybrid unit (HU) and one radarunit (RU), where both the HU and the RU are configured with uniform planar arrays (UPAs).By designing the beamforming strategy, HU is responsible for transmitting downlink communications signals and receiving uplink communications signals, as well as transmitting downlink sensingsignals to realize dynamic target sensing; whileRU is responsible for receiving echo signals to realize dynamic targetsensing.The HU andRU are each equipped with one UPAof N_H=N^x_H× N_H^z and N_R=N_R^x× N_R^z antenna elements,named as HU-UPA and RU-UPA, respectively.Assume that both the HU-UPA and theRU-UPA are vertically mounted on the 2D plane y = 0 at BS side as shown in Fig. 2, and the antenna spacing between the antennas distributed alongx-axis and z-axis ared_x = d≤λ/2 andd_z = d≤λ/2, respectively, with λ being the wavelength.Without loss of generality, we denote the position of the n_H-th antenna elementinthe HU-UPA as 𝐩_n_H=𝐩_0_H+[d· n_H^x,0,d· n_H^z]^T, where 𝐩_0_H is the position of the referenceelement, n_H^x ∈{0,1,...,N_H^x-1} and n_H^z ∈{0,1,...,N_H^z-1} are the antenna indices.Here weuse two types of index numbers to represent the same antenna, that is, the n_H-th antenna may also be named as the (n^x_H,n^z_H)-th antenna. Similarly, we denote the position of the n_R-th antenna elementinthe RU-UPA as 𝐩_n_R= 𝐩_0_R + [d· n_R^x,0,d· n_R^z]^T with n_R^x ∈{0,1,...,N^x_R-1} and n_R^z∈{0,1,...,N^z_R-1}. Generally, according to the spatial consistency of the arrays,we furtherassume thatthe HU-UPA andRU-UPA are co-located and are parallel to each other, i.e., 𝐩_0_H = 𝐩_0_R = [0,0,0]^T, such that they may see the targets at the same propagation directions[Since the normal communications and sensing distance is much longer than the protection distance betweenHU-UPA and RU-UPA, it can be considered that HU-UPA and RU-UPA are located in the same position.]<cit.>. Besides, by balancing the hardware costs and system performance, we assume that the HU-UPA employs thehardware architecture based onphase shifter (PS) structure, in which a total of N_H,RF≪ N_H radio frequency (RF) chains are deployed, and each antenna is connected to a PSto realize beamforming. On the other side, weassume that theRU-UPA employs the fully-digital receiving array, with each antenna connected to one RF chain,to realize super-resolution sensingperformance, which follows the setting in <cit.>and <cit.>.Suppose that the ISAC system emits OFDM signals with M subcarriers, where thelowestfrequency and the subcarrierinterval of OFDM signals are f_0 and Δ f, respectively. Then the transmission bandwidth is W=(M-1)Δ f, andthe frequency of the m-th subcarrieris f_m=f_0+mΔ f, where m=0,1,...,M-1.Further, we consider that an OFDM frame contains N consecutive OFDM symbols, where the time interval between adjacent OFDM symbols isT_s = T'_s+T_g,with T'_s = 1/Δ f and T_g being the OFDM symbol duration and guard interval, respectively.Weemploy the spherical coordinates (r,θ,ϕ) to represent oneposition in 3D space. As shown in Fig. 
2, rrepresents the polar distance withmathematical range of r ≥ 0, θ representsthe horizontal angle withmathematical range of 0^∘≤θ≤ 180^∘, and ϕrepresents the pitch angle withmathematical range of -90^∘≤ϕ≤ 90^∘.Moreover, the spherical coordinate (r,θ,ϕ)may be translated to its Cartesian counterpart(x,y,z) through x=rcosϕcosθ, y=rcosϕsinθ, z=rsinϕ.SinceBS is located at the origin ofcoordinate system, wedenotethe service area of BS as{(r,θ,ϕ)|r_min≤ r ≤ r_max,θ_min≤θ≤θ_max,ϕ_min≤ϕ≤ϕ_max}.Suppose that there are P single-antenna communications users, K dynamic targets, as well as widely distributed static environment within this service area. We assume that the 6D motion parameters of the k-th dynamic target are {r_k, θ_k, ϕ_k, v_r,k, ω_θ,k, ω_ϕ,k}, in which (r_k,θ_k,ϕ_k) represents the position of the k-th target,v_r,k,ω_θ,k and ω_ϕ,k represent the radial velocity, horizontal angular velocity, and pitch angular velocity, respectively. Besides, weassume thatthe position of the p-th user is (R_p,ϑ_p,φ_p), which are known andstationary toBS, due to the fact that they canbe easily obtained throughuser reporting,or other techniques<cit.>.§.§ The Proposed ISAC Framework The task ofISAC system is to sense all K dynamic targets while serving the communications of all P users. As described in Fig. 3, the proposed ISAC framework consists of two stages: sensing beam scanning (SBS) stage andsensing beam tracking (SBT) stage. For the aspect of communications, BS continuously generates P communications beamstowards Pusers tomaintaincommunications service during both SBS stage and SBT stage.For the aspect of sensing,BSgenerates onesensing beam that canscantheservice area duringSBS stage,during which theBS may detect thetargets and estimate their parameters. To proceed,BS generates a single sensing beamto track all Kdynamic targets in a time division manner duringSBT stage, that is, the sensing beam may sequentially illuminate eachtarget and continuously track them. In this work, wemainly focuses on theSBT stage, and refer readers to our previous work <cit.> for more details on the SBS stage. Assume that K dynamic targets possess different physicaldirections. At each direction,the BS may adopt one OFDM frame, i.e., N consecutive OFDM symbols,to realize dynamic target sensing.As shown in Fig. 3, wedivide the SBT stageinto L tracking time slots (TTSs), and each TTS lasts for a time duration of T_TTS = K· N· T_s. Besides, each TTS is further divided into K unit time slots (UTSs), and each UTS lasts for a time duration of T_UTS =N· T_s. Clearly, there is T_TTS = K· T_UTS. DuringL consecutive TTSs,BS shouldtrack all K dynamic targets using the methods such as Kalman filtering with T_TTS astime step, which is namedas multiple-shots tracking. To realize continuous target tracking and reliable users communications, in the (l,k')-th UTS, BS needs to generate P communications beams pointing to P users andgenerate one sensing beam pointing to the sensing tracking direction (ξ_lk',η_lk') fromHU-UPA, where k'=1,2,...,K.The BS needs to update the motion parameter sensing results of the k'-th dynamic target within this UTS, known as single-shot sensing. 
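(The discussion of the sensing tracking direction resumes below.) As a small practical aside, the coordinate convention of (1) used throughout the paper can be encoded in a few lines; the following minimal Python helper is an illustration only, and the numeric arguments are hypothetical values.

    import numpy as np

    def sph_to_cart(r, theta_deg, phi_deg):
        # Convention of (1): x = r cos(phi) cos(theta), y = r cos(phi) sin(theta), z = r sin(phi).
        th, ph = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
        return np.array([r * np.cos(ph) * np.cos(th),
                         r * np.cos(ph) * np.sin(th),
                         r * np.sin(ph)])

    print(sph_to_cart(100.0, 90.0, 55.0))  # a hypothetical target position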
Ideally, (ξ_lk',η_lk') should be equal to the physicaldirection of the k'-th dynamic target in the (l,k')-th UTS.Here we assume that the transmission power ofBS is P_t, the energy of sensing beam isρ_lk' P_t, and the remaining energy (1-ρ_lk')P_t is evenly distributed amongcommunications beams, where ρ_lk'∈ [0,1] is the power distribution coefficient used in the (l,k')-th UTS. Due to the short duration of one UTS, we keep the directions of transmitting beams unchanged within one UTS. Then the transmission signals fromHU-UPAon the m-th subcarrier of the n-thsymbol in the (l,k')-th UTS should be represented as𝐱_lk',n,m = ∑_p=1^P𝐰_c,p,lk's^c,p,lk'_n,m+𝐰_s,lk's^s,lk'_n,m= ∑_p=1^P√((1-ρ_lk')P_t/PN_H)𝐚_H(Γ_p,Υ_p)s^c,p,lk'_n,m+√(ρ_lk'P_t/N_H)𝐚_H(Ξ_lk',Θ_lk')s^s,lk'_n,m,where (Γ_p, Υ_p) = (cosφ_p cosϑ_p,sinφ_p) is the spatial-domain direction corresponding to the physicaldirection (ϑ_p,φ_p) of the p-th user, and (Ξ_lk',Θ_lk')=(cosη_lk'cosξ_lk',sinη_lk') is the spatial-domain direction corresponding to thesensing tracking direction(ξ_lk',η_lk'). Besides, 𝐰_c,p,lk'=√((1-ρ_lk')P_t/PN_H)𝐚_H(Γ_p, Υ_p) and 𝐰_s,lk'=√(ρ_lk'P_t/N_H)𝐚_H(Ξ_lk',Θ_lk') are the communications beamforming vector for the p-th userand the sensing beamforming vector for thesensing tracking direction, respectively, and𝐚_H(Ψ,Ω) is the array steering vectorof HU-UPA with the form𝐚_H(Ψ,Ω) = 𝐚_H^x(Ψ)⊗𝐚_H^z(Ω) ∈ℂ^N_H× 1,where ⊗ denotes the Kronecker product, and 𝐚_H^x(Ψ) =[1,e^j2π f_0dΨ/c,...,e^j2π f_0dΨ/c(N_H^x-1)]^T ∈ℂ^N_H^x× 1, 𝐚_H^z(Ω) = [1,e^j2π f_0dΩ/c,...,e^j2π f_0dΩ/c(N_H^z-1)]^T ∈ℂ^N_H^z× 1.Moreover,s^c,p,lk'_n,m and s^s,lk'_n,m arecommunications signals for the p-th user and sensing detection signals, respectively. Based on (2), the BS may realize both communications function and sensing function by optimizing ρ_lk' during eachUTS,which may be conceived through the power allocation strategy proposed in <cit.>, via maximizing the sensing performance while ensuring users communications performance.To that end, we only consider dynamic target sensingproblem while omitting the design of communications function in this work.Consequently, there is always one beamtowards the sensingtracking direction (ξ_lk',η_lk'), andwe can rewrite the transmission signals from the HU-UPAon the m-th subcarrier of the n-th OFDM symbol in the (l,k')-th UTS as𝐱_lk',n,m =√(ρ̀_lk'P_t/N_H)𝐚_H(Ξ_lk',Θ_lk')s^t,lk'_n,m,where ρ̀_lk'P_t isthe power allocated to(ξ_lk',η_lk') direction, and s^t,lk'_n,m is thesignal transmitted to(ξ_lk',η_lk') direction.§.§ The 6D Motion Model of Dynamic Target The motion of dynamic target can be described from two levels: 1) long-term motion, and 2) short-term motion.Specifically, the dynamic target undergoes long-term motion within L TTSs. Given the long tracking time of the target, the radial velocity andangular velocities of the target are susceptible to various disturbances. 
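Before turning to the target motion models (continued below), it may help to make the steering-vector construction of (3)-(5) concrete. The following Python sketch builds a_H(Ψ,Ω) as the Kronecker product of the x-axis and z-axis steering vectors; the carrier frequency and half-wavelength spacing follow the simulation settings of Section 4, while the pointing direction and array size are hypothetical examples.

    import numpy as np

    c = 3e8
    f0 = 100e9                       # carrier frequency used in the simulations
    d = 0.5 * c / f0                 # half-wavelength antenna spacing

    def steering_upa(Psi, Omega, Nx, Nz):
        # a(Psi, Omega) = a_x(Psi) kron a_z(Omega), as in (3)-(5).
        ax = np.exp(1j * 2 * np.pi * f0 * d * Psi / c * np.arange(Nx))
        az = np.exp(1j * 2 * np.pi * f0 * d * Omega / c * np.arange(Nz))
        return np.kron(ax, az)       # length Nx * Nz

    theta, phi = np.deg2rad(90.0), np.deg2rad(20.0)     # hypothetical direction
    Psi, Omega = np.cos(phi) * np.cos(theta), np.sin(phi)
    a = steering_upa(Psi, Omega, 8, 8)                  # N_H = 64, as an 8 x 8 UPA
    print(np.vdot(a, a).real)        # equals N_H; beamforming gain scales with array size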
In long-term motion, the time interval for describing the target motion is T_TTS.We refer to the 6D motion parameters of the k-th target at the l-th TTS as the state of this target, denoted as𝐒^Long_k,l=[r^Long_k,l, θ^Long_k,l, ϕ^Long_k,l, v^Long_r,k,l, ω^Long_θ,k,l, ω^Long_ϕ,k,l]^T ∈ℝ^6× 1,where l=0,1,...,L-1.Without loss ofgenerality,during themotion process, the direction in which the polar distance decreases, the direction in which the horizontal angle value decreases, and the direction in which the pitch angle value decreasesare taken as the positive directions for radial velocity, horizontal angular velocity, and pitch angular velocity, respectively. Then the 6D motion model of the k-th dynamic target for long-term motion can be represented asr^Long_k,l+1 = r^Long_k,l - v^Long_r,k,lT_TTS - 1/2 u^Long_r,k,l T_TTS^2, θ^Long_k,l+1 =θ^Long_k,l-ω^Long_θ,k,lT_TTS-1/2 u^Long_θ,k,l T_TTS^2 , ϕ^Long_k,l+1 =ϕ^Long_k,l - ω^Long_ϕ,k,lT_TTS - 1/2 u^Long_ϕ,k,l T_TTS^2 , v^Long_r,k,l+1 =v^Long_r,k,l - u^Long_r,k,l T_TTS, ω^Long_θ,k,l+1 =ω^Long_θ,k,l- u^Long_θ,k,l T_TTS, ω^Long_ϕ,k,l+1 =ω^Long_ϕ,k,l- u^Long_ϕ,k,l T_TTS,where 𝐮^Long_k,l=[u^Long_r,k,l,u^Long_θ,k,l,u^Long_ϕ,k,l]^T represents the random disturbance during the long-term motion. In addition to sense the long-term motion, the BS needs to observe the k'-th dynamic target within the (l,k')-th UTS.The dynamic targetalso undergoes short-term motion within this UTS.Due to the short duration of the short-termmotion,one usually assumes that the velocities of the dynamic target remain constant within one UTS time, i.e., N OFDM symbol times<cit.>. Then the 6D motion parameters of the k-th dynamic target within the n-th OFDM symbol time of the (l,k')-th UTS can be expressed as 𝐒^Short_k,lk',n=[r^Short_k,lk',n,θ^Short_k,lk',n,ϕ^Short_k,lk',n, v^Short_r,k,lk',n,ω^Short_θ,k,lk',n,ω^Short_ϕ,k,lk',n]^T with n=0,1,...,N-1, which satisfies r^Short_k,lk',n = r^Long_k,l - v^Long_r,k,lnT_s, θ^Short_k,lk',n =θ^Long_k,l-ω^Long_θ,k,lnT_s, ϕ^Short_k,lk',n =ϕ^Long_k,l - ω^Long_ϕ,k,lnT_s, v^Short_r,k,lk',n =v^Long_r,k,l, ω^Short_θ,k,lk',n =ω^Long_θ,k,l, ω^Short_ϕ,k,lk',n =ω^Long_ϕ,k,l. Naturally,the long-term motion model corresponds to the multiple-shots tracking of dynamic target, while short-term motion model corresponds to single-shot sensing of dynamic target. The ISAC system needs to utilize𝐒^Short_k,lk',n as much as possible to sense and track𝐒^Long_k,l of the k-th dynamic target, that is, the ISAC system needs to utilize single-shot sensing to realizemultiple-shots tracking.§.§ Sensing Channel Model of ISAC System In the (l,k')-th UTS, the BS transmits the detection signals through HU-UPA at the beginning of sensing, which will be reflected by dynamic targets and cause echoes. Then, the RU-UPA will receive the sensing echo signals. Let us definethe path from the n_H-th antenna ofHU-UPA to the k-th dynamic target and then back to the n_R-th antenna of RU-UPA as the (n_H,k,n_R)-th propagation path.Then we denote τ^lk',n_k,n_H,n_R=(D^lk',n_k,n_H+D^lk',n_k,n_R)/cas the time delay of the (n_H,k,n_R)-th propagation path in the n-th OFDM symbol time during the (l,k')-th UTS, where c represents the speed of light, D^lk',n_k,n_H is the distance between the n_H-th antenna of HU-UPA and the k-th dynamic target, and D^lk',n_k,n_R is the distance between the n_R-th antenna of RU-UPA and the k-th dynamic target. 
Suppose that the signal transmitted by the n_H-th antenna ofHU-UPA is s(t),and the corresponding passband signal isℛ{s(t)e^j2π f_0t}.Then the echo signal will be a delayed version of the transmitting signal with amplitude attenuation. Specifically, the passband echo signal received by the n_R-th antenna through the (n_H,k,n_R)-th propagation path at the n-th OFDM symbol duringthe (l,k')-th UTSis ℛ{α^lk'_k s(t-τ^lk',n_k,n_H,n_R)e^j2π f_0(t-τ^lk',n_k,n_H,n_R)}. The corresponding baseband echo signal is α^lk'_k s(t-τ^lk',n_k,n_H,n_R)e^-j2π f_0τ^lk',n_k,n_H,n_R,andthe basebandequivalent channel ish^lk',n_k,n_H,n_R(t) = α^lk'_k e^-j2π f_0τ^lk',n_k,n_H,n_Rδ(t-τ^lk',n_k,n_H,n_R),where α^lk'_k is thechannel fading factor and δ (·) denotes the Dirac delta function.Taking the Fourier transform of (21), the baseband frequency-domain channel response can be obtain ash^lk',n,F_k,n_H,n_R(f) = α^lk'_k e^-j2π (f_0+f)τ^lk',n_k,n_H,n_R.Thus the frequency-domain sensing echo channelof the (n_H,k,n_R)-th propagation pathon the m-th subcarrier of the n-th OFDM symbol during the (l,k')-th UTS is h^lk',n,m_k,n_H,n_R=α^lk'_ke^-j2π f_mτ^lk',n_k,n_H,n_R=α^lk'_ke^-j2π f_m D^lk',n_k,n_H+D^lk',n_k,n_R/c. Based on (1), the Taylor expansion approximation ofD^lk',n_k,n_H can be expressed asD^lk',n_k,n_H≈r^Short_k,lk',n - (n^x_Hdcosϕ^Short_k,lk',ncosθ^Short_k,lk',n+n^z_Hdsinϕ^Short_k,lk',n). Similarly, D^lk',n_k,n_R can be approximated as D^lk',n_k,n_R≈r^Short_k,lk',n - (n^x_Rdcosϕ^Short_k,lk',ncosθ^Short_k,lk',n+n^z_Rdsinϕ^Short_k,lk',n). Let us denote Ψ^Short_k,lk',n=cosϕ^Short_k,lk',ncosθ^Short_k,lk',n, Ω^Short_k,lk',n=sinϕ^Short_k,lk',n, 24and then there isτ^lk',n_k,n_H,n_R= 2r^Short_k,lk',n-(n^x_H+n^x_R)dΨ^Short_k,lk',n-(n^z_H+n^z_R)Ω^Short_k,lk',n/c. 25Then (23) can be rewritten ash^lk',n,m_k,n_H,n_R=α^lk'_k e^-j2π f_m 2r^Short_k,lk',n-(n^x_H+n^x_R)dΨ^Short_k,lk',n-(n^z_H+n^z_R)Ω^Short_k,lk',n/c = α^lk'_k e^- j 4π f_m r^Long_k,l/c e^j4π f_mv^Long_r,k,lnT_s/c e^j2π f_m(n^x_H+ n^x_R)dΨ^Short_k,lk',n+( n^z_H+ n^z_R)dΩ^Short_k,lk',n/c. 26In narrowband OFDM systems, the Doppler squint effect and beam squint effect are typically negligible <cit.>, and thus (26) can befurther represented as h^lk',n,m_k,n_H,n_R= α^lk'_k e^-j 4π f_mr^Long_k,l/c e^j4π f_0v^Long_r,k,lnT_s/c× 20pte^j2π f_0(n^x_H+ n^x_R)dΨ^Short_k,lk',n +(n^z_H+ n^z_R)dΩ^Short_k,lk',n/c. 27 We denote 𝐇^lk',n,m_k ∈ℂ^N_R× N_H as the overall frequency-domain sensing echo channel matrix on the m-th subcarrier at the n-th OFDM symbol within the (l,k')-th UTS from the HU-UPA to the k-th dynamictarget and then back to the RU-UPA, whose (n_R,n_H)-th element is [𝐇^lk',n,m_k]_n_R,n_H = h^lk',n,m_k,n_H,n_R.Moreover, the matrix 𝐇^lk',n,m_k can be decomposed as 𝐇^lk',n,m_k=α^lk'_ke^-j4π f_m r^Long_k,l/c e^j4π f_0v^Long_r,k,lnT_s/c× 16pt𝐚_R(Ψ^Short_k,lk',n,Ω^Short_k,lk',n) 𝐚^T_H(Ψ^Short_k,lk',n,Ω^Short_k,lk',n), 28where 𝐚_R(Ψ,Ω) is the arraysteering vector for spatial-domain direction (Ψ,Ω) of RU-UPA with the form𝐚_R(Ψ,Ω) = 𝐚_R^x(Ψ)⊗𝐚_R^z(Ω) ∈ℂ^N_R× 1. 29Here ⊗ denotes the Kronecker product, and 𝐚_R^x(Ψ) =[1,e^j2π f_0dΨ/c,...,e^j2π f_0dΨ/c(N_R^x-1)]^T ∈ℂ^N_R^x× 130, 𝐚_R^z(Ω) = [1,e^j2π f_0dΩ/c,...,e^j2π f_0dΩ/c(N_R^z-1)]^T ∈ℂ^N_R^z× 131.Besides, α^lk'_kis usually modeled as α^lk'_k =√(λ^2/(4π)^3 (r^Long_k,l)^4)σ_k, andσ_kis the radar cross section (RCS) of the k-th dynamic target. Without loss ofgenerality, weassume that the RCS follows the Swerling 1 model. 
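The single-target channel of (28) is a rank-one outer product of receive and transmit steering vectors, modulated by a distance phase across subcarriers and a Doppler phase across symbols. The sketch below assembles it on the symbol/subcarrier grid; for brevity the slow intra-UTS drift of (Ψ^Short, Ω^Short) is frozen at its n = 0 value, the guard interval is ignored in T_s, and α, the geometry, and the array sizes are hypothetical stand-ins.

    import numpy as np

    c = 3e8
    f0, df = 100e9, 480e3            # carrier and subcarrier spacing (Section 4 values)
    Ts = 1 / 480e3                   # symbol time, guard interval ignored here
    d = 0.5 * c / f0

    def a(Psi, Omega, Nx, Nz):       # UPA steering vector, as in (3)-(5) and (29)-(31)
        ax = np.exp(1j * 2 * np.pi * f0 * d * Psi / c * np.arange(Nx))
        az = np.exp(1j * 2 * np.pi * f0 * d * Omega / c * np.arange(Nz))
        return np.kron(ax, az)

    def echo_channel(n, m, r, v_r, Psi, Omega, alpha=1.0,
                     NHx=8, NHz=8, NRx=16, NRz=16):
        # Single-target H of (28): rank-one in space, with distance phase on the
        # m-th subcarrier and Doppler phase on the n-th symbol; alpha folds in path loss.
        fm = f0 + m * df
        phase = np.exp(-1j * 4 * np.pi * fm * r / c) \
              * np.exp(1j * 4 * np.pi * f0 * v_r * n * Ts / c)
        return alpha * phase * np.outer(a(Psi, Omega, NRx, NRz),
                                        a(Psi, Omega, NHx, NHz))

    H = echo_channel(n=3, m=7, r=120.0, v_r=15.0,
                     Psi=np.cos(np.deg2rad(20)) * np.cos(np.deg2rad(90)),
                     Omega=np.sin(np.deg2rad(20)))
    print(H.shape, np.linalg.matrix_rank(H))   # (256, 64), rank 1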
Based on (29), thesensing echo channel of all K dynamic targets on the m-th subcarrier of the n-th OFDM symbol in the (l,k')-th UTS can be represented as𝐇^target_lk',n,m = ∑_k=1^K 𝐇^lk',n,m_k. 32 In addition, since the real physical world is composed of dynamic targets and static environment, the RU-UPAwill receive both the effective echoes caused by interested dynamic targets (dynamic target echoes) and the undesiredechoes caused by uninterested background environment (clutter).Following our previous work <cit.>, wemodel the static environmental clutter channelon the m-th subcarrier of the n-th OFDM symbol in the (l,k')-th UTS as𝐇^background_lk',n,m=∑_i'=1^I'β^𝔠,lk'_i' e^-j4π f_mr^𝔠_i',lk'/c𝐚_R(Ψ^𝔠_i',lk',Ω^𝔠_i',lk') 𝐚^T_H(Ψ^𝔠_i',lk',Ω^𝔠_i',lk'), 33where I' is the total number of static environmental clutter scattering units,Ψ^𝔠_i',lk'=cosϕ^𝔠_i',lk'cosθ^𝔠_i',lk', Ω^𝔠_i',lk'=sinϕ^𝔠_i',lk',(r^𝔠_i',lk',θ^𝔠_i',lk',ϕ^𝔠_i',lk') is the position of the i'-thclutter scattering unit, β^𝔠,lk'_i' = √(λ^2/(4π)^3 (r^𝔠_i',lk')^4)σ^𝔠_i' is the channel fading factor,and σ^𝔠_i' is the RCS of the i'-th clutter scattering unit that alsofollows the Swerling 1 model. Due to the random distribution of clutter scattering units in various directions and distances, when the number of clutter scattering units I' is large enough, 𝐇^background_lk',n,m can beconsidered as a random channel. Therefore, a low complexity clutter channel generation method is to directly approximate 𝐇^background_lk',n,m as𝐇^background_lk',n,m≈β^𝔠_lk'𝐇^𝔠_lk',m, 34where 𝐇^𝔠_lk',mis a complex Gaussian matrix with the dimension of N_R × N_H, and β^𝔠_lk'is the clutter power regulation factor. Formulas (33) and (34) indicate that since the staticclutter scattering unit remains stationary for N OFDM symbol times, 𝐇^background_lk',n,m would remain unchanged for Nsymbols.Based on (32) and (34), the overall sensing echo channel ofdynamic targets and static environment on the m-th subcarrier of the n-th OFDM symbol in the (l,k')-th UTS is 𝐇^sensing_lk',n,m = 𝐇^target_lk',n,m +𝐇^background_lk',n,m. 35 § 6D RADAR SENSING AND TRACKING In this section, we provide the sensing echo signals model, and thenpropose a novel 6D sensing and tracking scheme for dynamic target sensing.§.§ Echo Signals Model We design that the BS employs the Kalman filtering algorithm to track the dynamic targets within LTTSs. Assume that the predicted value of the 6D parameters for the k'-th dynamic target within the l-th TTS is 𝐒^Long_k',l=[r^Long_k',l, θ^Long_k',l, ϕ^Long_k',l, v^Long_r,k',l, ω^Long_θ,k',l, ω^Long_ϕ,k',l]^T. Then, the HUUPA of the BS needs to generate one sensing beam towards (θ^Long_k',l, ϕ^Long_k',l) direction during the (l,k')-th UTS to re-sense the k'-th target. Hence based on (6), thetransmission signals of HU-UPA during the (l,k')-th UTS can be expressedas 𝐱_lk',n,m=√(ρ̀_lk'P_t/N_H)𝐚_H(cosϕ^Long_k',lcosθ^Long_k',l,sinϕ^Long_k',l)s^t,lk'_n,m. 36Then the sensing echo signals on the m-th subcarrier of the n-th OFDM symbol received by the RU-UPA in the (l,k')-th UTS can be represented as𝐲^lk'_n,m=𝐇^sensing_lk',n,m𝐱_lk',n,m^* +𝐧^lk'_n,m =𝐇^target_lk',n,m𝐱_lk',n,m^* +𝐇^background_lk',n,m𝐱_lk',n,m^* +𝐧^lk'_n,m, 55ptn=0,...,N-1, 4pt m=0,...,M-1, 37where [𝐧^lk'_n,m]_n_R is the zero-mean additive Gaussian noisewith varianceσ_lk'^2. Note that 𝐲^lk'_n,m represents the echo signals received by all N_R=N_R^x× N_R^z receiving antennas, and then we can reformat thevector 𝐲^lk'_n,m into matrix form as 𝐘^lk'_n,m𝐘^lk'_n,m = reshape{𝐲^lk'_n,m,[N_R^z,N_R^x]}∈ℂ^N_R^z× N_R^x. 
38Furthermore, we can stack 𝐘^lk'_n,m into one echoes tensor𝐘_cube^lk'∈ℂ^N_R^z× N_R^x × N× M,whose (n^z_R,n^x_R,n,m)-th element is 𝐘_cube^lk'[n^z_R,n^x_R,n,m] = [𝐘^lk'_n,m]_n^z_R,n^x_R.Note that 𝐘_cube^lk' includes the sensingchannel 𝐇^sensing_lk',n,m, transmitting beamforming, and transmission symbols s^t,lk'_n,m, while targets sensing can be understood as an estimation of 𝐇^sensing_lk',n,m. However, random transmission symbols would affect the estimation ofsensingchannel, and thus we need toerase the transmission symbols from the received signals to obtainequivalent echochannel (EEC). Specifically, the EEC corresponding to 𝐘^lk'_n,m can be obtained as 𝐘̃^lk'_n,m = 𝐘^lk'_n,m/s^t,lk'_n,m. Then we can stack 𝐘̃^lk'_n,minto an EEC tensor 𝐘̃_cube^lk'∈ℂ^N_R^z× N_R^x × N× M with 𝐘̃_cube^lk'[n^z_R,n^x_R,n,m] = [𝐘̃^lk'_n,m]_n^z_R,n^x_R. §.§ Static Environmental Clutter Filtering It can be analyzed from (35) and (37) that the echo signals 𝐘_cube^lk' includes both dynamic target echoes and static environment echoes,and theEEC 𝐘̃_cube^lk'also includes both the EEC ofdynamic targets (DT-EEC) and the EEC ofstatic environment (SE-EEC). When we focus on dynamic target sensing, the SE-EEC inoriginal echo signals would cause negative interference to dynamic target sensing, and thusSE-EEC can be referred to asclutter-EEC. To address this negative interference,we needto filter out the interference of clutter-EECandto extract theeffective DT-EEC from 𝐘̃_cube^lk'.While the environmental clutter filtering isnecessary in sensing processing,it is not the focus of this work. According to the clutter suppression methodin <cit.>, we may express the effective DT-EEC after static clutter filtering as 𝐘̌_cube^lk', whose [:,:,n,m]-th sub-matrix is 𝐘̌_cube^lk'[:,:,n,m]= 𝐘̌^lk'_n,m =reshape{𝐲̌^lk'_n,m,[N_R^z,N_R^x]} with𝐲̌^lk'_n,m ≈𝐇^target_lk',n,m𝐱_lk',n,m^*/s^t,lk'_n,m+𝐧̌^lk'_n,m = ∑_k=1^K 𝐇^lk',n,m_k 𝐱_lk',n,m^*/s^t,lk'_n,m+𝐧̌^lk'_n,m, 39where 𝐧̌^lk'_n,mis the noise after static clutter filtering. §.§ Echo Signals Analysis Based on (28), (32) and (36), 𝐲̌^lk'_n,m in (39) can be calculated as (40) at the top ofnext page.When the Kalman filter predicts accurately, based on (16) and (17), there should be(cosϕ^Long_k',lcosθ^Long_k',l,sinϕ^Long_k',l)=(cosϕ^Short_k',lk',0cosθ^Short_k',lk',0,sinϕ^Short_k',lk',0) =(Ψ^Short_k',lk',0,Ω^Short_k',lk',0). Then in massive MIMO system, due to K dynamic targets owning different directions, (40) can be calculated as (41) at the top ofnext page, where 𝒢^lk'_k' = α^lk'_k'√(ρ̀_lk' P_t/N_H)sin [πf_0d/c (Ω^Short_k',lk',n-Ω^Short_k',lk',0)N_H^z]/sin [πf_0d/c (Ω^Short_k',lk',n-Ω^Short_k',lk',0)]sin [πf_0d/c (Ψ^Short_k',lk',n-Ψ^Short_k',lk',0)N_H^x]/sin [πf_0d/c (Ψ^Short_k',lk',n-Ψ^Short_k',lk',0)]. Next, based on (41), the [n^z_R,n^x_R,n,m]-th element in𝐘̌_cube^lk' can beexpressed as (42) at the top of next page. To further simplify (42), we note that ϕ^Short_k,lk',0 =ϕ^Long_k,l (based on (17)) and find that Ω^Short_k',lk',n-Ω^Short_k',lk',0 = sinϕ^Short_k',lk',n - sinϕ^Short_k',lk',0 =- sinϕ^Short_k',lk',0 - sin (ϕ^Short_k',lk',0-ω^Long_ϕ,k',lnT_s )/ω^Long_ϕ,k',lnT_sω^Long_ϕ,k',lnT_s = - sinϕ^Long_k',l - sin (ϕ^Long_k',l-ω^Long_ϕ,k',lnT_s )/ω^Long_ϕ,k',lnT_sω^Long_ϕ,k',lnT_s≈ - cosϕ^Long_k',lω^Long_ϕ,k',lnT_s. 43Besides, we also note that θ^Short_k,lk',0=θ^Long_k,l (based on (16)). 
According to the definition and properties of directional derivatives of binary function, wecompute Ψ^Short_k',lk',n-Ψ^Short_k',lk',0 as shown in (44) at the top of this page, where 𝒲_k',l = √((ω^Long_ϕ,k',l)^2+(ω^Long_θ,k',l)^2).Based on (43) and (44), the [n^z_R,n^x_R,n,m]-th element in𝐘̌_cube^lk', i.e.,the y̌^lk'_n,m,n^z_R,n^x_Rin (42)can becalculated as shown in (45) at the top of this page. Formula (45) indicates that y̌^lk'_n,m,n^z_R,n^x_R includes the distance term, pitch angle term, horizontal angle term, radial velocity term, pitch angular velocity term, and horizontal angular velocity term. Therefore, it is definite to estimate the 6D motion parameters of dynamic targets from DT-EEC 𝐘̌_cube^lk' and realize 6D radar sensing.TempEqCnt §.§ Angle Direction and Distance Estimation Let us transform 𝐘̌_cube^lk'∈ℂ^N_R^z× N_R^x × N× M into an Ω-matrix 𝐘^lk'_Ω with the dimension of N_R^z× N_R^xNM. Based on (45), 𝐘^lk'_Ωcan be represented as𝐘^lk'_Ω= 𝐤^lk'_Ω(Ω^Long_k',l)·𝐱^lk'_Ω + 𝐍^lk'_Ω∈ℂ^N_R^z× N_R^xN M, 46where Ω^Long_k',l = sinϕ^Long_k',l, 𝐤^lk'_Ω(Ω) = [1,e^j2π f_0dΩ/c,...,e^j2π f_0dΩ/c(N_R^z-1)]^T ∈ℂ^N_R^z× 1 is defined as the second spatial-domain direction array steering vector, 𝐱^lk'_Ω∈ℂ^1× N_R^xN M and 𝐍^lk'_Ω∈ℂ^N_R^z× N_R^xN M. Since 𝐘^lk'_Ω is the array signals form related to the second spatial-domain direction array, we can estimate Ω^Long_k',l from 𝐘^lk'_Ω by utilizing array signal processing methods.Here we adopt the estimating signal parameters via rotational variation techniques (ESPRIT) method for parameter estimation<cit.>. Specifically, the covariance matrix of 𝐘^lk'_Ω can be calculated as 𝐑_Ω^lk' = 1/N^x_RNM𝐘^lk'_Ω(𝐘^lk'_Ω)^H.We perform eigenvalue decomposition of𝐑_Ω^lk'to obtain the diagonal matrix with eigenvalues ranging from large to small (Σ_Ω^lk') and the corresponding eigenvector matrix (𝐔_Ω^lk'), that is, [𝐔_Ω^lk', Σ_Ω^lk']= eig(𝐑_Ω^lk'). Thenthe minimum description length (MDL) criterion is utilized to estimate the number of dynamic targets from Σ_Ω^lk'as K_lk'^Ω<cit.>. Weextract theparallel signal subspaces from 𝐔_Ω^lk' as𝐔_Ω,1^lk' = [ 𝐔_Ω^lk'[1:N_R^z-1,1:K_lk'^Ω], 𝐔_Ω^lk'[2:N_R^z,1:K_lk'^Ω] ] ∈ℂ^(N_R^z-1)× 2K_lk'^Ω and compute𝐑̃_Ω^lk' = (𝐔_Ω,1^lk')^H𝐔_Ω,1^lk'∈ℂ^2K_lk'^Ω× 2K_lk'^Ω. Then we perform eigenvalue decomposition of𝐑̃_Ω^lk' to obtain the diagonal matrix with eigenvalues ranging from large to small (Σ̃_Ω^lk') and the corresponding eigenvector matrix (𝐔̃_Ω^lk'), that is, [𝐔̃_Ω^lk', Σ̃_Ω^lk']= eig(𝐑̃_Ω^lk'). We extract 𝐔̃_Ω,a^lk' = 𝐔̃_Ω^lk'[1:K_lk'^Ω,K_lk'^Ω+1:2K_lk'^Ω] ∈ℂ^K_lk'^Ω× K_lk'^Ω and𝐔̃_Ω,b^lk' = 𝐔̃_Ω^lk'[K_lk'^Ω+1:2K_lk'^Ω,K_lk'^Ω+1:2K_lk'^Ω] ∈ℂ^K_lk'^Ω× K_lk'^Ω, and we compute 𝐑̌_Ω^lk'=-𝐔̃_Ω,a^lk'(𝐔̃_Ω,b^lk')^-1. Next, we perform eigenvalue decomposition of𝐑̌_Ω^lk'to obtain the diagonal matrix with eigenvalues ranging from large to small (Σ̌_Ω^lk') and the corresponding eigenvector matrix (𝐔̌_Ω^lk'), that is, [𝐔̌_Ω^lk', Σ̌_Ω^lk']= eig(𝐑̌_Ω^lk'). We take out the elements on the main diagonal of Σ̌_Ω^lk' to form oneeigenvalues set as {λ_Ω,1^lk',λ_Ω,2^lk',...,λ_Ω,K_lk'^Ω^lk'}, andcompute the space values as κ_Ω,i^lk' =arctan Imag(λ_Ω,i^lk')/ Real(λ_Ω,i^lk'), wherei=1,2,...,K_lk'^Ω. Since 𝐘̌_cube^lk' only contains one dynamic target, there should beK_lk'^Ω=1, and thus weabbreviate the space value corresponding to 𝐘^lk'_Ω as κ_Ω^lk'.Then the second spatial-domain direction of the k'-th dynamic target within the l-th TTS can be estimated as Ω̌^Long_k',l =cκ_Ω^lk'/2π f_0 d. 
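The text employs an eigendecomposition-based (total-least-squares flavored) ESPRIT with MDL model-order selection; the plain least-squares variant sketched below rests on the same shift-invariance idea and suffices to illustrate how a space value κ, and hence the spatial direction, is recovered from array snapshots. All scenario parameters here are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    c, f0 = 3e8, 100e9
    d = 0.5 * c / f0

    def esprit(Y, K):
        # LS-ESPRIT on snapshots Y (N antennas x T snapshots) for K sources.
        N, T = Y.shape
        R = Y @ Y.conj().T / T                        # sample covariance, cf. R_Omega
        _, U = np.linalg.eigh(R)                      # eigenvalues in ascending order
        Us = U[:, -K:]                                # signal subspace
        Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
        return np.angle(np.linalg.eigvals(Phi))       # space values kappa in (-pi, pi]

    # One target at pitch angle 20 deg: Omega = sin(phi), kappa = 2*pi*f0*d*Omega/c.
    Omega_true = np.sin(np.deg2rad(20.0))
    N, T = 16, 512
    y = np.exp(1j * 2 * np.pi * f0 * d * Omega_true / c * np.arange(N))
    Y = np.outer(y, rng.standard_normal(T) + 1j * rng.standard_normal(T))
    Y += 0.05 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
    kappa = esprit(Y, K=1)[0]
    print(np.rad2deg(np.arcsin(c * kappa / (2 * np.pi * f0 * d))))  # approx 20

The same routine applies verbatim to the Ψ-matrix, the distance matrix, and the virtual-velocity matrix below, with the appropriate array steering vector playing the role of y.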
47Finally, the pitch angle of the k'-th dynamic target within the l-th TTS can be estimated as ϕ̌_k',l^Long = arcsin( Ω̌^Long_k',l) =arcsin( cκ_Ω^lk'/2π f_0 d). 48 Similarly, let us transform 𝐘̌_cube^lk'∈ℂ^N_R^z× N_R^x × N× M into an Ψ-matrix 𝐘^lk'_Ψ with the dimension of N_R^x× N_R^zNM. Based on (45), 𝐘^lk'_Ψcan be represented as𝐘^lk'_Ψ= 𝐤^lk'_Ψ(Ψ^Long_k',l)·𝐱^lk'_Ψ + 𝐍^lk'_Ψ∈ℂ^N_R^x× N_R^zN M, 49where Ψ^Long_k',l = cosϕ^Long_k',lcosθ^Long_k',l, 𝐤^lk'_Ψ(Ψ) = [1,e^j2π f_0dΨ/c,...,e^j2π f_0dΨ/c(N_R^x-1)]^T ∈ℂ^N_R^x× 1 is defined as the first spatial-domain direction array steering vector, 𝐱^lk'_Ψ∈ℂ^1× N_R^zN M and 𝐍^lk'_Ψ∈ℂ^N_R^x× N_R^zN M. Since 𝐘^lk'_Ψ is the array signals form related to the first spatial-domain direction array, we can estimate Ψ^Long_k',l from 𝐘^lk'_Ψ by utilizing array signal processing methods. Similarly, we can employ the ESPRIT method to obtain the space valuecorresponding to 𝐘^lk'_Ψ as κ_Ψ^lk'. Then the first spatial-domain direction of the k'-th dynamic target within the l-th TTS can be estimated as Ψ̌^Long_k',l =cκ_Ψ^lk'/2π f_0 d. 50Then the horizontal angleof the k'-th dynamic target within the l-th TTS can be estimated as θ̌_k',l^Long = arccos( Ψ̌^Long_k',l/cosϕ̌_k',l^Long). 51 To estimate the distance of the target, let us transform 𝐘̌_cube^lk'∈ℂ^N_R^z× N_R^x × N× M into a distance-matrix 𝐘^lk'_r with the dimension of M× N_R^z N_R^xN. Based on (45), 𝐘^lk'_rcan be represented as𝐘^lk'_r= 𝐤^lk'_r(r^Long_k',l)·𝐱^lk'_r + 𝐍^lk'_r∈ℂ^M× N_R^z N_R^xN, 52where𝐤^lk'_r(r) = [1,e^-j4π rΔ f/c,...,e^-j4π rΔ f/c(M-1)]^T ∈ℂ^M× 1 is defined as the distance array steering vector, 𝐱^lk'_r∈ℂ^1× N_R^z N_R^xN and 𝐍^lk'_r∈ℂ^M× N_R^z N_R^xN. Since 𝐘^lk'_r is the array signals form related to the distance array, we can estimate r^Long_k',l from 𝐘^lk'_r by utilizing array signal processing methods. Similarly, we can employ the ESPRIT method to obtain the space valuecorresponding to 𝐘^lk'_r as κ_r^lk'. Then the polar distance of the k'-th dynamic target within the l-th TTS can be estimated as ř^Long_k',l =-cκ_r^lk'/4πΔ f. 53§.§ Radial Velocityand Angular Velocities Estimation It can be analyzed from (45) that each antenna observes one virtual-velocity composed of the radial velocity, horizontal angular velocity, and pitch angular velocity of the dynamic target. Note that the virtual-velocityobserved by different antenna is different. We can design and derive the virtual-velocity of the k'-th dynamic target observed by the (n^x_R,n^z_R)-th antenna within the l-th TTS from (45) as v^Long,vir_k',l,n^x_R,n^z_R, which is shown in (54) at the top of this page. Then (45) can be rewritten as (55) at the top of this page.Next, we need to estimate the virtual-velocity observed by each antenna. We can extract the DT-EEC of the (n^x_R,n^z_R)-th antenna on all subcarriers of all OFDM symbols from 𝐘̌_cube^lk'∈ℂ^N_R^z× N_R^x × N× M as𝐘^lk'_v_vir,n^x_R,n^z_R=𝐤^lk'_v_vir(v^Long,vir_k',l,n^x_R,n^z_R)·𝐱^lk'_v_vir,n^x_R,n^z_R+𝐍^lk'_v_vir,n^x_R,n^z_R, 56where 𝐘^lk'_v_vir,n^x_R,n^z_R∈ℂ^N× M, 𝐱^lk'_v_vir∈ℂ^1× M, 𝐍^lk'_v_vir∈ℂ^N× M,[𝐘^lk'_v_vir,n^x_R,n^z_R]_n,m=y̌^lk'_n,m,n^z_R,n^x_R, and𝐤^lk'_v_vir(v_vir) = [1,e^j4π f_0v_virT_s/c,...,e^j4π f_0v_virT_s/c(N-1)]^T ∈ℂ^N× 1 is defined as the virtual-velocity array steering vector.Since 𝐘^lk'_v_vir,n^x_R,n^z_R is the array signals form related to the virtual-velocity array, we can estimate v^Long,vir_k',l,n^x_R,n^z_R from 𝐘^lk'_v_vir,n^x_R,n^z_R by utilizing array signal processing methods. 
Similarly, we can employ the ESPRIT method to obtain the space valuecorresponding to 𝐘^lk'_v_vir,n^x_R,n^z_R as κ^lk'_v_vir,n^x_R,n^z_R. Then the virtual-velocity of the k'-th dynamic target observed by the (n^x_R,n^z_R)-th antenna within the l-th TTS can be estimated as v̌^Long,vir_k',l,n^x_R,n^z_R =c κ^lk'_v_vir,n^x_R,n^z_R/4π f_0T_s. 57By traversing each antenna, we can obtain the virtual-velocity observed by each antenna,which is record as(n_R^x,n_R^z,v̌^Long,vir_k',l,n^x_R,n^z_R) withn_R^x ∈{0,1,...,N^x_R-1} and n_R^z∈{0,1,...,N^z_R-1}. Then we need to estimate the radial velocity, horizontal angular velocity, and pitch angular velocity of the target from these N_R=N_R^xN_R^z ternary pairs.In fact, we can expressv^Long,vir_k',l,n^x_R,n^z_R as a binary function of (n_R^x,n_R^z). Based on (54), there isv^Long,vir_k',l,n^x_R,n^z_R = A_k',l+B_k',l· n^x_R+C_k',l· n^z_R, 58whereA_k',l= v^Long_r,k',l-d/4[(N_H^z-1)cosϕ^Long_k',l-(N_H^x-1)sinϕ^Long_k',lcosθ^Long_k',l]ω^Long_ϕ,k',l +d/4 (N^x_H-1)cosϕ^Long_k',lsinθ^Long_k',lω^Long_θ,k',l, 59 B_k',l= d/2 (sinϕ^Long_k',lcosθ^Long_k',lω^Long_ϕ,k',l+cosϕ^Long_k',lsinθ^Long_k',lω^Long_θ,k',l), 60 C_k',l= -d/2cosϕ^Long_k',lω^Long_ϕ,k',l. 61Formula (58) indicates thatternary pairs (n_R^x,n_R^z,v^Long,vir_k',l,n^x_R,n^z_R) could form a plane in three-dimensional space.Therefore, we can use the least squares (LS) method for planar fitting of {(n_R^x,n_R^z,v̌^Long,vir_k',l,n^x_R,n^z_R)|n_R^x =0,1,...,N^x_R-1; n_R^z =0,1,...,N^z_R-1}, andwe record the parameter results of plane fitting as Ǎ_k',l, B̌_k',l and Č_k',l.Then based on (59), (60) and (61), the pitch angular velocity, horizontal angular velocity, and radial velocity of the k'-th dynamic target within the l-th TTS can be sequentially estimated asω̌_ϕ,k',l^Long= -2Č_k',l/dcosϕ^Long_k',l, 62 ω̌_θ,k',l^Long= -2B̌_k',l/d -sinϕ̌^Long_k',lcosθ̌^Long_k',lω̌_ϕ,k',l^Long/cosϕ̌^Long_k',lsinθ̌^Long_k',l, 63 v̌_r,k',l^Long=Ǎ_k',l+d/4[(N_H^z-1)cosϕ̌^Long_k',l-(N_H^x-1)sinϕ̌^Long_k',lcosθ̌^Long_k',l]ω̌^Long_ϕ,k',l 35pt -d/4 (N^x_H-1)cosϕ̌^Long_k',lsinθ̌^Long_k',lω̌^Long_θ,k',l. 64 Based on (48), (51), (53), (62), (63) and (64), we have estimated the 6D motion parameters of the k'-th dynamic target within the l-th TTS as 𝐒̌^Long_k',l = [ř^Long_k',l,θ̌_k',l^Long,ϕ̌_k',l^Long,v̌_r,k',l^Long,ω̌_θ,k',l^Long,ω̌_ϕ,k',l^Long]^T. Fig. 4 shows an example of virtual velocity estimation and fitting.It can be seen from the figure that different antennas have observed different virtual velocities for the same dynamic target,the 2D antenna index and virtual velocity form a plane in the 3D coordinate system, and thus we can recover the radial velocity, horizontal angular velocity, and pitch angular velocity of the target from these virtual velocities {(n_R^x,n_R^z,v̌^Long,vir_k',l,n^x_R,n^z_R)|n_R^x =0,1,...,N^x_R-1; n_R^z =0,1,...,N^z_R-1} based on formulas from (58) to (64).§.§ Long-Term Motion Tracking We consider the 6D motion parameters of the k-th dynamic target 𝐒^Long_k,l as the state of one micro-system, and consider the 𝐒̌^Long_k,lobtained through the 6Dsensingalgorithm as the observation of this micro-system. Based on the formulas from (8) to (13), the state equationcan be expressed as𝐒^Long_k,l+1=Φ𝐒^Long_k,l+ 𝐁𝐮^Long_k,l+ 𝐰_k,l^Long, 65where Φ =[ 𝐈_3× 3 -T_TTS𝐈_3× 3; 0_3× 3 𝐈_3× 3 ] isstate transition matrix, 𝐁 =[ -1/2T_TTS^2𝐈_3× 3;-T_TTS𝐈_3× 3 ] isdisturbance driven matrix, 𝐰_k,l^Long is the state noise matrix and itscovariance matrix is 𝐐. 
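(The Kalman-filter observation model is completed below.) To make the plane-fitting step concrete: given the per-antenna virtual-velocity estimates, (58) is a linear model in the unknowns (A, B, C), so ordinary least squares recovers them, after which (62)-(64) invert for the angular velocities and the radial velocity. A minimal Python sketch, with hypothetical plane coefficients and noise level:

    import numpy as np

    rng = np.random.default_rng(3)
    c, f0 = 3e8, 100e9
    d = 0.5 * c / f0
    NRx, NRz = 16, 16

    # Ground-truth plane coefficients of (58), chosen arbitrarily for illustration.
    A0, B0, C0 = 15.0, 2.1e-3, -1.4e-3
    nx, nz = np.meshgrid(np.arange(NRx), np.arange(NRz), indexing='ij')
    v_vir = A0 + B0 * nx + C0 * nz \
          + 1e-4 * rng.standard_normal((NRx, NRz))    # noisy per-antenna estimates

    # Least-squares plane fit: v_vir = A + B * nx + C * nz.
    X = np.column_stack([np.ones(nx.size), nx.ravel(), nz.ravel()])
    A_hat, B_hat, C_hat = np.linalg.lstsq(X, v_vir.ravel(), rcond=None)[0]

    phi = np.deg2rad(20.0)                        # pitch angle from the angle step
    omega_phi = -2 * C_hat / (d * np.cos(phi))    # (62), in rad/s for these toy units
    print(A_hat, B_hat, C_hat, omega_phi)         # (63)-(64) then give omega_theta, v_r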
Besides, the observation equation of the micro-system can be represented as𝐒̌^Long_k,l=𝐆𝐒^Long_k,l+ 𝐯_k,l^Long, 66where 𝐆=𝐈_6× 6 is the observation matrix, 𝐯_k,l^Long is equivalent observation noise vector and its covariance matrix is 𝐑. Then we can useKalman filtering (KF) <cit.> to track the long-term motion of the k-th dynamic target as follows: 1) Initialization: ISAC BS can obtain the 6D parameters estimation 𝐒^SBS_k of the k-th dynamic target through beamscanning duringSBS stage. Next, to enter the SBT stage, we initializethe time as l = 0,the observation as𝐒̌^Long_k,0 = 𝐒^SBS_k,the state estimation as 𝐒̂^Long_k,0 = 𝐒^SBS_k, and𝐏̂_k,0=𝐈_6× 6.2) State prediction:Based on𝐒̂^Long_k,l-1,the state predictionwithinthe l-th TTS can be calculated as𝐒̃^Long_k,l=Φ𝐒̂^Long_k,l-1.3) Observation prediction: The observation prediction within the l-th TTS can be computed as 𝐒̃̌̃^Long_k,l=𝐆𝐒̃^Long_k,l.4) Calculate Kalman gain: Based on 𝐏̂_k,l-1, we can compute 𝐏̃_k,l=Φ𝐏̂_k,l-1Φ^T. Then the Kalman gain can be obtained as 𝐊^gain_k,l=𝐏̃_k,l𝐆^T (𝐆𝐏̃_k,l𝐆^T+𝐑)^-1.5) State estimation update: The KF estimation of the 6D motionparameters can be updated andrepresented as 𝐒̂^Long_k,l =𝐒̃^Long_k,l+ 𝐊^gain_k,l(𝐒̌^Long_k,l-𝐒̃̌̃^Long_k,l).Besides, 𝐏̂_k,l can be updated as 𝐏̂_k,l=(𝐈_6× 6-𝐊^gain_k,l𝐆)𝐏̃_k,l. Based on the abovesteps, we can continuously track the dynamic targets within L TTS, and we employ 𝐒̂^Long_k,l as the final 6D motion parameters estimation result of the k-th dynamic target within the l-th TTS.§ SIMULATION RESULTS In simulations,we setthe lowest carrier frequency of the ISAC system as f_0 = 100 GHz, set the subcarrier frequency interval as Δ f = 480 kHz,and set the antenna spacing as d=1/2λ. To succinctly display the simulation results, we set the horizontal angle and horizontal angular velocity of the dynamic target as θ_k = 90^∘ and ω_θ,k=0, and thus the dynamic target is fixed to move within a 2D plane. Then wefocus on the sensing accuracy of the {r_k,ϕ_k, v_r,k,ω_ϕ,k} parameters of the dynamic target.Specifically,for the aspect of evaluating6D radar sensing, the rootmean square error (RMSE) of distance sensing,angle sensing,radial velocity sensing andangular velocity sensingare defined asRMSE_r=√(∑ _i=1^Count(ř_s(i)-r_s)^2/Count), RMSE_ϕ=√(∑ _i=1^Count(ϕ̌_s(i)-ϕ_s)^2/Count), RMSE_v_r=√(∑ _i=1^Count(v̌_r,s(i)-v_r,s)^2/Count), and RMSE_ω_ϕ=√(∑ _i=1^Count(ω̌_ϕ,s(i)-ω_ϕ,s)^2/Count),where Count is the number of the Monte Carlo runs, the real parameters of the dynamic target is (r_s,ϕ_s,v_r,s,ω_ϕ,s), and (ř_s,ϕ̌_s,v̌_r,s,ω̌_ϕ,s) is the estimation parameters of the target. §.§ The Performance of 6D Radar Single-Shot Sensing We set thatthe number of subcarriers is M=128,the number of OFDM symbols is N=64, the number of the antennas in HU-UPA is N_H=64, the number of the antennas in RU-UPA is N_R=256. Fig. 5 shows the single-shot sensing RMSE of the proposed scheme for different dynamic targetswith different motion parameters versus SNR. It can be seenthat the RMSE_ϕ, RMSE_r, RMSE_v_r, and RMSE_ω_ϕ gradually decrease with the increase of SNR. When SNR =0 dB, the average sensing RMSEs are RMSE_ϕ=0.0097^∘, RMSE_r=0.0031m, RMSE_v_r=0.0267m/s, and RMSE_ω_ϕ=0.2208^∘/s. 
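(Before continuing with the discussion of Fig. 5, the tracking recursion of steps 1)-5) above can be summarized compactly.) The Python sketch below implements one predict/update iteration. Note one deviation, flagged here as an assumption: step 4) of the text propagates the covariance without an explicit process-noise term, whereas the standard Kalman recursion adds Q in the prediction, which is what this sketch does. The slot duration and both noise covariances are hypothetical.

    import numpy as np

    T_TTS = 0.1                                  # hypothetical slot duration
    Phi = np.block([[np.eye(3), -T_TTS * np.eye(3)],
                    [np.zeros((3, 3)), np.eye(3)]])   # state transition of (65)
    G = np.eye(6)                                # observation matrix of (66)
    Q = 1e-4 * np.eye(6)                         # hypothetical process-noise covariance
    Rn = 1e-2 * np.eye(6)                        # hypothetical observation-noise covariance

    def kf_step(S_hat, P_hat, S_meas):
        # One tracking iteration, steps 2)-5): predict, gain, update.
        S_pred = Phi @ S_hat                                        # state prediction
        P_pred = Phi @ P_hat @ Phi.T + Q                            # covariance prediction
        K = P_pred @ G.T @ np.linalg.inv(G @ P_pred @ G.T + Rn)     # Kalman gain
        S_new = S_pred + K @ (S_meas - G @ S_pred)                  # state update
        P_new = (np.eye(6) - K @ G) @ P_pred
        return S_new, P_new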
When SNR increases to 20 dB, the average sensing RMSEs decrease to RMSE_ϕ=0.0007^∘, RMSE_r=0.0003m, RMSE_v_r=0.0024m/s, and RMSE_ω_ϕ=0.0200^∘/s.Unlike most existing ISAC studiesbelieving that only the radial velocity of far-field dynamic target can be measured based on one single BS.These simulation results indicate that the proposed 6D radar sensing algorithm has high sensing accuracy, especially confirming that one single BS with MIMO array can effectively estimate the angular velocity of the dynamic target. Besides,it is found from Fig. 5(b) andFig. 5(c) that under the same system parameter settings, the distance sensing and radial velocity sensing performance of dynamic targets with different motion parameters are basically consistent. However, it is seen from Fig. 5(a) andFig. 5(d) thatunder the same system parameter settings,the accuracy of dynamic target angle sensing and angular velocity senisng gradually improves as the target approaches 0^∘, mainly because the MIMO array hasnarrower beamwidth near 0^∘, thus improving the accuracy of angle sensing. Since the angular velocity sensing depends on the angle change of the target, thenarrower beam near 0^∘ also brings higher angular velocity sensing accuracy. §.§ The Impact of System Parameters on the Performance of 6D Radar Single-Shot Sensing We take the sensing ofthedynamic target with motion parameters (ϕ=20^∘, r=120m, v_r=15m/s, ω_ϕ=8^∘/s) as the example, and investigate the impact of system parameter settings on the performance of 6D radar single-shot sensing.Fig. 6 shows the variation curves of distance sensing, radial velocity sensing, and angular velocity sensing versus SNR under different number ofOFDM symbols. It can be seen from Fig. 6(a)that the RMSE_r gradually decreases as the number of OFDM symbols N increases. This is because more OFDM symbols bring more observations to the distance array, making the estimation of the covariance matrix of the distance array more accurate, andthereby improving the accuracy of distance sensing. Besides,it can be found from Fig. 6(b) and Fig. 6(c) that the RMSE_v_r and the RMSE_ω_ϕ significantly decrease with the increase of N. This is because more OFDM symbols form a larger virtual velocity array, making the sensing of radial velocity and angular velocity more accurate.Fig. 7 shows the variation curves ofsensing RMSEs versus SNR under different number ofsubcarriers. It can be seen from Fig. 7(a) that the RMSE_r gradually decreases with the increase of the number of subcarriers M, because more subcarriers can form a larger distance array, thereby improving the accuracy of distance sensing. It can be found fromFig. 7(b) and Fig. 7(c) that theRMSE_v_r and the RMSE_ω_ϕgradually decrease with the increase of M. This is because more subcarriers bring more observations to the virtual velocity array, making the covariance matrix estimation of the virtual velocity array more accurate, thereby improving the sensing accuracy of radial velocity and angular velocity.Fig. 8 shows the variation curves ofsensing RMSEs versus SNR under different number of antennas.It can be seen from Fig. 8(a) that RMSE_r gradually decreases as the number of antennas N_R increases. This is because more receiving antennas provide more observations fordistance array, thereby improving the accuracy of distance sensing. More importantly, it can be observed fromFig. 8(b) and Fig. 8(c) that theRMSE_v_r and the RMSE_ω_ϕ gradually decrease with the increase of N_R. 
This is because, when more receiving antennas measure the virtual velocity, the system can better fit the virtual velocity plane, thereby recovering the radial velocity and angular velocity of the dynamic target more accurately.

§.§ The Performance of Multiple-Shots Tracking

We set the dynamic target with initial motion parameters ϕ=55^∘, r=100 m, v_r=8 m/s, ω_ϕ=4^∘/s, and the BS needs to track this target within 8 seconds. We set the system parameters as M=128, N=64, N_H=64, and N_R=256. Fig. 9 shows the performance of multiple-shots tracking for the dynamic target when SNR = 0 dB. It can be seen from the figure that the dynamic target tracking algorithm based on Kalman filtering can further improve the sensing accuracy over 6D radar single-shot sensing, especially for angular velocity sensing. These simulation results verify the effectiveness of the proposed scheme.

§ CONCLUSIONS

In this paper, we have proposed a novel scheme for 6D radar sensing and tracking of a dynamic target based on a MIMO array in a monostatic ISAC system. We have re-examined and re-derived the relationship between the 6D motion parameters of a dynamic target and the sensing echo channel of a MIMO-ISAC system, and found that the sensing echo channel actually encodes the distance, horizontal angle, pitch angle, radial velocity, horizontal angular velocity, and pitch angular velocity of the dynamic target. Specifically, we have proposed the 6D long-term and short-term motion models of the dynamic target. Then we have derived the sensing channel model corresponding to the short-term motion. Next, for single-shot sensing, we employed array signal processing methods to estimate the dynamic target's distance, horizontal angle, pitch angle, and virtual velocity. We found that the virtual velocities observed by different antennas differ, which allowed us to utilize plane parameter fitting to estimate the radial velocity, horizontal angular velocity, and pitch angular velocity of the dynamic target. Furthermore, we have realized multiple-shots tracking of the dynamic target based on the single-shot sensing results and Kalman filtering. Simulation results have been provided to demonstrate the effectiveness of the proposed 6D radar sensing and tracking scheme.
{ "authors": [ "Hongliang Luo", "Feifei Gao", "Fan Liu", "Shi Jin" ], "categories": [ "eess.SP" ], "primary_category": "eess.SP", "published": "20231227071106", "title": "6D Radar Sensing and Tracking in Monostatic Integrated Sensing and Communications System" }
Andreev bound states in superconductor-barrier-superconductor junctions of Rarita-Schwinger-Weyl semimetals

Ipsita Mandal

Department of Physics, Shiv Nadar Institution of Eminence (SNIoE), Gautam Buddha Nagar, Uttar Pradesh 201314, India
===========================================================================================================

We consider a superconductor-barrier-superconductor (S-B-S) sandwich configuration built with a Rarita-Schwinger-Weyl semimetal featuring four band crossings at a single nodal point. Assuming a homogeneous s-wave pairing in each superconducting region, and a barrier region created by applying a voltage of magnitude V_0 across a piece of normal-state semimetal, we apply the BdG formalism to compute the discrete energy spectrum ε of the subgap Andreev bound states in the short-barrier regime. In contrast with the two-band semimetals studied earlier, we find up to four pairs of localized states (rather than one pair for two-band semimetals) in the thin-barrier limit, and each value of ε has a complicated dependence on the phase difference φ_12 via cosine and sine functions, which cannot be determined analytically. These are artifacts of multiple band crossings hosting quasiparticles of pseudospin value greater than 1/2. Using the bound state energies, we compute the Josephson current across the junction configuration.

§ INTRODUCTION

A large number of gapless topological phases have been discovered in recent years, characterized by the Brillouin zone (BZ) harbouring pairs of points where two or more bands cross <cit.>. This results in a nontrivial topology in the momentum space, exhibiting nonzero Chern numbers about each band-crossing point. The associated materials are called semimetals due to the existence of the gapless nodal points where the density of states goes to zero. The simplest and most well-known three-dimensional (3d) semimetal is the Weyl semimetal (WSM) <cit.>, which exhibits an isotropic linear-in-momentum dispersion with two bands crossing at a point. A simple generalization of the WSM is a multifold semimetal with isotropic linear dispersion, whose low-energy effective Hamiltonian can be expressed as ∼𝐤·𝒮, where 𝒮 represents the vector consisting of the matrices for a particular value of pseudospin, with the nomenclature “pseudospin” being used to unambiguously differentiate it from the actual (relativistic) spin. The higher-pseudospin semimetals (i.e., with pseudospin value greater than 1/2) constitute natural generalizations of the WSM Hamiltonian ∼𝐤·σ, originating from the number of bands being higher than two.[Here we have used the usual convention that σ represents the vector of the three Pauli matrices, implying that the WSM hosts pseudospin-1/2 quasiparticles.] Examples of multifold semimetals include the pseudospin-1 Maxwell fermions <cit.> (with threefold band-crossings) and the pseudospin-3/2 Rarita-Schwinger-Weyl (RSW) semimetals <cit.> (with fourfold band-crossings). In high-energy physics, the Rarita-Schwinger (RS) equation describes the field equation for elementary (relativistic) particles with spin 3/2. Although they are postulated to exist in models based on supergravity <cit.>, they do not appear in the Standard Model, and none has been detected experimentally either.
On the other hand, an analogue of these relativistic spin-3/2 fermions exists in the form of quasiparticles carrying pseudospin-3/2 in condensed matter systems <cit.>, which of course are non-relativistic. In an effective Hamiltonian of the form shown in Eq. (<ref>), the four bands show linear-in-momentum dispersions fixed to the values ± 3|𝐤|/2 and ± |𝐤|/2 [cf. Eq. (<ref>)]. It has been argued that the large topological charges found in various materials like CoSi <cit.>, RhSi <cit.>, AlPt <cit.>, and PdBiSe <cit.> represent the features of an RSW semimetal.

One way to understand the consequences of the existence of nonzero-pseudospin quasiparticles is to study the Josephson effect in set-ups consisting of junctions between the normal (i.e., non-superconducting, abbreviated by “N”) and superconducting (abbreviated by “S”) states of various semimetals. The superconductivity is induced by the proximity effect, by placing a conventional s-wave superconductor atop the corresponding region <cit.>. Examples of relevant configurations include N-S <cit.>, S-N-S <cit.>, and S-B-S (where “B” indicates a potential barrier in the N region, which can be created by applying a gate voltage V_0 across the normal-state region) <cit.> junctions. These studies have considered both 2d <cit.> and 3d <cit.> semimetals. Although the pseudospin-1 semimetal has three bands crossing at a point, it has a flat (i.e., nondispersive) band which does not participate in transport. Hence, we extend the earlier studies by considering an S-B-S set-up, as shown in Fig. <ref>(a), constructed out of RSW semimetals, with the superconducting regions exhibiting spin-singlet s-wave pairing <cit.>. We consider the short-barrier regime, in which the barrier thickness L satisfies L ≪ ξ, where ξ is the superconducting coherence length (i.e., the subgap excitations decay over a length ξ inside the superconductor).

Let the strength of the superconducting order parameter be given by Δ = Δ_0 e^i φ and let ε represent the energy of the states. We get a set of discrete states for |ε| < Δ_0, also known as the subgap excitations. For |ε| > Δ_0, the states form a continuum. Due to the fact that four bands cross at a single node of an RSW semimetal, it is expected to exhibit features which are distinct from the systems studied so far. In particular, for two-band semimetals, it has been shown <cit.> that the energy of the Andreev bound states (ABSs) in the thin-barrier limit is given by ε = ± Δ_0 √(1 - T_N sin^2(φ_12/2)), where φ_12 is the difference of the superconducting phases on the two sides of the barrier region and T_N is the transmission coefficient in an analogous set-up with the superconducting regions replaced by the normal state of the semimetal. This result follows from the fact that the solution for η ≡ cos(2β), with β = arccos(ε/Δ_0), is obtained from a linear equation (i.e., a first-order polynomial equation in η) whose η-independent coefficient contains a term proportional to cos φ_12. However, this result does not hold true for an RSW semimetal, where we get more than two pairs of ABSs with energies ± |ε|. This results from the fact that the solution for the RSW case involves a complex-valued quartic equation in the variable η_R ≡ exp(2 i β), with both cos φ_12 and sin φ_12 (and their products) appearing in various coefficients. We consider the propagation of quasiparticles and quasiholes in a slab of square cross-section with a transverse width W, where W is assumed to be large enough to impose periodic boundary conditions along the transverse directions.
The propagation direction of the quasiparticles/quasiholes is taken to be parallel/antiparallel to the z-axis. In the short-barrier regime, the dominant contribution to the Josephson current comes from the subgap states (i.e., the bound states populating the discrete Andreev energy levels) <cit.>, because the contributions from the excited states in the continuum (with the magnitude of the energy ε exceeding Δ_0) are smaller by a factor L/ξ and, hence, are negligible in this limit. Furthermore, we compute the energies of the ABSs in the thin-barrier limit, which is the limit when the strength of the potential barrier V_0 → ∞ and L → 0, with χ ≡ V_0 L held fixed at a finite value.

The paper is organized as follows. In Sec. <ref>, we describe the low-energy effective Hamiltonian of the RSW semimetal in its normal state, and show its eigenvalues and eigenfunctions. In Sec. <ref>, the S-B-S junction set-up is explained and the BdG Hamiltonian is constructed. The expressions for electronlike and holelike wavefunctions are also elucidated there. This is followed by Sec. <ref>, where the methodology employed to obtain the ABS spectrum is explained and some representative values are illustrated in various parameter regimes. We also numerically find the Josephson current from these bound states. Finally, we end with a summary and outlook in Sec. <ref>.

§ RSW SEMIMETAL

It has been shown that crystal structures belonging to the eight space groups 207-214 host fourfold topological degeneracies about the Γ, R, and/or H points <cit.>. On linearizing the 𝐤·𝐩 Hamiltonian about such a nodal point, we arrive at the effective continuum Hamiltonian in the low-energy limit, captured by

ℋ_RSW(𝐤) = v 𝐤·𝐉,

where v denotes the magnitude of the group velocity of the quasiparticles and 𝒮 = 𝐉. Henceforth, we will set v = 1 for the sake of simplicity. The system hosts pseudospin-3/2 RSW quasiparticles, which is reflected by the fact that the three components of 𝐉 form the spin-3/2 representation of the SO(3) group. A standard representation of 𝐉 is given by

J_x = ( [ 0 √(3)/2 0 0; √(3)/2 0 1 0; 0 1 0 √(3)/2; 0 0 √(3)/2 0 ]), J_y = ( [ 0 -√(3) i/2 0 0; √(3) i/2 0 -i 0; 0 i 0 -√(3) i/2; 0 0 √(3) i/2 0 ]), J_z = 1/2 ( [ 3 0 0 0; 0 1 0 0; 0 0 -1 0; 0 0 0 -3 ]).

The energy eigenvalues take the forms

ℰ_3/2^s(𝐤) = s 3|𝐤|/2 and ℰ_1/2^s(𝐤) = s |𝐤|/2, where s = ±,

demonstrating four linearly dispersing bands crossing at a point [cf. Fig. <ref>(b)]. Here, the “+” and “-” signs, as usual, refer to the conduction and valence bands, respectively. The corresponding orthonormal eigenvectors are given by

Ψ^s_3/2(𝐤) = 1/𝒩^s_3/2 [ { s k (k_x^2+k_y^2+4 k_z^2) + k_z (3 k_x^2+3 k_y^2+4 k_z^2) } / (k_x+i k_y)^3,  √(3) { 2 k_z (s k+k_z)+k_x^2+k_y^2 } / (k_x+i k_y)^2,  √(3) (s k+k_z) / (k_x+i k_y),  1 ]^T (for energy ℰ_3/2^s), and

Ψ^s_1/2(𝐤) = 1/𝒩^s_1/2 [ -(s k+k_z)(k_x-i k_y) / (k_x+i k_y)^2,  { 2 k_z (s k+k_z)-k_x^2-k_y^2 } / { √(3) (k_x+i k_y)^2 },  (s k+3 k_z) / { √(3) (k_x+i k_y) },  1 ]^T (for energy ℰ_1/2^s),

where k = √(k_x^2+k_y^2+k_z^2), and 1/𝒩^s_3/2 and 1/𝒩^s_1/2 denote the corresponding normalization factors. If the Fermi energy cuts the bands at energy E, then for propagation along the z-direction, the corresponding plane waves carry the factors e^i sgn(E) k_z^(3/2) z and e^i sgn(E) k_z^(1/2) z, with k_z^(3/2) = √((E/(3/2))^2 - k_x^2 - k_y^2) and k_z^(1/2) = √((E/(1/2))^2 - k_x^2 - k_y^2).
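As a quick numerical cross-check of the representation and dispersions above, one can diagonalize 𝐤·𝐉 directly. The following self-contained sketch (variable names are ours) builds the spin-3/2 matrices quoted above and verifies that the eigenvalues are ± 3|𝐤|/2 and ± |𝐤|/2 for an arbitrary momentum:

```python
import numpy as np

sq3 = np.sqrt(3.0)
# Spin-3/2 matrices in the standard representation quoted above
Jx = 0.5 * np.array([[0, sq3, 0, 0],
                     [sq3, 0, 2, 0],
                     [0, 2, 0, sq3],
                     [0, 0, sq3, 0]], dtype=complex)
Jy = 0.5j * np.array([[0, -sq3, 0, 0],
                      [sq3, 0, -2, 0],
                      [0, 2, 0, -sq3],
                      [0, 0, sq3, 0]], dtype=complex)
Jz = np.diag([1.5, 0.5, -0.5, -1.5]).astype(complex)

def h_rsw(k):
    """Low-energy RSW Hamiltonian H = v k . J, with v = 1."""
    return k[0] * Jx + k[1] * Jy + k[2] * Jz

k = np.array([0.3, -0.4, 1.2])
kn = np.linalg.norm(k)
evals = np.sort(np.linalg.eigvalsh(h_rsw(k)))   # Hermitian eigenvalues, ascending
assert np.allclose(evals, [-1.5 * kn, -0.5 * kn, 0.5 * kn, 1.5 * kn])
```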
§ S-B-S JUNCTION

In order to get the S-B-S configuration [cf. Fig. <ref>(a)], we model the superconducting pair potential as <cit.>

Δ(z) = Δ_0 e^i φ_1 Γ for z ≤ 0,  0 for 0 < z < L,  Δ_0 e^i φ_2 Γ for z ≥ L,

Γ = i ( (J_y J_z + J_z J_y)/√(3) ) ( (J_x J_y + J_y J_x)/√(3) ),

representing Cooper pairing in the s-wave channel. Due to the presence of the barrier region, we need to consider the potential energy

V(z) = 0 for z ≤ 0 and z ≥ L,  V_0 for 0 < z < L.

The resulting Bogoliubov-de Gennes (BdG) Hamiltonian is given by

H = 1/2 ∑_𝐤 Ψ^†_𝐤 H_BdG(𝐤) Ψ_𝐤,  Ψ_𝐤 = [ c_1(𝐤)  c_2(𝐤)  c_3(𝐤)  c_4(𝐤)  c_1^†(-𝐤)  c_2^†(-𝐤)  c_3^†(-𝐤)  c_4^†(-𝐤) ]^T,

H_BdG(𝐤) = ( [ ℋ_RSW(𝐤) - E_F + V(z)  Δ(z);  Δ^†(z)  E_F - V(z) - ℋ_RSW^T(-𝐤) ]).

Here, we demarcate the left superconducting region as “region I”, the middle barrier region as “region II”, and the right superconducting region as “region III”. The electron-like and hole-like BdG quasiparticles are obtained from the eigenvalue equation

H_BdG(𝐤 → -i ∇_𝐫) ψ_𝐤(𝐫) = ε ψ_𝐤(𝐫),

where 𝐫 = (x, y, z) is the position vector. Using Eq. (<ref>), let us now elucidate the form of the eigenfunction

ψ_𝐤(𝐫) = ψ_I(𝐫, k_⊥) Θ(-z) + ψ_II(𝐫, k_⊥) Θ(z) Θ(L-z) + ψ_III(𝐫, k_⊥) Θ(z-L)

in a piecewise manner for the three regions when the Fermi energy is E_F. We assume that V_0 ≫ E_F ≫ Δ_0 and (V_0 - E_F) ≫ E_F.[The condition Δ_0 ≪ E_F ensures that the mean-field approximation, applicable for using the BdG formalism, is valid. The second condition, (V_0 - E_F) ≫ E_F, arises because we are focussing on the short-barrier regime.] Since the propagation direction is along the z-axis, the translation symmetry is broken in that direction, whereas the transverse momentum components k_x and k_y are conserved across the S-B and B-S junctions. We denote the magnitude of the transverse component as k_⊥ = √(k_x^2 + k_y^2) and the azimuthal angle as ϕ = arctan(k_y/k_x).
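Before writing down the wavefunctions region by region, it may help to see how H_BdG is assembled for a single homogeneous region at fixed 𝐤. The sketch below (our own naming, reusing Jx, Jy, Jz from the previous listing; a fixed-momentum illustration, not the full position-dependent boundary problem) constructs the pairing matrix Γ and the 8 × 8 BdG block matrix:

```python
import numpy as np

def gamma_matrix(Jx, Jy, Jz):
    """Pairing matrix Gamma = i [(Jy Jz + Jz Jy)/sqrt(3)] [(Jx Jy + Jy Jx)/sqrt(3)]."""
    A = (Jy @ Jz + Jz @ Jy) / np.sqrt(3.0)
    B = (Jx @ Jy + Jy @ Jx) / np.sqrt(3.0)
    return 1j * A @ B

def h_bdg(k, E_F, V, Delta0, phi, Jx, Jy, Jz):
    """8x8 BdG matrix for a region with potential V and pairing Delta0 e^{i phi} Gamma."""
    H = k[0] * Jx + k[1] * Jy + k[2] * Jz        # H_RSW(k)
    H_mk = -(k[0] * Jx + k[1] * Jy + k[2] * Jz)  # H_RSW(-k)
    Delta = Delta0 * np.exp(1j * phi) * gamma_matrix(Jx, Jy, Jz)
    eye4 = np.eye(4)
    top = np.hstack([H - (E_F - V) * eye4, Delta])
    bot = np.hstack([Delta.conj().T, (E_F - V) * eye4 - H_mk.T])
    return np.vstack([top, bot])
```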
* In the right superconductor region, the wavefunction localizing at the interface is described by a linear combination of the following form (see chapter 5 of Ref. <cit.>):

ψ_III(𝐫, k_⊥) = a_32r ψ_3/2^er(𝐫, θ_32^r) + a_12r ψ_1/2^er(𝐫, θ_12^r) + b_32r ψ_3/2^hr(𝐫, θ̃_32^r) + b_12r ψ_1/2^hr(𝐫, θ̃_12^r),

where

ψ_3/2^er(𝐫, θ_32^r) = e^i { k_x x + k_y y + k_z^(3/2),er (z-L) } e^{-i ϕ 1_2×2 ⊗ J_z}/√(2) × [ e^i β cos^3(θ_32^r/2)/√(3),  e^i β sin θ_32^r cos(θ_32^r/2)/2,  e^i β sin(θ_32^r/2) sin θ_32^r/2,  e^i β sin^3(θ_32^r/2)/√(3),  -i e^{-i φ_2} sin^3(θ_32^r/2)/√(3),  i e^{-i φ_2} sin(θ_32^r/2) sin θ_32^r/2,  -i e^{-i φ_2} sin θ_32^r cos(θ_32^r/2)/2,  i e^{-i φ_2} cos^3(θ_32^r/2)/√(3) ]^T,

sin θ_32^r ≃ (3 k_⊥/2)/E_F,  k_z^(3/2),er ≃ k^(3/2)_mod + i κ_1,  k^(3/2)_mod ≃ √((E_F/(3/2))^2 - k_⊥^2),  κ_1 = E_F Δ_0 sin β/[(3/2)^2 k^(3/2)_mod],  tan θ_32^r ≃ k_⊥/k^(3/2)_mod;

ψ_1/2^er(𝐫, θ_12^r) = e^i { k_x x + k_y y + k_z^(1/2),er (z-L) } e^{-i ϕ 1_2×2 ⊗ J_z}/√(2) × [ -√(3) e^i β sin θ_12^r cos(θ_12^r/2)/2,  e^i β cos(θ_12^r/2)(3 cos θ_12^r - 1)/2,  e^i β sin(θ_12^r/2)(3 cos θ_12^r + 1)/2,  √(3) e^i β sin(θ_12^r/2) sin θ_12^r/2,  -i √(3) e^{-i φ_2} sin(θ_12^r/2) sin θ_12^r/2,  i e^{-i φ_2} sin(θ_12^r/2)(3 cos θ_12^r + 1)/2,  i e^{-i φ_2} cos(θ_12^r/2)(1 - 3 cos θ_12^r)/2,  -i √(3) e^{-i φ_2} sin θ_12^r cos(θ_12^r/2)/2 ]^T,

sin θ_12^r ≃ (k_⊥/2)/E_F,  k_z^(1/2),er ≃ k^(1/2)_mod + i κ_2,  k^(1/2)_mod ≃ √((E_F/(1/2))^2 - k_⊥^2),  κ_2 = E_F Δ_0 sin β/[(1/2)^2 k^(1/2)_mod],  tan θ_12^r ≃ k_⊥/k^(1/2)_mod;

ψ_3/2^hr(𝐫, θ̃_32^r) = e^i { k_x x + k_y y + k_z^(3/2),hr (z-L) } e^{-i ϕ 1_2×2 ⊗ J_z}/√(2) × [ e^i φ_2 cos^3(θ̃_32^r/2)/√(3),  e^i φ_2 sin θ̃_32^r cos(θ̃_32^r/2)/2,  e^i φ_2 sin(θ̃_32^r/2) sin θ̃_32^r/2,  e^i φ_2 sin^3(θ̃_32^r/2)/√(3),  -i e^i β sin^3(θ̃_32^r/2)/√(3),  i e^i β sin(θ̃_32^r/2) sin θ̃_32^r/2,  -i e^i β sin θ̃_32^r cos(θ̃_32^r/2)/2,  i e^i β cos^3(θ̃_32^r/2)/√(3) ]^T,

sin θ̃_32^r ≃ (3 k_⊥/2)/E_F,  k_z^(3/2),hr ≃ -k^(3/2)_mod + i κ_1,  tan θ̃_32^r ≃ k_⊥/(-k^(3/2)_mod);

ψ_1/2^hr(𝐫, θ̃_12^r) = e^i { k_x x + k_y y + k_z^(1/2),hr (z-L) } e^{-i ϕ 1_2×2 ⊗ J_z}/√(2) × [ -√(3) e^i φ_2 sin θ̃_12^r cos(θ̃_12^r/2)/2,  e^i φ_2 cos(θ̃_12^r/2)(3 cos θ̃_12^r - 1)/2,  e^i φ_2 sin(θ̃_12^r/2)(3 cos θ̃_12^r + 1)/2,  √(3) e^i φ_2 sin(θ̃_12^r/2) sin θ̃_12^r/2,  -i √(3) e^i β sin(θ̃_12^r/2) sin θ̃_12^r/2,  i e^i β sin(θ̃_12^r/2)(3 cos θ̃_12^r + 1)/2,  i e^i β cos(θ̃_12^r/2)(1 - 3 cos θ̃_12^r)/2,  -i √(3) e^i β sin θ̃_12^r cos(θ̃_12^r/2)/2 ]^T,

sin θ̃_12^r ≃ (k_⊥/2)/E_F,  k_z^(1/2),hr ≃ -k^(1/2)_mod + i κ_2,  tan θ̃_12^r ≃ k_⊥/(-k^(1/2)_mod);

and β = arccos(ε/Δ_0). The above represent right-moving electron-like and hole-like wavefunctions (using the nomenclature from Sec. S2 of Ref. <cit.>). The expressions for the various angles and the z-components of the momenta shown above are valid in the limit Δ_0 ≪ E_F, which we have assumed to hold true.
Clearly, in this regime, we find that θ̃_32^r ≃ π - θ_32^r and θ̃_12^r ≃ π - θ_12^r. That the “right-moving” wavefunctions are the admissible ones in this region follows from the fact that, when we solve bound state problems in quantum mechanics (for example, a Schrödinger particle tunneling through a Dirac delta potential barrier), we get both decaying and exponentially increasing wavefunctions; to obtain physically admissible solutions, we retain only the decaying ones.

* In the normal state region, we will have a linear combination of the following form:

ψ_II(𝐫, k_⊥) = a_32 ψ_3/2^e+(𝐫, θ_32n) + b_32 ψ_3/2^e-(𝐫, θ_32n) + a_12 ψ_1/2^e+(𝐫, θ_12n) + b_12 ψ_1/2^e-(𝐫, θ_12n) + c_32 ψ_3/2^h+(𝐫, θ̃_32n) + d_32 ψ_3/2^h-(𝐫, θ̃_32n) + c_12 ψ_1/2^h+(𝐫, θ̃_12n) + d_12 ψ_1/2^h-(𝐫, θ̃_12n),

where

ψ_3/2^e+(𝐫, θ_32n) = e^i (k_x x + k_y y + k_z^(3/2),e z) f_1(θ_32n),  f_1(θ_32n) = e^{-i ϕ 1_2×2 ⊗ J_z} [ -sin^3(θ_32n/2),  √(3) sin(θ_32n/2) sin θ_32n/2,  -√(3) sin θ_32n cos(θ_32n/2)/2,  cos^3(θ_32n/2),  0, 0, 0, 0 ]^T,

ψ_3/2^e-(𝐫, θ_32n) = e^i (k_x x + k_y y - k_z^(3/2),e z) f_1(π - θ_32n),  k_z^(3/2),e = -√(((V_0 - E_F - ε)/(3/2))^2 - k_⊥^2),  cos θ_32n = k_z^(3/2),e/[2(ε + E_F - V_0)/3],  sin θ_32n = k_⊥/[2(ε + E_F - V_0)/3];

ψ_1/2^e+(𝐫, θ_12n) = e^i (k_x x + k_y y + k_z^(1/2),e z) f_2(θ_12n),  f_2(θ_12n) = e^{-i ϕ 1_2×2 ⊗ J_z} [ √(3) sin(θ_12n/2) sin θ_12n/2,  -sin(θ_12n/2)(3 cos θ_12n + 1)/2,  cos(θ_12n/2)(3 cos θ_12n - 1)/2,  √(3) sin θ_12n cos(θ_12n/2)/2,  0, 0, 0, 0 ]^T,

ψ_1/2^e-(𝐫, θ_12n) = e^i (k_x x + k_y y - k_z^(1/2),e z) f_2(π - θ_12n),  k_z^(1/2),e = -√(((V_0 - E_F - ε)/(1/2))^2 - k_⊥^2),  cos θ_12n = k_z^(1/2),e/[2(ε + E_F - V_0)],  sin θ_12n = k_⊥/[2(ε + E_F - V_0)];

ψ_3/2^h+(𝐫, θ̃_32n) = e^i (k_x x + k_y y + k_z^(3/2),h z) f_3(θ̃_32n),  f_3(θ̃_32n) = e^{-i ϕ 1_2×2 ⊗ J_z} [ 0, 0, 0, 0,  cos^3(θ̃_32n/2),  √(3) sin θ̃_32n cos(θ̃_32n/2)/2,  √(3) sin(θ̃_32n/2) sin θ̃_32n/2,  sin^3(θ̃_32n/2) ]^T,

ψ_3/2^h-(𝐫, θ̃_32n) = e^i (k_x x + k_y y - k_z^(3/2),h z) f_3(π - θ̃_32n),  k_z^(3/2),h = √(((V_0 - E_F + ε)/(3/2))^2 - k_⊥^2),  cos θ̃_32n = k_z^(3/2),h/[2(ε - E_F + V_0)/3],  sin θ̃_32n = k_⊥/[2(ε - E_F + V_0)/3];

ψ_1/2^h+(𝐫, θ̃_12n) = e^i (k_x x + k_y y + k_z^(1/2),h z) f_4(θ̃_12n),  f_4(θ̃_12n) = e^{-i ϕ 1_2×2 ⊗ J_z} [ 0, 0, 0, 0,  -√(3) sin θ̃_12n cos(θ̃_12n/2)/2,  cos(θ̃_12n/2)(3 cos θ̃_12n - 1)/2,  sin(θ̃_12n/2)(3 cos θ̃_12n + 1)/2,  √(3) sin(θ̃_12n/2) sin θ̃_12n/2 ]^T,

ψ_1/2^h-(𝐫, θ̃_12n) = e^i (k_x x + k_y y - k_z^(1/2),h z) f_4(π - θ̃_12n),  k_z^(1/2),h = √(((V_0 - E_F + ε)/(1/2))^2 - k_⊥^2),  cos θ̃_12n = k_z^(1/2),h/[2(ε - E_F + V_0)],  sin θ̃_12n = k_⊥/[2(ε - E_F + V_0)].

* In the left superconductor region, we will have a linear combination of the following form:

ψ_I(𝐫, k_⊥) = a_32l ψ_3/2^el(𝐫, θ_32^r) + a_12l ψ_1/2^el(𝐫, θ_12^r) + b_32l ψ_3/2^hl(𝐫, θ̃_32^r) + b_12l ψ_1/2^hl(𝐫, θ̃_12^r),

where

{ ψ_3/2^el(𝐫, θ_32^r), ψ_1/2^el(𝐫, θ_12^r), ψ_3/2^hl(𝐫, θ̃_32^r), ψ_1/2^hl(𝐫, θ̃_12^r) } = { ψ_3/2^er(𝐫, π - θ_32^r), ψ_1/2^er(𝐫, π - θ_12^r), ψ_3/2^hr(𝐫, π - θ̃_32^r), ψ_1/2^hr(𝐫, π - θ̃_12^r) } |_{φ_2 → φ_1, (z-L) → z}.

This amounts to flipping the signs of { k_z^(3/2),er, k_z^(1/2),er, k_z^(3/2),hr, k_z^(1/2),hr }, because we need to consider here left-moving electron-like and hole-like wavefunctions <cit.>. The “left-moving” wavefunctions are the admissible ones in this region because they are the decaying ones. Since the final results depend only on the phase difference φ_12 = φ_2 - φ_1, we can set φ_1 = 0 and φ_2 = φ_12 to simplify the notation, without any loss of generality.
Imposing the continuity of the wavefunction at the junctions located at z = 0 and z = L, we get the following conditions:

ψ_I(x, y, 0, k_⊥) = ψ_II(x, y, 0, k_⊥) and ψ_II(x, y, L, k_⊥) = ψ_III(x, y, L, k_⊥).

From the eight components of the wavefunction, we get 2 × 8 = 16 linear homogeneous equations in the 16 variables (a_32l, a_12l, b_32l, b_12l, a_32, a_12, b_32, b_12, c_32, c_12, d_32, d_12, a_32r, a_12r, b_32r, b_12r), which constitute the 16 unknown coefficients of the piecewise-defined wavefunction. In these resulting equations, while the overall z-independent factors of e^i (k_x x + k_y y) totally cancel out, the phase factors introduced by the action of e^{-i ϕ 1_2×2 ⊗ J_z} also cancel out component by component. Let M be the 16 × 16 matrix constructed from the coefficients of the 16 variables. The consistency of the equations is ensured by the condition det M = 0. From this equation, we can determine the energy eigenvalues of the subgap ABSs, which are localized near the junctions with localization lengths κ_1^-1 and κ_2^-1 from the barrier, because they decay exponentially as we move away from the junction location into the superconducting regions.

§ RESULTS

To simplify the calculations, instead of trying to compute the determinant of a 16 × 16 matrix, we adopt the following strategy to obtain the solutions for ε in the thin-barrier limit, defined by V_0 → ∞ and L → 0, with χ ≡ V_0 L held fixed at a finite value. Although this limit is equivalent to a Dirac delta potential, the standard delta function potential approximation <cit.> for thin barriers cannot be taken from the start <cit.>, because there is no constraint on the derivative of the wavefunctions (the RSW Hamiltonian is linear in the derivatives when written in the position-space representation). Instead, we need to start with Eq. (<ref>) and impose the appropriate limits in the expressions appearing in the equations obtained from the boundary conditions. Next, in region II, we employ the simplification

k_z^(3/2),e L → -2χ/3,  k_z^(1/2),e L → -2χ,  k_z^(3/2),h L → 2χ/3,  k_z^(1/2),h L → 2χ

in the exponential factors representing plane waves along the z-direction. Furthermore, the ε-dependence disappears from the angles, since -θ_32n ≃ θ̃_32n ≃ arcsin[(3 k_⊥/2)/(V_0 - E_F)] and -θ_12n ≃ θ̃_12n ≃ arcsin[(k_⊥/2)/(V_0 - E_F)]. Plugging in the above approximations, we solve for (a_32, a_12, b_32, b_12) and (c_32, c_12, d_32, d_12) using the first four and the last four components of the matrix equation ψ_I(x, y, 0) = ψ_II(x, y, 0), respectively, in terms of the remaining eight variables. This is possible because, in region II, the last (first) four components of each electron (hole) wavefunction are zero. The resulting expressions are used to eliminate the normal state coefficients in the matrix equation ψ_II(x, y, L) = ψ_III(x, y, L), and we obtain an 8 × 8 matrix M̃ involving eight independent variables. The values of ε can now be obtained by demanding det M̃ = 0 for consistency.
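In practice, det M̃ = 0 is most easily solved numerically. The following sketch shows one simple strategy, assuming a user-supplied routine build_M(eps) that assembles the 8 × 8 matrix M̃ from the boundary conditions for a given trial energy (its entries are system specific and are not reproduced here): scan the subgap window for local minima of |det M̃| and refine each by a golden-section search.

```python
import numpy as np

def abs_energies(build_M, Delta0, n_grid=4001, tol=1e-10):
    """Locate subgap roots of det(Mtilde(eps)) = 0 for |eps| < Delta0."""
    detval = lambda e: abs(np.linalg.det(build_M(e)))
    grid = np.linspace(-0.999 * Delta0, 0.999 * Delta0, n_grid)
    vals = np.array([detval(e) for e in grid])
    g = (np.sqrt(5.0) - 1.0) / 2.0
    roots = []
    for i in range(1, n_grid - 1):
        if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]:   # local minimum
            a, b = grid[i - 1], grid[i + 1]
            while b - a > tol:                                # golden-section refinement
                c, d = b - g * (b - a), a + g * (b - a)
                if detval(c) < detval(d):
                    b = d
                else:
                    a = c
            eps = 0.5 * (a + b)
            if detval(eps) < 1e-6 * vals.max():  # keep only (near-)zero minima
                roots.append(eps)
    return np.array(roots)
```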
Even after adopting these simplifying steps, we end up with a quartic equation in the variable η_R ≡ exp(2 i β), with lengthy coefficients accompanying the various powers of η_R. Consequently, a simple analytic expression for ε cannot be obtained, unlike for semimetals having two bands <cit.>. As detailed above, we thus need to find the values of ε by numerically solving polynomials of quadratic order in sin(2β) and cos(2β). Looking for real solutions from the real and imaginary components of the resulting complex-valued equation, we get up to four distinct values of |ε| for a given set of parameter values. This is because we have two real equations of quadratic order in cos(2β) and sin(2β), whereas for each of the two-band models studied earlier, only a linear-order equation in cos(2β) had to be solved (which gave rise to only one pair of Andreev bound states). The energies of the subgap states appear as the pair ± |ε| for each value of |ε|. In Fig. <ref>, we show their behaviour as functions of k_⊥ (with a fixed value of φ_12) and φ_12 (with a fixed value of k_⊥) for some representative values of V_0, L, and E_F. The bound state energies are periodic in φ_12 with period 2π and are symmetric about the line φ_12 = π. Fig. <ref>(a) illustrates the variation of the four pairs of ε-values over the k_⊥-φ_12 plane and, hence, shows the dependence of ε on both these variables in a combined way.

The Josephson current density across the two junctions at a temperature T is given by <cit.>

I_J(φ_12) = -(2e/ħ) [W^2/(2π)^2] ∑_n=1^8 ∫ dk_x dk_y ∂_φ_12 ε_n f(ε_n),

where ε_n labels the energy values of the eight Andreev bound states and f(x) = 1/(1 + e^{x/(k_B T)}) is the Fermi-Dirac distribution function. Fig. <ref>(b) shows the behaviour of I_J as a function of φ_12, scaled by appropriate numbers/variables (we denote this scaled quantity as Ĩ), for the same set of barrier parameters and E_F as in Fig. <ref>(a).
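Given a routine that returns the bound-state energies at fixed (k_x, k_y, φ_12) (for instance, the one sketched above), this expression can be evaluated by a central finite difference in φ_12 and a simple quadrature over the transverse momenta. The listing below drops the overall prefactor 2e W^2/[ħ (2π)^2] and assumes that abs_spectrum returns the energies in a consistent (here, sorted) order so that the two phase points can be subtracted branch by branch:

```python
import numpy as np

def josephson_current(abs_spectrum, phi12, kmax, kT, n_k=41, dphi=1e-3):
    """I_J(phi12) ~ - sum_n int dkx dky (d eps_n / d phi12) f(eps_n), prefactor dropped."""
    ks = np.linspace(-kmax, kmax, n_k)
    dk = ks[1] - ks[0]
    total = 0.0
    for kx in ks:
        for ky in ks:
            e_plus = np.sort(abs_spectrum(kx, ky, phi12 + dphi))
            e_minus = np.sort(abs_spectrum(kx, ky, phi12 - dphi))
            de_dphi = (e_plus - e_minus) / (2.0 * dphi)  # central difference
            f = 1.0 / (1.0 + np.exp(e_plus / kT))        # Fermi-Dirac occupation
            total += np.sum(de_dphi * f) * dk * dk
    return -total
```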
§ SUMMARY AND OUTLOOK

In this paper, we have considered an S-B-S sandwich configuration built with a Rarita-Schwinger-Weyl semimetal, with the aim of determining the spectrum of ABSs in the thin-barrier limit. We have assumed a weak and homogeneous s-wave pairing in each superconducting region, which can be created via the proximity effect <cit.> by placing a superconducting electrode near it. The barrier region can be implemented by applying a voltage of magnitude V_0 across a piece of normal-state semimetal. By using the BdG Hamiltonian, we have determined the wavefunction localizing at the boundaries in a piecewise continuous manner. Enforcing consistency of the equations obtained from matching the boundary conditions, we have found the complex-valued characteristic polynomial from the vanishing of the relevant determinant. The solutions of this equation give the discrete energy spectrum ε of the subgap Andreev bound states. Due to the higher order of the characteristic polynomial to be solved, a closed-form analytical expression could not be found. Hence, we have solved for the admissible roots of the equation numerically and shown the results for some representative parameter values. As anticipated, in contrast with the two-band semimetals studied extensively so far, there exist multiple localized states (rather than two for two-band semimetals) in the thin-barrier limit. Furthermore, unlike for the two-band semimetals, each value of ε has a complicated dependence on the phase difference φ_12, which cannot be determined analytically. We have also derived the behaviour of the Josephson current, determined by the ABSs, and have illustrated it via a representative plot.

In the future, it will be worthwhile to study a generalization of the isotropic version of the RSW semimetal studied here, where the full rotational symmetry of the RSW node is broken to the O_h symmetry <cit.>, with the dispersion featuring anisotropic velocity parameters. An S-B-S junction set-up with such an anisotropic system is expected to show a richer structure of the ABSs, albeit with the need to solve more complicated equations. Another avenue to explore is to introduce a tilt in the band dispersion <cit.> and investigate the resulting ABSs. Yet another interesting set-up is to consider scenarios where the dispersion is rotated about the z-axis across the junction(s), as considered in Refs. <cit.>. Lastly, RSW S-B-S junctions for higher angular momentum pairing channels (e.g., the d-wave symmetric pairing channel <cit.>) and for FFLO pairings <cit.> are left for future investigations.
{ "authors": [ "Ipsita Mandal" ], "categories": [ "cond-mat.supr-con", "cond-mat.mes-hall", "hep-th" ], "primary_category": "cond-mat.supr-con", "published": "20231226185307", "title": "Andreev bound states in superconductor-barrier-superconductor junctions of Rarita-Schwinger-Weyl semimetals" }
Task Contamination: Language Models May Not Be Few-Shot Anymore

Changmao Li, Jeffrey Flanigan
================================================================

Large language models (LLMs) offer impressive performance in various zero-shot and few-shot tasks. However, their success in zero-shot and few-shot settings may be affected by task contamination, a potential limitation that has not been thoroughly examined. This paper investigates how the zero-shot and few-shot performance of LLMs has changed chronologically over time. Utilizing GPT-3 series models and several other recent open-sourced LLMs, and controlling for dataset difficulty, we find that on datasets released before the LLM training data creation date, LLMs perform surprisingly better than on datasets released after. This strongly indicates that, for many LLMs, there exists task contamination on zero-shot and few-shot evaluation for datasets released prior to the LLMs' training data creation date. Additionally, we utilize training data inspection, task example extraction, and a membership inference attack, which reveal further evidence of task contamination. Importantly, we find that for classification tasks with no possibility of task contamination, LLMs rarely demonstrate statistically significant improvements over simple majority baselines, in both zero and few-shot settings.

§ INTRODUCTION

Recently there has been much interest in few-shot methods, in particular in-context learning (ICL, Brown et al. 2020) with large language models. In-context learning has the benefit of yielding excellent performance while requiring very little data, sometimes relying on only a few examples for the task. These promising results have led to an explosion of work on in-context learning methods across a wide variety of tasks <cit.>, including prompt tuning methods <cit.>, chain-of-thought methods <cit.>, and tool-based methods <cit.>.

However, along with this explosion of work in ICL, many have raised concerns about data contamination <cit.>, that is, prior knowledge of data or a task which is thought to be unseen by the model. Data contamination can happen in multiple ways. One common contaminant is test data contamination, the inclusion of test data examples and labels in the pre-training data. Another contaminant for zero- or few-shot methods, which we call task contamination, is the inclusion of task training examples in the pre-training data, effectively making the evaluation no longer zero- or few-shot.[Zero-shot evaluation is evaluation where a model has seen zero examples for the task. Few-shot, or N-shot, where N is a small number, is evaluation where the model has seen N examples for the task.
Prior work has sometimes defined zero-shot for multi-class classification as predicting classes that have never been seen during training, but most recent work does not use this definition.]

Simply evaluating the scope of this contamination is difficult <cit.>. Closed models do not release their pre-training data. While open models give their sources, crawling the sites to obtain that data is non-trivial, especially if the data has changed since it was crawled. For models that are pre-trained on freely available pre-training corpora, simply grepping for examples in the pre-training corpora may not be reliable due to differences in data formatting (such as XML vs. CSV, etc.) or differences in text normalization and tokenization.

In this paper we empirically measure the scope of task contamination for few-shot methods across various models and tasks. To the best of our knowledge, we are the first to systematically analyze this problem. We evaluate 12 different models, ranging from closed GPT-3 series models <cit.> to open models including Fairseq MoE <cit.>, GPT-J <cit.>, Bloom <cit.>, OPT <cit.>, LLaMA <cit.>, Alpaca <cit.>, and Vicuna <cit.>, on 16 classification tasks and 1 semantic parsing task. We analyze each model on datasets created before its training data was crawled from the internet versus datasets created afterward. We find that datasets created before the LLM training data was collected have a significantly higher chance of having performance higher than the majority baseline (Fig. <ref>). We perform training data inspection and task example extraction to look for possible task contamination. Importantly, we find that for classification tasks with no possibility of task contamination, models rarely demonstrate statistically significant improvements over simple majority baselines across a range of tasks, in both zero and few-shot settings (Fig. <ref>).

As a case study, we also attempt to conduct a membership inference attack for a semantic parsing task (Spider, Yu et al. 2019) for all models in our analysis. We find a strong correlation (R = .88) between the number of extracted examples and the accuracy of the model on the final task (Fig. <ref>). This is strong evidence that the increase in zero-shot performance on this task is due to task contamination. Additionally, we look closely at the GPT-3 series models. We find that training examples can be extracted from the GPT-3 models, and that the number of extractable training examples increased with each successive version, closely tracking the increase in the zero-shot performance of the GPT-3 models on that task (Fig. <ref>). This is strong evidence that the increase in performance on these tasks across GPT-3 versions is due to task contamination.

§ OVERVIEW

We employ four methods of measuring task contamination:

* Training data inspection: Search through the training data to find task training examples.

* Task example extraction: Extract task examples from an existing model. Extraction is only possible with instruction-tuned models. This analysis can also be done for training data or testing data extraction <cit.>. Note: for the purposes of detecting task contamination, the extracted task examples need not exactly match existing training data examples. Any examples demonstrating the task indicate possible contamination for zero- and few-shot learning.
* Membership inference: This method only applies to generation tasks. Check whether the model-generated content for an input instance exactly matches the original dataset <cit.>. If there is an exact match, we can infer it is a member of the LLM's training data. This differs from task example extraction because the generated output is checked for an exact match. Exact matches for an open-ended generation task strongly indicate the model has seen those examples during training. The model is not just good, it is psychic: it has knowledge of the exact phrasing used in the data. Note: this can only be used for generation tasks.[Exact matches for the input do not indicate task contamination, because the input text could have been seen without being paired with the output label.]

* Chronological analysis: For a set of models whose training data has been collected at a range of known times, measure performance on a dataset with a known release date, and check for evidence of contamination using chronological evidence.

The first three methods have high precision, but suffer from low recall. If data is found in the training data for the task, then it is certain that the model has seen examples. But because of data formatting variations, variations in keywords used to define the task, and the size of the dataset, the absence of evidence for contamination using the first three methods is not evidence of absence. The fourth method, chronological analysis, is high recall, but low precision. If the performance is high due to task contamination, then a chronological analysis will have a high chance of catching it. But other factors could also contribute to increased performance over time, so the precision is low.

Due to their inherent trade-offs, we employ all four methods for detecting task contamination. With all four methods, we find strong evidence of task contamination for some combinations of models and datasets. We begin with a chronological analysis for all models and datasets we tested, since it has the highest potential for catching possible contamination (<ref>). We then look for further evidence of task contamination using training data inspection (<ref>) and task example extraction (<ref>). Next we look at the performance of LLMs on tasks without contamination (<ref>), and conclude with additional analysis using a membership inference attack (<ref>).

§ PROBLEM OVERVIEW

Zero-shot or few-shot learning is the capability of a model to generate output for a task with no training examples, or with only a few examples (typically fewer than five) as a trigger. The criterion is that the model must not be directly trained or fine-tuned on training data for the task: for example, the initial GPT-3 model, which was pre-trained with a language modeling objective on a large collection of texts, can be considered capable of zero-shot or few-shot learning, since it was not directly trained on the downstream tasks. However, later versions of GPT-3 and other recent LLMs, especially after instruction learning was introduced <cit.>, included many human instructions and downstream task inputs and outputs when training the language model, in order to obtain better downstream task performance. This compromises the zero-shot or few-shot setting, and we argue that such evaluations are not zero-shot or few-shot any more.

§ MODELS AND DATASETS

Models. We experimented with 12 models.
Table <ref> lists these models, along with the collection dates of the training data and the release dates for each model.[GPT-3 series training data collection dates are obtained from <https://platform.openai.com/docs/models/overview>] The 12 models we use can be further categorized into two broad groups: (1) five proprietary GPT-3 series models (“closed”) and (2) seven open models with free access to their weights (“open”). Comparing models from these two groups yields valuable insights into the difference between proprietary, high-performance models like those from the GPT-3 series and more accessible, community-driven open models. More information about the hyperparameters for these models is given in Appendix <ref>.

Datasets. Zero-shot and few-shot evaluations involve models making predictions on tasks that they have never seen, or have seen only a few times, during training. The key premise is that the models have no prior exposure to the particular task at hand, ensuring a fair evaluation of their learning capacity. Contaminated models, however, give a false impression of their zero- or few-shot competency, as they have already been trained on task examples during pretraining. Detecting such inconsistencies is relatively easier with chronologically ordered datasets, where any overlap or anomaly stands out. Based on this narrative, we split the datasets into two categories: datasets released before or after January 1st, 2021, identified as pre-2021 datasets and post-2021 datasets, respectively. We use this division to analyze the zero-shot or few-shot performance difference between older datasets and newer ones, with the same division applied for all LLMs. We also use the per-LLM division into pre-collection and post-collection datasets, which distinguishes the datasets that a model could possibly have been trained on (pre-collection datasets) from the datasets it could not have been trained on (post-collection datasets). Table <ref> presents the creation time of the training data for each model. Information about the datasets can be found in Appendix <ref>, and the release dates for each dataset are listed in Table <ref>.

§ CHRONOLOGICAL ANALYSIS

We start with a chronological analysis. This allows us to detect patterns of possible task contamination across the LLMs and datasets we examine.

§.§ Analysis of Pre- and Post-collection Datasets

We first perform a global chronological analysis across all datasets and LLMs. We look at the difference between performance on datasets released before the training data collection date for the LLM (pre-collection) versus after the training data collection date (post-collection). Specifically, we focus on whether the model is above the majority baseline.[The majority baseline for a classification task is the performance of a model that labels every example with the label that occurs most frequently in the dataset.] In this section we use this measure, instead of averaging the performance across datasets, to avoid datasets with large performance differences dominating the analysis. With 12 models and 16 datasets, we have 192 model/dataset combinations. For 136 of these combinations, the dataset was released before the LLM training data collection date (pre-collection), and for 56, the dataset was released after (post-collection). For both sets, we compute the percentage of model/dataset combinations for which the model beats the majority baseline, both zero-shot and few-shot. The results are shown in Fig. <ref>.
We find that for datasets released prior to the creation of the LLM's training data, it is more likely that the LLM beats the majority baseline, in both zero- and few-shot settings. Using the Mann-Whitney U test <cit.> to compare the pre- and post-collection populations of above-baseline indicators, we find the difference to be statistically significant at the 99% confidence level for both zero- and few-shot settings. For some model/dataset combinations, the performance difference above the majority baseline is small, so we additionally compute the percentage of model/dataset combinations for which the model beats the majority baseline and the improvement over the majority baseline is itself statistically significant at the 99% level, calculated using the Student's t-test <cit.> (Fig. <ref>, darker). Again, we find that for datasets released prior to the creation of the LLM's training data, it is far more likely that the LLM beats the majority baseline with statistical significance, in both zero- and few-shot settings; the Mann-Whitney U test indicates that these differences between the pre- and post-collection populations are statistically significant at the 99% confidence level in both settings. These results indicate the possibility of task contamination for open LLMs and GPT-3 series LLMs.

§.§.§ Caveats

There are two considerations we need to make in the global chronological analysis. First, datasets may have become more difficult over time, meaning LLMs would be less likely to outperform the majority baseline even in the absence of task contamination. To account for this, we carefully review the tasks and remove tasks known to be difficult for LLMs, such as GSM8K <cit.> and TrackingShuffledObjects <cit.>. The remaining datasets all have acceptable performance using fine-tuned pretrained language models (PLMs), and, importantly, there is no correlation between release date and the performance of fine-tuned PLMs (R^2 = 0.001) on our datasets, as shown in Fig. <ref>. Secondly, post-collection datasets, despite being released after data collection, may still suffer from contamination. For example, the FOMC dataset <cit.> was officially released post-collection for the GPT-3 series, but the performance of subsequent versions of GPT-3 is notably high. One possible explanation, which we cannot verify, is the authors' preliminary experimentation with the GPT-3 series (as stated in their paper), as OpenAI may have then utilized their experimental data for model updates.
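The aggregation just described is straightforward to reproduce. As a sketch (field names are ours), given one record per model/dataset combination with the model's score, the majority baseline, and a pre-/post-collection flag, the fractions in Fig. <ref> and the Mann-Whitney U test can be computed as:

```python
from scipy.stats import mannwhitneyu

def chronological_signal(records):
    """records: dicts with 'acc' (model score), 'base' (majority baseline),
    and 'pre' (True if the dataset predates the LLM's data collection)."""
    pre = [float(r['acc'] > r['base']) for r in records if r['pre']]
    post = [float(r['acc'] > r['base']) for r in records if not r['pre']]
    frac_pre = sum(pre) / len(pre)     # fraction above baseline, pre-collection
    frac_post = sum(post) / len(post)  # fraction above baseline, post-collection
    # one-sided test: pre-collection indicators stochastically larger than post
    _, p_value = mannwhitneyu(pre, post, alternative='greater')
    return frac_pre, frac_post, p_value
```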
§.§ Analysis of Pre- and Post-collection Datasets for Individual LLMs

In this section, we consider the performance on pre- and post-collection datasets for each LLM individually (see Fig. <ref>). We find the difference in performance between the two categories to be statistically significant at 95% confidence according to the paired sign test <cit.>. We plot the percentage of datasets above the majority baseline as in the last section, but for each LLM individually. The results are shown in Fig. <ref>. We observe that the global trend from the previous section holds across models with the full range of training data collection dates, further indicating that the absolute date of a dataset is not the main factor; rather, the date of the dataset relative to the training data collection date of the LLM is the more important factor. (Note: because of the recency of BLOOM, LLaMA, Alpaca, and Vicuna, we have fewer datasets in our experiments post their training data collection dates.) The results indicate the possibility of task contamination for both open LLMs and GPT-3 series LLMs, with a stronger indication of contamination in the later versions of the GPT-3 series.
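The paired sign test used here reduces to a binomial test on the signs of matched differences. A minimal sketch (one plausible implementation; the exact pairing of observations follows the setup described above):

```python
from scipy.stats import binomtest

def paired_sign_test(scores_a, scores_b):
    """Two-sided paired sign test on matched score pairs; ties are discarded,
    as is standard for the sign test."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    n = wins + losses
    return binomtest(wins, n, p=0.5, alternative='two-sided').pvalue
```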
§.§ Performance over Time

Next we perform a chronological analysis that examines the change in average performance over time for both the GPT-3 series and open LLMs (Fig. <ref>). On the x-axis, LLMs are ordered chronologically by training data collection date. To also be sensitive to the age of the datasets, we again split them into pre-2021 and post-2021 datasets, i.e., those released before or after January 1st, 2021.

Pre-2021 Datasets. For open LLMs on pre-2021 datasets, we see only a slight increase in performance over time (Fig. <ref>). The performance hovers around the majority baseline in both zero- and few-shot settings, and does not increase very much across LLM data collection dates ranging from 2019 to 2022. For the GPT-3 series, on the other hand, the trend on pre-2021 datasets is particularly suspect (Fig. <ref>). We see that performance on these datasets has increased dramatically across successive GPT-3 versions, with the later models much higher than the majority baseline in both zero- and few-shot settings. The comparison to the open LLMs indicates that the zero- and few-shot evaluations of the GPT-3 series may have task contamination issues due to data collected from user inputs.

Post-2021 Datasets. For post-2021 datasets, GPT-3 average performance has also increased over time (Fig. <ref>), particularly in the zero-shot setting. This makes sense, as many of the post-2021 datasets were released prior to the training data collection dates of the later models. (To see which datasets are pre- or post- training data collection time, see the line separating pre- and post-collection datasets in Table <ref>.) The average performance of open LLMs has also increased over time, but it remains lower than the majority baseline and the GPT-3 series. One could hypothesize that the high performance of the GPT-3 series is due to instruction tuning <cit.>; however, we do not believe this is the case. While we observe an increase in performance from one GPT-3 version to the next on pre-2021 datasets, there is a corresponding decrease in performance on post-2021 datasets, which we measure with the sign test to be statistically significant at the 95% level.[Increase on zero-shot pre-2021 datasets: p-value = 0.00408; decrease on zero-shot post-2021 datasets: p-value = 0.02939.] This demonstrates that the GPT-3 series instruction tuning is specific to certain earlier datasets, and suggests contamination of the zero- and few-shot evaluation of the GPT-3 series.

§ TRAINING DATA INSPECTION

To search for direct evidence of task contamination, we conduct training data inspection on two instruction fine-tuned open LLMs (Alpaca and Vicuna) for all classification tasks in our experiments. We search for task-related instruction patterns in the training data, and manually inspect the matches to see if they contain task training examples. Because we must check manually, we can perform this analysis only for the small fine-tuning datasets of Alpaca and Vicuna. We then compare the performance to see whether more task-specific training examples have boosted performance. Table <ref> shows the number of task examples found for Alpaca and Vicuna, as well as the change in performance over LLaMA, averaged over zero- and few-shot settings and all tasks. We find that performance has improved for Alpaca and Vicuna over the original LLaMA model for tasks with more than one task example. Because Alpaca and Vicuna are fine-tuned LLaMA models, this indicates that performance can be improved with small sets of task examples in the training data, which can compromise zero-shot or few-shot evaluation. This could also be the effect of instruction tuning.
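The search step of this inspection is easy to automate; only the final judgment is manual. The sketch below shows the idea for Alpaca-style instruction-tuning data (a JSON list of instruction/input/output records); the two patterns are illustrative stand-ins, not the exact search strings listed in the Appendix:

```python
import json
import re

# Illustrative task patterns (placeholders; see the table of search strings)
PATTERNS = {
    'SST-2': re.compile(r'sentiment.{0,60}(positive|negative)', re.I | re.S),
    'RTE': re.compile(r'\b(entailment|not[_ ]entailment)\b', re.I),
}

def find_candidate_examples(path, pattern):
    """Return records whose concatenated fields match the task pattern.
    Hits still require manual inspection to confirm a real input-output pair."""
    with open(path) as f:
        records = json.load(f)
    hits = []
    for r in records:
        text = ' '.join(str(r.get(k, '')) for k in ('instruction', 'input', 'output'))
        if pattern.search(text):
            hits.append(r)
    return hits
```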
§ TASK EXAMPLE EXTRACTION

We test for task contamination by attempting to extract task examples from the LLM itself. Prior work <cit.> has tested for test data contamination by prompting an LLM to generate examples for a task: if the LLM can generate examples that exactly match examples in the test data, it is evidence that the test set of the task has been seen during training by the LLM. Inspired by their method, we adopt a similar approach to test for task contamination. Instead of attempting to generate test data, we prompt the model to generate training examples, since for zero- or few-shot evaluation the model should not have been trained on any task examples. If an LLM can generate training examples based on the prompt, this is evidence of task contamination. Note we do not require an exact match of the generated examples with the training data for the task, since any examples for the task seen during training indicate possible task contamination. Our prompts for task example extraction are given in Appendix <ref>.

Table <ref> shows the task example extraction results for all tasks across all models. For all pre-collection datasets, the later GPT-3 series models can generate task-specific training examples. There are also some post-collection datasets that show evidence of contamination for the GPT-3 series. These datasets may have been contaminated if their authors experimented with the GPT-3 series before releasing them. For example, the FOMC paper <cit.> states that the authors tested with the GPT-3 series, which could have caused contamination. For the open LLMs, almost no model can generate training examples of specific tasks, except for Vicuna, which is fine-tuned on ChatGPT data. Note that models without instruction tuning cannot follow the instructions directing them to generate task examples, so this analysis is not conclusive for those models.

§.§ Comparison to Training Data Inspection

Comparing Tables <ref> and <ref>, we find that training data inspection (TDI) and task example extraction (TEE) both suffer from low recall. TDI demonstrates task contamination in Alpaca for the SST-2 and NewsMet datasets, but TEE fails to catch this contamination. Conversely, TEE demonstrates task contamination for Vicuna on NewsMTSC, but TDI fails to catch it. Both methods therefore suffer from low recall, which highlights the difficulty of employing them to detect task contamination.

§ LLM PERFORMANCE ON TASKS WITH NO CONTAMINATION

We find that for tasks without a demonstrated possibility of task contamination, LLMs rarely show statistically significant improvements over majority baselines. In Table <ref>, of the 51 model/dataset combinations that are post-collection and have no extracted task examples, only 1 out of 51, or 2%, demonstrates a statistically significant improvement over the majority baseline in either the zero- or few-shot setting. This combination involves the MTSC-RW dataset, on which the model shows a statistically significant improvement over the majority baseline (Tables <ref> and <ref> in the Appendix) but does not generate task examples with our prompt. This combination is found by cross-referencing Table <ref> with Tables <ref> and <ref> in the Appendix, looking for datasets which are post-collection and not marked in Table <ref>, and are bold in either Table <ref> or <ref>.

§ MEMBERSHIP INFERENCE

To further examine the effect of training data contamination, we apply a membership inference attack <cit.>, which checks whether model-generated content exactly matches examples in the dataset. While this test is possible for generation tasks, it is not possible for classification tasks, since the inputs may be in the training data of LLMs (and likely are, for many datasets), but we cannot know for certain whether the inputs are also paired with the labels without inspecting the training data. We use Spider, a semantic parsing and text-to-SQL generation task <cit.>, as our target for analysis.
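A sketch of the exact-match check and the correlation analysis reported below (the whitespace/case normalization applied before comparison is our own choice):

```python
import re
from scipy.stats import pearsonr

def normalize_sql(query):
    """Canonical form for exact-match comparison: collapse whitespace, lowercase."""
    return re.sub(r'\s+', ' ', query.strip().lower())

def count_exact_matches(generated, reference):
    """Number of model generations that exactly reproduce a reference SQL query."""
    ref = {normalize_sql(q) for q in reference}
    return sum(normalize_sql(g) in ref for g in generated)

def contamination_correlation(match_counts, exec_accuracies):
    """Pearson correlation between per-model match counts and execution accuracy."""
    r, p = pearsonr(match_counts, exec_accuracies)
    return r, p
```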
Fig. <ref> and Fig. <ref> show how many generated examples from the sampled training set and the full development set are exact matches, across versions of the GPT-3 series and recent open-sourced LLMs, respectively. The database schemas are not included in the zero-shot prompts, so if the model can generate exactly the same table name or field name as found in the training or development data, there must be contamination. As shown in Fig. <ref>, the number of exactly matched generated examples increases over time, indicating that the extent of task contamination on Spider is increasing. We also compute the execution accuracy after adding the schema to the prompts, and plot it against the number of exactly matched generations (Fig. <ref>). We find a strong positive correlation between the number of exactly matched generated examples and execution accuracy (R = 0.88), strongly indicating that increased contamination is related to increased performance. However, we still cannot determine the full extent of the contamination's effect on the performance improvement; we leave this for future work.

§ TAKE-AWAYS

We now share some takeaways which our experiments have brought to light:

* Due to task contamination, closed models may demonstrate inflated performance in zero-shot or few-shot evaluation, and are therefore not trustworthy baselines in these settings, especially those that include instruction fine-tuning or reinforcement learning with human feedback (RLHF). The extent of this contamination is still unknown, and we therefore recommend caution. In particular, the observed increase over time in the zero-shot and few-shot performance of GPT-3 series models on many downstream tasks is likely due to task contamination.

* In our experiments, for classification tasks without a demonstrated possibility of task contamination, LLMs rarely show statistically significant improvements over majority baselines, in both zero- and few-shot settings.

* Inspection for task contamination in training data, even for open-sourced LLMs, can be difficult for several reasons. First, determining membership is difficult unless the processed dataset used for training the LLM is released (e.g., OPT and LLaMA did not release the data used to train the model, but Alpaca and Vicuna did, so we can obtain more definite information for the latter). Second, we cannot always rely on the model to reproduce evidence of contamination even if it exists. Third, formatting differences (such as CSV vs. JSON) in a dataset complicate the analysis. We therefore encourage the public release of training datasets to allow for easier diagnosis of contamination issues.
§ RELATED WORK The investigation into potential data contamination in large language models (LLMs) has recently been gaining attention in the research community. <cit.>, in their work with GPT-3, presented an in-depth analysis of data contamination. Although they acknowledged the presence of a bug that led to data contamination in multiple datasets, their position was that it did not affect the overall performance of the model. Intriguingly, they noted that contaminated datasets outperformed the uncontaminated ones, which, in a way, contradicted their original assertion. <cit.> extracted training data from GPT-2 and indicated potential leaks of private data in the pre-trained language model. <cit.> discovered that OpenAI models were memorizing substantial amounts of copyrighted materials, which increased concern over data contamination. <cit.> highlighted the severity and scope of data contamination problems for ChatGPT evaluations. Highlighting the need for strategic interventions to address these issues, <cit.> proposed several strategies for mitigating test data contamination. Additional work has further looked into test data contamination <cit.>. The previous work listed above has investigated test data contamination, but has not considered task contamination for zero-shot or few-shot settings. Prior work has noticed the task contamination problem we study for zero-shot or few-shot learning <cit.>, but did not systematically analyze it. Our work seeks to add to the existing knowledge by providing an exhaustive evaluation of task contamination for few-shot or zero-shot learning scenarios. § CONCLUSION AND FUTURE WORK We investigate task contamination for LLMs, and conduct a chronological analysis, training data inspection, task example extraction, and a membership inference attack to analyze it. We find evidence that some LLMs have seen task examples during pre-training for a range of tasks, and are therefore no longer zero- or few-shot for these tasks. Additionally, we find that for classification tasks with no demonstrated possibility of task contamination, LLMs rarely show statistically significant improvements over simple majority baselines, in both zero- and few-shot settings. We recommend that additional research be conducted on task contamination for zero- and few-shot settings, to reveal the extent and impact of task contamination for large language models in these settings. § ACKNOWLEDGEMENTS We are grateful for valuable feedback from Nilay Patel on an earlier version of this draft. We are thankful for the computing resources provided by the Pacific Research Platform's Nautilus cluster, supported in part by National Science Foundation (NSF) awards CNS-1730158, ACI-1540112, ACI-1541349, OAC-1826967, OAC-2112167, CNS-2100237, CNS-2120019, the University of California Office of the President, and the University of California San Diego's California Institute for Telecommunications and Information Technology/Qualcomm Institute. Thanks to CENIC for the 100Gbps networks. § HYPERPARAMETERS We use greedy decoding to ensure a fair comparison for all approaches. For GPT-3 series models, we set the temperature to 0 to ensure deterministic results. For few-shot learning, we use the same few-shot examples across models for each instance in a task.
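As a minimal sketch of the deterministic decoding setup described above, the following is illustrative only; the checkpoint name is a placeholder and this is not the evaluation harness used in this work:

from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder; the paper evaluates several open LLMs
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Review: 'a great movie'. Sentiment:", return_tensors="pt")
# do_sample=False gives greedy decoding, matching temperature=0 for API models.
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))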
We run open-sourced models on an NVIDIA A100 GPU. § DATASETS The pre-2021 datasets are common GLUE <cit.> and SuperGLUE <cit.> tasks: MRPC <cit.>, BoolQ <cit.>, SST-2 <cit.>, QNLI <cit.>, WNLI <cit.>, RTE <cit.>, CB <cit.>, COPA <cit.>, and WiC <cit.>. The post-2021 datasets are StrategyQA <cit.>, NLI4Wills <cit.>, NewsMTSC <cit.>, CREPE <cit.>, FOMC <cit.>, and NewsMet <cit.>. § PROMPT SOURCES The prompts for these tasks are taken from previous research <cit.> that uses them as evaluation benchmarks and from <cit.>, or are designed based on related tasks from these sources. Table <ref> shows the prompt source for each dataset. Appendix <ref> lists example prompts for each task. § TRAINING DATA INSPECTION DETAILS We manually inspect training examples found using regular expressions for each task. Our regular expression or string search patterns for each task are listed in Table <ref>. Some tasks, such as COPA and BoolQ, do not have a specific pattern that can be matched. We count an example if it is directly related to the task and contains the input and output for the task. We do not count examples that talk about the task without giving input and output examples. § DETAILED RESULTS TABLES In this section, we report the performance numbers for all models and datasets in our experiments with confidence intervals. § ADDITIONAL FIGURES § PROMPT EXAMPLES FOR EACH TASK In this section we give examples of zero-shot prompts for each task. § PROMPTS FOR TASK EXAMPLE EXTRACTION
http://arxiv.org/abs/2312.16337v1
{ "authors": [ "Changmao Li", "Jeffrey Flanigan" ], "categories": [ "cs.CL", "I.2.7" ], "primary_category": "cs.CL", "published": "20231226211746", "title": "Task Contamination: Language Models May Not Be Few-Shot Anymore" }
Exploring intra-task relations to improve meta-learning algorithms
================================================================
§ ABSTRACT Meta-learning has emerged as an effective methodology to model several real-world tasks and problems due to its extraordinary effectiveness in the low-data regime. There are many scenarios, ranging from the classification of rare diseases to language modelling of uncommon languages, where the availability of large datasets is rare. Similarly, for broader scenarios like self-driving, an autonomous vehicle needs to be trained to handle every situation well. This requires training the ML model on a variety of tasks with good-quality data. But oftentimes, we find that the data distribution across various tasks is skewed, i.e., the data follows a long-tail distribution. This leads to the model performing well on some tasks and not so well on others, leading to model robustness issues. Meta-learning has recently emerged as a potential learning paradigm which can effectively learn from one task and generalize that learning to unseen tasks. However, it is often difficult to train a meta-learning model due to stability issues. Negative transfer <cit.>, which is commonly seen in transfer learning, is one of the main reasons for this instability. Akin to transfer learning, where negative transfer can actually hinder performance if the tasks are too dissimilar, understudied effects of different task interactions can affect performance in meta-learning as well. It is therefore useful to study the task distribution of meta-train and meta-test tasks and leverage any external source of information about these tasks, which can help us create more informed mini-batches instead of the status quo of randomly selecting tasks for the mini-batch. In this study, we aim to exploit external knowledge of task relations to improve training stability via effective mini-batching of tasks. We hypothesize that selecting a diverse set of tasks in a mini-batch will lead to a better estimate of the full gradient and hence will lead to a reduction of noise in training. Our contributions are two-fold in this project. Firstly, we leverage WordNet to build the class-relation graph for the 100 classes of the mini-ImageNet dataset. We then generate clusters in that graph based on the node (class) distances. After the generation of class clusters, we effectively sample classes from them and generate tasks of varying levels of complexity. Later, we also generate an artificial dataset using a backward approach. Specifically, we first take the WordNet hierarchy and select 15 clusters (subtrees). We then sample 10 classes from each of these clusters to get 150 classes in total. Then, for each of these classes, we sample 100 images from the ImageNet dataset to generate an artificial dataset having the prior of WordNet class clusters. One thing to note is that we use these clusters not only to generate tasks of varying levels of complexity for the meta-training phase, but also to sample tasks for the meta-testing phase. This helps us evaluate the dependency of meta-test performance on meta-train performance for varying task complexity distributions. Secondly, we test our hypothesis by training two meta-learners, MAML and ProtoNet, over the new task distributions and study the correlation between various combinations of task complexity distributions of the meta-train and meta-test phases.
We also hypothesize that constructing meta-train tasks such that they are not very different from the meta-test tasks will reduce the effects of negative transfer and potentially lead to faster convergence. This can also improve model performance via efficient meta-train task selection. § INTRODUCTION Meta-learning is one of the fastest-growing areas of research in the field of Machine Learning. Meta-learning, in the machine learning context, is the use of machine learning algorithms to assist in the training and optimization of other machine learning models. The general idea of meta-learning is 'learning how to learn'. Meta-learning is the ability of an artificially intelligent machine to learn how to carry out various complex tasks, taking the principles it used to learn one task and applying them to other tasks. Fig. <ref> shows the workflow of a typical meta-learning algorithm. The first step is to construct a meta-dataset, which consists of various labelled datapoints. The sampler then samples from the meta-dataset (usually randomly) to create tasks for the meta-training phase. The meta-training phase itself consists of two components: the support set of a meta-training task consists of labelled datapoints for supervised training, while the query set consists of 'test datapoints'. During training, the meta-learner is optimized to learn from the support set tasks to give correct predictions (classification/regression) for the query set tasks. The meta-learner is optimized via the loss function, which evaluates its performance over the query set. The performance of the trained meta-learner is evaluated during the meta-test phase, where we judge how well and how robustly it is able to learn from the support set tasks and apply its learnings to solve the query set tasks. Note that the meta-learner was not optimized over the support and query sets of the meta-test phase. Given this high-level overview of the meta-learning pipeline, we now focus on two specific meta-learning algorithms, namely Model-Agnostic Meta-Learning (MAML) <cit.> and Prototypical Networks (ProtoNets) <cit.>. The key idea of the Model-Agnostic Meta-Learning (MAML) algorithm is to optimize a model which can adapt to new tasks quickly. MAML provides a good initialization of a model's parameters to achieve optimal fast learning on a new task with only a small number of gradient steps, while avoiding the overfitting that may happen when using a small dataset. The second algorithm is Prototypical Networks (ProtoNets), which differs significantly from MAML in design but performs the same task of few-shot classification. Here, a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. It is well established that meta-learning algorithms generalize well for tasks with little data. However, one of the biggest limitations of meta-learning is the problem of negative transfer. Meta-learning only works if the meta-training tasks are similar enough for the training to be relevant. If meta-training is too far off the mark, the model may actually perform worse than if it had never been trained at all.
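As a concrete illustration of the prototype-based classification described above, the following is a minimal PyTorch sketch of a single ProtoNet episode; the embedding network and tensor shapes are placeholder assumptions, not the implementation used in this study:

import torch

def protonet_episode(embed, support_x, support_y, query_x, n_way):
    # One Prototypical Networks episode: class prototypes are mean embeddings;
    # queries are scored by negative squared Euclidean distance to prototypes.
    z_support = embed(support_x)              # [n_support, d]
    z_query = embed(query_x)                  # [n_query, d]
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_way)
    ])                                        # [n_way, d]
    dists = torch.cdist(z_query, prototypes)  # [n_query, n_way]
    return -dists                             # logits: higher = closer prototype

# Toy usage with a linear embedding over flattened 8x8 "images" (placeholder).
embed = torch.nn.Linear(64, 16)
support_x = torch.randn(25, 64)
support_y = torch.arange(5).repeat_interleave(5)  # 5-way, 5-shot
query_x = torch.randn(15, 64)
logits = protonet_episode(embed, support_x, support_y, query_x, n_way=5)
loss = torch.nn.functional.cross_entropy(logits, torch.randint(0, 5, (15,)))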
Hence, it is imperative to study the task distribution and, if possible, augment the data with external sources of information. In this project, we aim to inform the task distribution of meta-train and meta-test tasks through an external source of information, namely the WordNet hierarchy. Specifically, we generate mini-batches of tasks with varying levels of task complexity and study the performance of MAML and ProtoNets for varying levels of meta-test task complexity. The paper is outlined as follows: Section <ref> gives details about the existing literature in this problem domain. In Section <ref>, we describe the dataset along with the approach to create the class relation graph and the artificial dataset. In Section <ref>, we explain our approach to test the effect of task complexities on the performance of the meta-learning algorithm at test time. Section <ref> outlines the evaluation routines and presents the study results. Finally, we conclude our findings in Section <ref>, followed by potential future work in Section <ref>. § RELATED WORK Meta-learning involves learning new tasks quickly with a few training examples, utilizing information from related tasks. However, the tasks may be very diverse, and generalization across the entire task distribution may be ineffectual (if the test task distribution lies in some small clusters in the task distribution space) or even harmful to unrelated tasks (often referred to as negative transfer). Motivated by these observations, there have been many recent works on handling task heterogeneity in meta-learning <cit.>. These approaches can be divided into two broad categories: <cit.> enforce task-specific representations instead of globally shared parameters but ignore the relationships between tasks, which may limit the model expressiveness and impair knowledge generalization. On the other hand, <cit.> learn task relationships from the training data but ignore the inherent relationships expressed by external knowledge. <cit.> explores the use of externally available relationships between tasks (specifically the hierarchy of classes from which the task distribution is sampled) to learn a task embedding that characterizes task relationships and tailors task-specific parameters, resulting in a task-adaptive metric space for classification. All these methods show that incorporating either learned or external knowledge about the relations between tasks helps the meta-learner better generalize to unseen tasks <cit.>. Motivated by these works, we explore the benefits of using external knowledge about task and class relations to improve the efficiency and stability of different meta-learning algorithms. § DATASET We use the publicly available ImageNet <cit.> dataset to construct a few-shot classification dataset for our work. We chose to create our own subsample of ImageNet classes because existing benchmarks like mini-ImageNet <cit.> have randomly selected classes which are all very distinct from one another and hence are not suitable to demonstrate the issue of task quality that we study in this work. We first create a class relationship graph and then use clustering on the graph to sample classes such that it has a good mix of closely related classes and distinct classes, as described below. §.§ Class relation graph WordNet <cit.> is one of the most popular thesauri for computational purposes in the NLP domain. WordNet contains all sorts of interesting relationships between words from over 200 languages.
It can link words into semantic relations, including synonyms, and can categorize words into word hierarchies. We utilize the synset ids corresponding to the ImageNet classes and the hypernym relations to build a class-relation graph for all 1000 classes of the ImageNet dataset. Fig. <ref> shows a few examples of the hierarchy defined by WordNet on the classes of the ImageNet dataset. We get a graph with 32,324 nodes (corresponding to different synsets) and 32,544 edges between them. §.§ Sampling classes The goal of our dataset is to have classes that are close to each other (like beagle and hound) and also classes that are distinct from each other (snakes and fruits). Hence, given the class relation graph defined above, we select nodes whose subtrees contain between 15 and 25 ImageNet classes and then sample 10 classes from the children of each such node. Classes in the same subtree are more closely related than classes in different subtrees; for example, a class is more closely related to another class in the same subtree than to other classes in a different subtree. Hence, using this criterion, we sample 15 subtrees, 10 classes in each subtree, and 100 images in each class, to finally have a dataset with 150 classes and 15k images. Fig. <ref> shows a sample of classes from each cluster in the dataset. As is evident, classes in the same cluster are similar and harder to classify into different classes. § METHODOLOGY In our work, we study the effects of the complexity of tasks seen at meta-training time on the performance of meta-learning algorithms at test time. We define a random task as a task consisting of classes randomly sampled from the pool of available classes, such that the classes might be very distinct and easy for an ML model to differentiate. An example of randomly created tasks is shown in Fig. <ref>. As can be seen from the figure, the classes in a task are quite different, like a television and a boat, and hence it is very easy for a model to differentiate between them based on superficial signals only, without developing an understanding of the image. We define a hard task as a task having classes which are similar to each other and hence hard for an ML model to distinguish. An example of a hard task is shown in Fig. <ref>. As can be seen from the figure, the classes in the task are closely related, like different species of dogs, and hence a model needs to develop a good understanding of the images to be able to correctly distinguish between the classes. To study the effects of task complexity on the meta-learner, we devise three training regimes, namely: * Random task meta-training: In this training regime, all tasks are sampled such that each consists of randomly selected classes from the 150 classes in the dataset, and hence all the tasks seen by the meta-learner during meta-train time are random tasks. Fig. <ref> shows a batch of tasks encountered in this training regime. * Hard task meta-training: In this training regime, all tasks are sampled such that the classes in a task belong to a randomly selected cluster out of the 15 clusters in the dataset, and hence all the tasks seen by the meta-learner during meta-train time are hard tasks. Fig. <ref> shows a batch of tasks encountered in this training regime. * Mixed task meta-training: In this training regime, a random task is sampled with probability 0.5 and a hard task is sampled with probability 0.5. Hence, the meta-learner sees hard tasks 50% of the time and random tasks the other 50% of the time during meta-training. Fig.
<ref> shows a batch of tasks encountered in this training regime. To evaluate the performance of meta-learners with different test task distributions, we test the meta-learner with tasks drawn with different probabilities of seeing a hard task. We evaluate the performance of meta-learners trained using the MAML and ProtoNet algorithms. We calculate the mean accuracy of the meta-learner on 1.6k tasks along with the 95% confidence interval. § RESULTS AND ANALYSIS We use the TorchMeta library <cit.> to train a 5-way 5-shot meta-learner for all our experiments. Fig. <ref> shows the performance of the meta-learners trained using MAML in the three different training regimes on test task distributions with hard task probability varying from 0 to 1. The figure shows that the performance of the meta-learner trained in the random task regime drops drastically as the probability of hard tasks increases in the meta-test phase. This is in accordance with our hypothesis that meta-learners seeing only random tasks in training latch onto superficial features to distinguish between the classes and hence, when faced with complex tasks at test time, fail to generalize well. The performance of the meta-learner trained in the hard task regime also suffers a small drop when faced with random tasks. Our hypothesis is that this meta-learner has never seen distinct classes and hence does not have the ability to classify them properly. From the plot, we can observe that the meta-learner trained in the mixed task regime has stable performance across all the test task distributions. This suggests that meta-learning algorithms benefit from training over a wide variety of tasks with different complexities, so that the model can adapt well to any task at test time. Fig. <ref> shows the performance of the meta-learners trained using ProtoNet in the three different training regimes on test task distributions with hard task probability varying from 0 to 1. We observe characteristics similar to those observed for MAML above. Though in this case, the model trained only on hard tasks performs quite well on tasks of all difficulty levels, similar to the mixed task regime. We surmise that this happens because ProtoNet learns an efficient feature extractor for the images, and training on hard tasks makes the representation more meaningful, which can help in distinguishing between any two given classes. § CONCLUSION The huge drop in performance of the meta-learner trained in the random task regime when faced with increasingly hard meta-test tasks, for both MAML and ProtoNet, shows that random task creation is not an effective way to generate tasks for meta-training. This also suggests that the current benchmarks containing only distinct classes are not well equipped to gauge the performance of meta-learning algorithms, because meta-learning algorithms are often employed in scenarios where the test tasks are quite varied: some of these test tasks can have distinct classes while some can consist of similar classes. For a meta-learning model to be robust and able to generalize well across a meta-test task distribution of varying complexity, it is imperative to focus on the meta-training task distribution and devise methods for the efficient training of the models. We conclude that having a uniform mix of random and hard task distributions during meta-training boosts the generalizability of the meta-learner. Finding the hard tasks is problem-specific.
For example, in our problem setting, we were solving a classification problem and termed a task a 'hard task' if all the classes for that task belong to the same cluster. These clusters were based on the WordNet hierarchy, which helped us get more information about classes and their associated hierarchies. A hard task makes it difficult for the meta-learner to discriminate among classes. Similarly, this approach can be further expanded to other problem settings by first creating a set of hard tasks for the meta-learner, followed by training on them. We believe that following this approach would benefit the learning process compared to the status quo. § FUTURE WORK In the future, we plan on evaluating black-box algorithms in conjunction with more recent and advanced meta-learning algorithms over our proposed approach of effective mini-batching of tasks. We also plan to evaluate the performance of various data augmentation techniques for images, like image flipping, rotation, and shift, against our current approach of cluster-based sampling of tasks. Lastly, we wish to explore the relationship between meta-train and meta-test tasks and leverage that information to select meta-train tasks which are tightly correlated to the meta-test tasks.
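As a concrete recap of the cluster-based task construction used throughout this study, the following minimal sketch implements the three sampling regimes from the Methodology; the cluster assignments here are synthetic placeholders, whereas in the study they come from the WordNet subtrees:

import random

N_WAY = 5
# Placeholder: 150 classes partitioned into 15 WordNet-derived clusters of 10.
clusters = [list(range(c * 10, (c + 1) * 10)) for c in range(15)]
all_classes = [cls for cluster in clusters for cls in cluster]

def sample_random_task():
    # Random task: classes drawn from the full pool, likely very distinct.
    return random.sample(all_classes, N_WAY)

def sample_hard_task():
    # Hard task: all classes come from a single cluster of similar classes.
    return random.sample(random.choice(clusters), N_WAY)

def sample_mixed_task(p_hard=0.5):
    # Mixed regime: a hard task with probability p_hard, else a random task.
    return sample_hard_task() if random.random() < p_hard else sample_random_task()

print(sample_mixed_task())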
http://arxiv.org/abs/2312.16612v1
{ "authors": [ "Prabhat Agarwal", "Shreya Singh" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231227153352", "title": "Exploring intra-task relations to improve meta-learning algorithms" }
Spin-polarized electrons from aligned dust and chiral asymmetry
Hoang
Korea Astronomy and Space Science Institute, Daejeon 34055, Republic of Korea
Department of Astronomy and Space Science, University of Science and Technology, 217 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea
§ ABSTRACT The unique biosignature of life on Earth is the homochirality of organic compounds such as amino acids, proteins, and sugars. The origin of this homochirality has remained a mystery for over a century. While high-energy spin-polarized (spin-up or spin-down) electrons (SPEs) from the β decay of radioactive nuclei, discovered by <cit.> and <cit.>, have been proposed as a potential source of symmetry breaking, their exact role in homochirality is much debated. Here we suggest magnetically aligned dust grains as a new source of SPEs, due to photoemission of electrons having spins aligned by the Barnett effect. For the interstellar UV radiation field of strength G_UV, we found that the SPE emission rate is Γ_pe^SPE ∼ 10^-14 G_UV electrons per second per H, the fraction of spin-polarized to total photoelectrons is ∼ 10%, and the SPE yield (photoelectron number per UV photon) can reach ∼ 1%, using the modern theory of grain alignment. Low-energy SPEs from aligned grains would cause chiral symmetry breaking of interstellar chiral molecules due to spin-selective (dipole-dipole) interactions. Finally, we suggest magnetically aligned grains as chiral agents that facilitate and enrich the chiral asymmetry of chiral molecules. Our proposed mechanism might explain the detection of chiral asymmetry in the ISM, comets, and meteorites due to the ubiquitous UV radiation and magnetically aligned grains, paving the way for understanding the origin and distribution of life in the universe. This mechanism, based on magnetic grain alignment, implies a role of magnetic fields in chiral symmetry breaking. § INTRODUCTION Are we alone in the Universe? This big question has stimulated human curiosity for thousands of years. In 1848, Louis Pasteur first discovered that biological molecules have a unique geometrical property, the so-called chirality (Greek for "hand")[Chirality is a geometrical property by which an object and its mirror image are non-superimposable, such as the left hand/foot and right hand/foot.]. It is now established that homochirality is the key biosignature of life on Earth (e.g., <cit.>). Since then, the question of why there is a dominance of one enantiomer over the other (i.e., symmetry breaking or chiral asymmetry) has remained a mystery for more than a century (see <cit.> for reviews). A chiral molecule has two non-superimposable mirror images, called enantiomers. Chemical reactions typically produce a racemic mixture of equal amounts of left-handed and right-handed enantiomers (e.g., the Miller-Urey experiment, <cit.>). However, chiral molecules of biological origin are composed of only one enantiomer (also called an optical isomer). For example, amino acids and proteins have only left-handed enantiomers, whereas sugars, DNA, and RNA have only right-handed enantiomers (see <cit.> for a review).
<cit.> and <cit.> showed that an initial small enantiomer excess could be amplified by autocatalytic reactions and eventually lead to a pure enantiomer. The remaining question is the origin of the small initial enantiomer excess. Interestingly, homochirality was detected first in the Murchison meteorite <cit.>. <cit.> also detected amino acids in the Murray meteorite. Recently, <cit.> found the same L-amino acid in the Tagish Lake meteorite. Yet, it is uncertain whether such homochirality was ejected from Earth or originated from the presolar nebula. In the laboratory, it is found that circularly polarized UV photons can preferentially destroy one enantiomer, leaving an excess of the other, a process called asymmetric photolysis <cit.>. Although it is not easy to completely destroy one enantiomer, asymmetric photolysis produces a small enantiomer excess. This small initial enantiomer excess can be amplified by biological processes. Circular polarization (CP) is observed in star-forming regions <cit.> and cometary comae <cit.>. Therefore, the differential absorption of circularly polarized light by chiral molecules is a plausible mechanism producing the initial enantiomer excess (i.e., chiral asymmetry) and enantio-enrichment in the ISM <cit.>. Soon after the discovery of parity violation in weak interactions <cit.>, many studies suggested that the chiral asymmetry might arise through the preferential destruction of one enantiomer in a racemic mixture by spin-polarized electrons (SPEs) produced in the β decay of radioactive nuclei. However, the induced chiral asymmetry was found to be rather weak <cit.>, and the exact role of parity violation is still debated (see <cit.> for a review). Moreover, unpolarized high-energy electrons could dilute any chirality of molecules (see <cit.> for a review). Recently, <cit.> and <cit.> revisited the effect of high-energy SPEs from parity violation and suggested that magnetically polarized cosmic rays (e.g., muons) could induce an initial chiral asymmetry in prebiotic molecules. The last two decades have witnessed significant advances in understanding the origin of homochirality. Numerous experiments established the key role of spin-polarized electrons and ferromagnetic surfaces as chiral agents (see reviews by <cit.>). For example, <cit.> first demonstrated experimentally that low-energy (≲ 10 eV) SPEs resulting from irradiation of a magnetic substrate can induce chiral-selective chemistry due to the chiral-induced spin selectivity (CISS) effect <cit.>. <cit.> suggested that the interaction of SPEs with adsorbed chiral molecules might produce a significant enantiomeric excess (chiral asymmetry) of a prebiotic molecule, which leads to the homochirality of amino acids and sugars <cit.>. Very recently, <cit.> suggested that UV irradiation of magnetic deposits in the prebiotic Earth environment (e.g., basins of closed evaporative lakes) could produce SPEs that would induce chiral asymmetry due to the CISS effect. The magnetic deposits are suggested as chiral agents facilitating the homochiral enrichment of prebiotic compounds. Here, we propose interstellar dust grains aligned with magnetic fields as an important source of SPEs and as chiral agents for interstellar chiral asymmetry. Interstellar dust is a magnetic material because iron is among the most abundant elements in the universe, and observations show that more than 95% of the cosmic iron abundance is locked in dust <cit.>.
Observations of starlight polarization <cit.> and thermal dust polarization <cit.> revealed that interstellar dust grains are asymmetric and efficiently aligned with ambient magnetic fields (see <cit.> for reviews). High-resolution polarimetric observations by single-dish telescopes <cit.> and interferometers like the Atacama Large Millimeter Array (ALMA) <cit.> reveal that dust grains are also aligned in very dense regions where young stars and planets are forming. The modern theory of grain alignment establishes that dust grains with embedded iron inclusions are efficiently aligned with the ambient magnetic field due to radiative torques and enhanced magnetic relaxation <cit.>. Rapid rotation of interstellar magnetic grains due to gas-dust and radiation-dust interactions causes the alignment of electron spins along the rotation axis due to the Barnett effect <cit.>. The Barnett effect is the inverse of the well-known Einstein-de Haas effect <cit.>, which first showed the intrinsic connection between electron spin and the macroscopic rotation of a solid body. The photoelectric effect on aligned grains induced by interstellar UV radiation can eject spin-polarized electrons with spins aligned along the magnetic alignment axis due to angular momentum conservation. Therefore, aligned grains are potentially an important source of SPEs as well as chiral agents. However, the efficiency of SPE emission depends on the degree of grain alignment with the ambient magnetic field and the grain size distribution. We will quantify the SPE emission yield of aligned grains using our modern theory of grain alignment and discuss implications for chiral asymmetry in this paper. The paper structure is described as follows. In Section <ref> we present a new mechanism of SPE emission from aligned grains and present a model for calculations of SPE photoemission based on the modern grain alignment physics. Section <ref> presents our numerical results for the SPE photoemission rate and yield induced by interstellar UV radiation. Discussion of the other sources and the implications of our results for interstellar chiral asymmetry is presented in Section <ref>. A summary of our main findings is presented in Section <ref>. § MAGNETICALLY ALIGNED GRAINS AND PHOTOEMISSION OF SPIN-POLARIZED ELECTRONS §.§ The Barnett effect and aligned electron spins Interstellar dust grains are magnetic material due to the inclusion of iron atoms, in various forms, e.g., in the matrix of silicate (e.g., MgFeSiO_4) or as iron or iron-oxide nanoparticles (see, e.g., <cit.>). In this paper, we assume that silicate grains have an oblate spheroidal shape and contain embedded iron clusters, which make them superparamagnetic.[This assumption is valid in a wide range of environments as constrained by polarimetric observations and grain alignment physics, from the diffuse ISM <cit.> to star- and planet-forming regions <cit.>.] Moreover, dust grains rotate rapidly due to collisions with the gas <cit.> and radiative torques (RATs) caused by an anisotropic radiation field <cit.>. A magnetic grain of zero-frequency susceptibility χ(0) rotating with an angular velocity Ω becomes magnetized via the Barnett effect <cit.>, which is the inverse of the Einstein-de Haas effect. According to the Barnett effect, unpaired electrons within a rotating grain of angular velocity Ω are subject to an equivalent magnetic field given by B_Bar = -Ω/|γ_e| = -2m_e Ω/(g_e e), where γ_e = -g_e μ_B/ħ is the electron gyromagnetic ratio, g_e ≈ 2, and e is the elementary charge.
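As a quick numerical check of the Barnett equivalent field above, the following sketch evaluates |B_Bar| = 2 m_e Ω / (g_e e) in SI units; the chosen rotation rate is an illustrative assumption:

# Magnitude of the Barnett equivalent field, |B_Bar| = 2 m_e Omega / (g_e e).
m_e = 9.109e-31   # electron mass [kg]
e = 1.602e-19     # elementary charge [C]
g_e = 2.0         # electron g-factor

def barnett_field(omega):
    # Equivalent magnetic field [T] felt by unpaired electrons in a grain
    # rotating at angular velocity omega [rad/s].
    return 2.0 * m_e * omega / (g_e * e)

omega = 1e5  # rad/s, illustrative suprathermal rotation rate (assumption)
print(f"B_Bar ~ {barnett_field(omega):.2e} T ({barnett_field(omega) * 1e4:.2e} G)")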
The Barnett magnetic field acts to line up the electron magnetic moments, which is equivalent to the alignment of electron magnetic moments within a body at rest by an external magnetic field. The resulting magnetic moment of a grain of volume V = 4πa³/3 is then given by μ_Bar = χ(0) V B_Bar = -χ(0) V Ω/|γ_e|, where the magnitude of χ(0) depends on the magnetic properties of the grains, such as paramagnetism, superparamagnetism, or ferro(ferri)magnetism (see, e.g., <cit.>). Figure <ref> illustrates grain magnetization by rotation via the Barnett effect. Electron spins are random when the grain is at rest (left panel) and become aligned along the rotation axis (right panel). The photoelectric effect will eject electrons having random spins from the non-rotating grain (left) and electrons with aligned spins from the rotating grain (right). §.§ Grain alignment with the magnetic field Here, we briefly review the physical processes causing the alignment of magnetic grains with the magnetic field and present a model of their alignment efficiency. The physical process of grain alignment with the ambient magnetic field is rather complicated, but it can be summarized as follows. First, fast internal relaxation by Barnett and inelastic relaxation <cit.> and nuclear relaxation <cit.> induces the efficient alignment of the axis of maximum inertia with the grain angular momentum (i.e., internal alignment). Then, the Barnett magnetic moment allows the grain to interact with the ambient magnetic field via Larmor precession. Usually, the Larmor precession occurs much faster than the other timescales involved in grain alignment, such as randomization by gas collisions (see, e.g., <cit.>), which makes the magnetic field the axis of grain alignment <cit.>. Finally, radiative torques act to spin up grains to suprathermal rotation and align grains with the magnetic field <cit.>. The enhanced paramagnetic relaxation due to magnetic inclusions can further increase the magnetic alignment, which may make grains perfectly aligned by the magnetically enhanced radiative torque (MRAT) mechanism <cit.>. Numerical simulations in <cit.> show that if the RAT alignment has a high-J attractor point, then large grains can be perfectly aligned, because grains at low-J attractors would be randomized by gas collisions and eventually transported to more stable high-J attractors by RATs. On the other hand, grain shapes with only low-J attractors would have negligible alignment due to gas randomization. For small grains, numerical simulations show that the alignment degree is rather small even in the presence of iron inclusions, because the grains rotate subthermally <cit.>. Therefore, the degree of grain alignment depends critically on the critical size above which grains can be aligned by RATs, denoted by a_align. Accounting for the alignment of small grains, we can describe the grain alignment function for the size distribution as f_align(a) = f_min + (f_max - f_min){1 - exp[-(a/2a_align)³]}, where f_min describes the alignment degree of small grains of a < a_align, including nanoparticles, and f_max is the maximum alignment degree of large grains of a > a_align. This function is approximately consistent with numerical calculations in <cit.> for grain alignment by the magnetically enhanced radiative torque (MRAT) mechanism. Numerical calculations in <cit.> showed that the alignment degree of nanoparticles could reach 2-5%, and here we take f_min = 0.025. Although their alignment degree is much lower than that of larger grains, their dominance of the total surface area makes them a considerable source of SPEs.
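A small sketch of the size-dependent alignment function above, under the stated parameters (f_min = 0.025; f_max and a_align are varied in the paper, so the defaults below are placeholders):

import numpy as np

def f_align(a, a_align=0.05, f_min=0.025, f_max=1.0):
    # Grain alignment degree vs. grain size a [micron]: f_min for small
    # grains, rising toward f_max for grains larger than a_align.
    return f_min + (f_max - f_min) * (1.0 - np.exp(-(a / (2.0 * a_align)) ** 3))

sizes = np.array([0.001, 0.01, 0.05, 0.1, 0.25])  # micron
for a, f in zip(sizes, f_align(sizes)):
    print(f"a = {a:6.3f} um -> f_align = {f:.3f}")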
To account for the variation of the MRAT alignment efficiency with the dust magnetic susceptibility and local conditions (gas and radiation field), we vary the values of both a_align and f_max, from f_max = 1 for perfect alignment to lower values for imperfect alignment (see Appendix <ref>). According to the RAT alignment theory, grains with left and right helicity would be aligned at the high-J attractor point with the angular momentum antiparallel and parallel to the magnetic field, respectively. Therefore, due to the Barnett effect, aligned grains of left helicity will have electron spins anti-parallel to the magnetic field (called spin-down), while right-helicity grains will have electron spins parallel to the ambient magnetic field (spin-up). As a result, photoelectric emission from aligned grains will produce electrons with only one spin-up/spin-down state, namely spin-polarized electrons. §.§ Photoelectric emission from aligned grains §.§.§ Interstellar UV radiation Aligned grains are irradiated by the diffuse interstellar radiation field (ISRF) from <cit.>. The UV radiation spectrum of the ISRF can be approximately given by (e.g., <cit.>): ν u_ν^MMP = 0 for hν > 13.6 eV; 3.327 × 10^-9 (hν/eV)^-4.4172 erg cm^-3 for 11.2 eV < hν < 13.6 eV; 8.463 × 10^-13 (hν/eV)^-1 erg cm^-3 for 9.26 eV < hν < 11.2 eV; and 2.055 × 10^-14 (hν/eV)^0.6678 erg cm^-3 for 5.04 eV < hν < 9.26 eV. To describe the variation of the local UV radiation field, we define u_ν^UV = G_UV u_ν^MMP, where G_UV is the UV scaling factor. Equation (<ref>) implies that the photoelectron emission rate scales as G_UV, and that the fraction of SPEs, f_SPE, is independent of G_UV. §.§.§ Photoelectric effect and photoelectric yield Irradiation of dust grains aligned with the ambient magnetic field by unpolarized UV radiation would eject spin-polarized electrons with spins directed along the magnetic field. Let Y(a,ν) be the photoelectric yield of a grain of size a induced by a photon of frequency ν. The rate of photoemission of primary electrons from one grain is J_pe(a) = ∫_{ν_pet}^{∞} Y(a,ν) π a² Q_abs (c u_ν/hν) dν, where Q_abs is the absorption efficiency and ν_pet is the frequency threshold required for the photoelectric effect, which is determined by the ionization potential (IP), i.e., hν_pet = IP, and u_ν is the specific energy density of the radiation field. Here we take the ionization potential IP = W = 8 eV for silicate grains (see <cit.>). We calculate the photoelectric yield of grains irradiated by energetic photons for different grain sizes using the method in <cit.> (see <cit.>). For the interstellar UV radiation, we take the absorption efficiency Q_abs ≈ 1 (see Figure 16 in <cit.>). Figure <ref> shows the total photoelectric yield as a function of photon energy for neutral silicate grains of different sizes. For the range of interstellar UV radiation, W < hν < 13.6 eV, the photoelectric yield is almost constant. The yield increases with photon energy for hν > 40 eV due to the emission of primary, Auger, and secondary electrons. For convenience, we parameterize the photoelectric yield for UV photons of W < hν < 13.6 eV, using the numerical calculations of the photoelectric yield shown in Figure <ref>, as Y(a) = 0.7 for a ≤ 0.001 μm; 0.5 for 0.001 μm < a ≤ 0.005 μm; 0.3 for 0.005 μm < a < 0.01 μm; 0.2 for 0.01 μm < a < 0.05 μm; and 0.1 for 0.05 μm < a < 1 μm, with Y = 0 for hν < W. Note that for X-rays (of energy above 100 eV), the photoelectric yield increases due to the contribution of Auger and secondary electrons (see Figure <ref> and <cit.>).
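The piecewise UV yield parameterization above translates directly into code; a minimal sketch follows (grain size in microns, following the paper's bins; the behavior beyond 1 micron is an assumption):

def uv_yield(a_um):
    # Photoelectric yield for UV photons (W < h*nu < 13.6 eV) as a function
    # of grain size a [micron], following the piecewise parameterization above.
    if a_um <= 0.001:
        return 0.7
    if a_um <= 0.005:
        return 0.5
    if a_um < 0.01:
        return 0.3
    if a_um < 0.05:
        return 0.2
    if a_um < 1.0:
        return 0.1
    # Beyond 1 micron the parameterization is not specified; assume the
    # largest-size value (an assumption, not from the paper).
    return 0.1

for a in (0.0005, 0.003, 0.008, 0.02, 0.1):
    print(f"a = {a} um -> Y = {uv_yield(a)}")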
However, the energy density of the diffuse interstellar X-rays is much lower than that of UV photons, so they are ignored in this section. §.§.§ SPE photoemission rate and yield The total rate of photoelectron emission per H (electrons/s/H) is then Γ_pe = ∫_{a_min}^{a_max} J_pe(a) n_H^{-1} (dn/da) da = ∫_{ν_pet}^{∞} (c u_ν/hν) dν ∫_{a_min}^{a_max} Y(a,ν) Q_abs π a² n_H^{-1} (dn/da) da, where n_H is the hydrogen density in the interstellar gas, and dn/da = C a^{-3.5} is the grain size distribution with lower and upper cutoffs a_min and a_max, respectively <cit.>. The constant C is determined by the dust-to-gas mass ratio, which is chosen to be 1:100 for the typical ISM. The emission rate of spin-polarized electrons per H from aligned grains reads Γ_pe^SPE = ∫_{a_min}^{a_max} f_align(a) J_pe(a) n_H^{-1} (dn/da) da = ∫_{ν_pet}^{∞} (c u_ν/hν) dν ∫_{a_min}^{a_max} f_align(a) Y(a,ν) Q_abs π a² n_H^{-1} (dn/da) da, where f_align accounts for the alignment degree as a function of the grain size (see Eq. <ref>). The fraction of spin-polarized photoelectrons to the total photoelectrons is f_SPE = Γ_pe^SPE/Γ_pe. We are also interested in the yield of SPE emission, defined by the ratio of the SPE emission rate to the UV irradiation rate: Y_SPE = Γ_pe^SPE/(Σ_d c n_UV) = Γ_pe^SPE/(Σ_d ∫ dν (c u_ν/hν)), where Σ_d is the total dust surface area per H, given by Σ_d = ∫_{a_min}^{a_max} da π a² (n_H^{-1} dn/da) = 2πC(a_min^{-0.5} - a_max^{-0.5}) ≃ 10^{-21} (C/10^{-25} cm^{2.5}) (a_min/0.001 μm)^{-0.5} cm² H^{-1}. For the diffuse interstellar UV, using Equation (<ref>) we estimate the density of UV photons to be n_UV ∼ 0.0054 G_UV cm^{-3}. Therefore, the SPE emission yield (per incident UV photon) is Y_SPE ≃ 0.067 (Γ_pe^SPE/10^{-14} s^{-1} H^{-1})(10^{-21} cm² H^{-1}/Σ_d). Equation (<ref>) implies Y_SPE ∼ 7% for a UV photoelectric yield of Y ∼ 0.3, i.e., 7 SPEs are produced for every 100 incident UV photons. Extreme UV or X-rays would produce a higher SPE yield due to the higher photoelectric yield (see Figure <ref>). §.§ Anisotropy and Polarization of SPE Photoemission For an isotropic radiation field, the emission of SPEs is still anisotropic due to the alignment of grains with the magnetic field, which results in a differential cross-section for ejection along and perpendicular to the magnetic field (see Figure <ref>). Assuming an isotropic photoelectric yield, the anisotropy of SPEs is defined by γ_SPE = (Γ_∥^SPE - Γ_⊥^SPE)/(Γ_∥^SPE + Γ_⊥^SPE), where Γ_∥,⊥^SPE are the SPE emission rates along and perpendicular to the magnetic field. For a cylindrical grain of radius r and height h, we get γ_SPE = (2πr² - 2πrh)/(2πr² + 2πrh) = (1 - s)/(1 + s), which implies γ_SPE = 1/3 for s = h/r = 1/2, and the anisotropy increases with grain elongation (decreasing s). One interesting feature of SPEs from aligned grains is the difference in the polarization state of SPEs with respect to the direction of electron emission. Indeed, SPEs emitted along the alignment axis have spins directed along the direction of motion, while those emitted perpendicular to the alignment axis have spins perpendicular to the direction of motion. § NUMERICAL RESULTS §.§ Dependence of SPE emission on grain alignment The efficiency of grain alignment is a key parameter for SPE photoemission. The minimum size of aligned grains, a_align, depends on several physical parameters, including the gas density and radiation field, as given by Equation (<ref>). The maximum alignment efficiency f_max depends on the magnetic susceptibility (see Eq. <ref>).
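To make the double integral for the SPE emission rate concrete, here is a schematic numerical sketch over an MRN-like size distribution; the normalization, photon density, and alignment parameters are illustrative assumptions, not the paper's calibrated values:

import numpy as np

# Schematic: Gamma_pe^SPE ~ c n_UV * Int da f_align(a) Y(a) Q_abs pi a^2 dn/da,
# with Q_abs ~ 1 and a constant yield over the UV band, as parameterized above.
C_MRN = 1e-25                        # size-distribution normalization [cm^2.5 per H] (assumed)
a = np.logspace(-3, 0, 400) * 1e-4   # grain sizes: 0.001-1 micron, in cm
dnda = C_MRN * a**-3.5               # MRN: dn/da = C a^-3.5 (per H)

def f_align(a_cm, a_align_um=0.05, f_min=0.025, f_max=1.0):
    x = a_cm * 1e4 / (2.0 * a_align_um)
    return f_min + (f_max - f_min) * (1.0 - np.exp(-x**3))

def Y(a_cm):
    a_um = a_cm * 1e4
    return np.select([a_um <= 1e-3, a_um <= 5e-3, a_um < 1e-2, a_um < 5e-2],
                     [0.7, 0.5, 0.3, 0.2], default=0.1)

c = 3e10       # speed of light [cm/s]
n_uv = 0.0054  # UV photon density for G_UV = 1 [cm^-3]
cross_section = np.trapz(f_align(a) * Y(a) * np.pi * a**2 * dnda, a)  # [cm^2/H]
gamma_spe = c * n_uv * cross_section
print(f"Gamma_pe^SPE ~ {gamma_spe:.1e} electrons/s/H (order-of-magnitude sketch)")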
To get insight into the effect of grain alignment on SPEs, here we calculate SPE photoemission for different values of a_align and f_max. Figure <ref> shows the results for photoemission of SPEs from aligned grains as a function of the alignment size, a_align, for different maximum grain sizes, a_max, assuming the typical ISRF and perfect alignment of large grains with f_max = 1. The left, middle, and right panels show the total SPE emission rate (Γ_pe^SPE/G_UV), the fraction of SPEs relative to total photoelectrons (f_SPE), and the SPE yield, respectively. One can see that aligned dust can create more than 10-20 SPEs per H. Moreover, both the SPE photoemission rate and the fraction of SPEs increase rapidly with decreasing a_align, due to the increase in photoemission from grains aligned by RATs. This is reasonable because small grains contribute more to the total surface area, and the photoelectric yield is larger for small grains. For the typical ISRF, the alignment size is a_align ≈ 0.05 μm (see Eq. <ref>), for which the fraction of SPEs is f_SPE ≈ 5%. Nanoparticles produce about 2.5% of the SPEs, owing to their weak alignment. Numerical calculations by <cit.> showed that nanoparticles could reach a maximum alignment degree of ∼ 5%. However, due to their dominant surface area and higher photoelectric yield, their contribution to SPE emission is still about 2.5-5%. Figure <ref> shows results similar to Figure <ref>, but as a function of the alignment efficiency f_max. The SPE rate increases slightly with f_max, and the SPE fraction reaches 5% for perfect alignment. For weakly aligned large grains, the emission of SPEs comes only from nanoparticles. Interestingly, one can see that increasing the maximum grain size a_max tends to decrease the SPE emission rate, due to the reduction in the surface area of the largest grains (upper panel), but increases the SPE fraction and yield (middle and lower panels). §.§ Dependence of SPE emission on grain growth Grain growth due to gas accretion and grain-grain collisions is expected to occur in dense clouds (see, e.g., <cit.>). Grain growth increases the surface area of large grains but reduces that of the smallest grains, which affects the photoemission rates. To study the effect of grain growth on SPEs, we calculate the SPE photoemission for different a_min. Figure <ref> shows results similar to Figure <ref>, but as a function of the lower cutoff of the grain size distribution. The minimum size accounts for the effect of grain growth in molecular clouds, especially in dense photodissociation regions (PDRs). The SPE rate increases slightly with a_min, while the fraction of SPEs increases with a_min due to the reduction in photoemission from small, unaligned grains. In particular, grain growth that removes the smallest grains of a < 0.01 μm could enhance the fraction of SPEs to 10%. This situation is important for PDRs irradiated by UV radiation from young stellar objects and massive star-forming regions. As in Figures <ref> and <ref>, increasing the maximum grain size a_max tends to decrease the SPE emission rate (top panel) but increases the SPE fraction and yield (middle and bottom panels). § DISCUSSION §.§ SPE emission from aligned grains by UV, X-rays, and cosmic rays In Section <ref>, we discussed the emission of SPEs from aligned grains induced by the diffuse interstellar UV radiation.
Here, we consider other sources of SPEs in astrophysical environments. Massive stars and young stellar objects (YSOs) are strong sources of UV photons. In these environments, grains are efficiently aligned by the MRAT mechanism <cit.>. Thus, we expect SPEs to be abundant in massive star-forming regions due to the enhanced SPE emission yield resulting from grain growth (see Figure <ref>). Moreover, if the UV light is circularly polarized, the amounts of left- and right-handed SPEs from aligned grains would differ. Supernova explosions, gamma-ray bursts, and quasars are the most powerful sources of energetic photons, from the EUV to X-rays, in the Universe. As shown in Figure <ref>, the photoelectric yield for EUV to X-rays is significantly higher than that for UV photons. As a result, the SPE yield from aligned grains is significantly enhanced (see Eq. <ref>). The enhanced flux of SPEs could be the source of chiral asymmetry and cause homochirality. Ly-α photons resulting from the recombination of electrons and protons in HII regions around massive stars, active galactic nuclei, or accretion disks around black holes can also be an important source of SPEs from aligned grains. Finally, cosmic rays (CRs) can eject SPEs from aligned grains via collisional ionization <cit.>. Since CRs can penetrate dense molecular clouds and protostellar disks, CR-induced SPEs could be the main driving force for inducing chiral asymmetry of complex molecules that are formed on icy grain mantles in cold and dense clouds, protostellar cores, and disks. §.§ Interaction of SPEs with chiral molecules and interstellar chiral asymmetry Experimental studies <cit.> show that low-energy (≲ 10 eV) SPEs can induce chiral-selective chemistry on a substrate due to the chiral-induced spin selectivity (CISS) effect (see Appendix <ref>), which is considered the initial step toward homochirality of chiral molecules <cit.>. Very recently, <cit.> suggested that UV irradiation of magnetic deposits in the prebiotic Earth environment (e.g., basins of closed evaporative lakes) could produce SPEs that would facilitate the homochiral assembly of life's building blocks. <cit.> found that the crystallization of racemic ribo-aminooxazoline (RAO, a precursor of RNA) on a ferromagnetic surface could achieve an enantiomeric excess of 60%. Complex organic molecules (COMs, e.g., CH_3OH and C_2H_5OH) are thought to first form in the ice mantles of dust grains from simple molecules such as H_2O, CO, HCN, and NH_3. They subsequently desorb from the grain mantle due to thermal sublimation in star-forming regions, such as hot cores around massive protostars or hot corinos around low-mass protostars <cit.>. Therefore, some chiral molecules may be formed in the icy grain mantle from these simple molecules. Independently, experiments by <cit.> and <cit.> demonstrated that amino acids (including glycine, alanine, and serine) could be formed naturally from UV photolysis of analogs of interstellar ice (consisting of H_2O, HCN, NH_3, and CH_3OH), but such amino acids are racemic (i.e., comprising equal numbers of both enantiomers). This suggested that some amino acids in meteorites could form in the ISM via photochemistry rather than in liquid water on an early Solar System body <cit.>. Recently, the first interstellar chiral molecule (propylene oxide) was detected in absorption toward the Galactic center by <cit.>. Due to the CISS effect <cit.>, the interaction between SPEs and chiral molecules on the aligned grain surface is spin-dependent.
For example, a left-handed enantiomer tends to interact more strongly with left-handed (or spin-down) than right-handed (spin-up) SPEs. A similar effect occurs between SPEs and chiral molecules in the gas phase. As a result, SPEs gradually increase the chiral asymmetry of interstellar molecules. §.§ Magnetically aligned grains as chiral agents Experimental studies (e.g., <cit.>) show that magnetic substrates act as chiral agents, i.e., they interact more strongly with one enantiomer of chiral molecules than with the other due to dipole-dipole (i.e., spin-spin) interactions. Here, we propose aligned grains as chiral agents that facilitate the enantio-enrichment of chiral molecules in the ISM. We now quantify the dipole-dipole interaction induced by aligned grains. Consider silicate grains containing embedded iron clusters with N_cl iron atoms per cluster and a volume filling factor ϕ_sp. The existence of iron clusters makes composite grains superparamagnetic, with the magnetic susceptibility increased by a factor of N_cl relative to ordinary paramagnetic material <cit.>. Therefore, the magnitude of the Barnett magnetic moment becomes μ_Bar ≃ 4.6 × 10^{-17} T_{g,1}^{1/2} a_{-5}^{1/2} St (N_{cl,4} ϕ_{sp,-2}/T_{d,1}) esu, where St is the suprathermal rotation parameter defined by St = Ω/Ω_T, with Ω_T = (kT_gas/I_a)^{1/2} and I_a the grain moment of inertia (see Appendix <ref>), a_{-5} = a/(10^{-5} cm), N_{cl,4} = N_cl/10^4, and ϕ_{sp,-2} = ϕ_sp/10^{-2}, where the normalization factor of 10^{-2} corresponds to 3% of the iron abundance embedded in the dust in the form of iron clusters (see <cit.>), and T_{g,1} = T_gas/(10 K) and T_{d,1} = T_d/(10 K), with T_gas and T_d being the gas and dust temperatures. The grain's magnetic moment μ produces a magnetic potential at a large distance r from the dipole, given by φ(r) = μ·r/r³, and a magnetic field B(r) = -∇φ(r). The potential energy due to the dipole-dipole interaction between the magnetic grain and a chiral molecule, with dipole moments μ and μ' separated by a distance r, is given by (see, e.g., <cit.>) U_dd = -μ'·B(r) = (μ'·∇)φ(r) = μ·μ'/r³ - 3(μ·r)(μ'·r)/r⁵. For parallel or anti-parallel magnetic moments, the potential energy becomes U_dd = -(μ·μ'/r³)(3cos²θ - 1), where θ is the angle between μ and r. For parallel magnetic dipoles, the grain Barnett dipole interaction is attractive, at its maximum of U_max = -2μμ'/r³ for θ = 0°, and repulsive for cos θ < 1/√3. For anti-parallel dipoles, the interaction is attractive at its maximum for θ = 90° and repulsive for θ = 0. Therefore, the dipole-dipole interaction favors attraction along the alignment axis for parallel spins and attraction along the perpendicular direction for anti-parallel spins. Equation (<ref>) implies that the interaction potential between an aligned grain of magnetic moment μ_Bar (Eq. <ref>) and a chiral molecule of magnetic moment μ_mol is given by U_dd = -(μ_Bar μ_mol/r³)(3cos²θ - 1) ≃ -0.027 St_2 r_{0.01}^{-3} p_{mol,3} (3cos²θ - 1) Ω̂_d·ŝ_mol eV, where St_2 = St/10², r_{0.01} = r/(0.01 μm), p_mol = μ_mol/μ_B is the strength of the molecule's magnetic moment with p_{mol,3} = p_mol/10^3, Ω̂_d is the unit vector of the grain spin (angular velocity), and ŝ_mol is the unit vector of the molecule's spin. Equation (<ref>) shows that the dipole-dipole interaction potential is larger than the kinetic energy of chiral molecules, kT_gas = 0.0013(T_gas/15 K) eV, for typical cold clouds of T_gas = 15 K. Thus, aligned grains tend to attract chiral molecules and SPEs with similar spins and repel the others, due to the magnetic dipole-dipole (or spin-spin) interaction.
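As a numerical check of the comparison above between the grain-molecule dipole-dipole energy and the molecules' thermal energy, the following sketch uses the normalizations in the text; the chosen angle θ is illustrative:

import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant [eV/K]

def u_dd(st=100.0, r_001=1.0, p_mol_3=1.0, theta=0.0):
    # Grain-molecule dipole-dipole energy [eV] for aligned spins:
    # U_dd ~ -0.027 (St/100) (r/0.01 um)^-3 (p_mol/10^3) (3 cos^2 theta - 1).
    return -0.027 * (st / 100.0) * r_001**-3 * p_mol_3 * (3.0 * np.cos(theta)**2 - 1.0)

T_gas = 15.0  # K, typical cold cloud
print(f"|U_dd|  = {abs(u_dd()):.4f} eV (theta = 0)")
print(f"k T_gas = {K_B_EV * T_gas:.4f} eV")
# |U_dd| of order 0.05 eV far exceeds kT ~ 0.0013 eV, so the spin-selective
# attraction dominates over thermal motion, as stated in the text.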
Therefore, aligned grains act as chiral agents that help form more complex molecules of similar chirality and increase the chiral asymmetry. Note that <cit.> discussed various examples where UV irradiation of magnetic surfaces in the ISM can result in SPEs. However, metallic surfaces may have random orientations in space due to gas collisions. As a result, direct irradiation of such randomly oriented magnetic surfaces would not produce a net spin polarization. Our proposed mechanism is based on grains aligned with the magnetic field, which can create SPEs. Since aligned grains are ubiquitous in a wide range of environments, including planet-forming disks where planets, comets, and asteroids are forming, the resulting SPEs would enrich the chiral asymmetry of chiral molecules, leading to the homochirality of amino acids observed on Earth and in meteorites. §.§ The role of magnetic fields on SPEs from dust and chiral asymmetry In our proposed mechanism of SPEs, magnetic fields play a crucial role in aligning dust grains via Larmor precession and magnetic relaxation <cit.>. Moreover, magnetic fields also affect the propagation of SPEs in the ISM through the Lorentz force. One particular feature is the difference in the polarization of SPEs: SPEs emitted along the alignment axis have spins along the direction of motion, while those emitted perpendicular to the alignment axis have spins perpendicular to the direction of motion. We call these longitudinal and transverse spin polarization, respectively. Transverse-spin SPEs would be constrained by the magnetic fields, while longitudinal SPEs can move along the magnetic field lines. Therefore, longitudinal SPEs may dominate over transverse ones. Finally, large-scale galactic magnetic fields determine the orientation of dust grains on galactic scales and hence the net spin polarization of SPEs. Therefore, within our proposed mechanism, magnetic fields appear to be a key player in chiral symmetry breaking and the origin of life. Interestingly, Louis Pasteur predicted that magnetic fields may play a role in inducing the homochirality of the biological world. § SUMMARY We proposed interstellar dust grains aligned with magnetic fields as a key photoemission source of SPEs and as chiral agents for interstellar chiral asymmetry. Our main findings are summarized as follows: * Spins of electrons within dust grains containing embedded iron inclusions are aligned by the Barnett magnetic field. Magnetic grains are aligned with the ambient magnetic field due to Larmor precession, radiative torques, and magnetic relaxation, resulting in the alignment of electron spins with the interstellar magnetic field. * Photoelectric emission of electrons from aligned grains by interstellar UV radiation produces spin-polarized (spin-up or spin-down) electrons. We quantified the rate of SPE emission from aligned grains using the modern grain alignment theory and found that the rate can reach 10^{-14} G_UV electrons per second per H. The fraction of SPEs relative to the total photoelectrons is ∼ 10-20%. * The yield of SPE photoemission, defined by the ratio of the rate of SPEs to incident UV photons, is found to reach ∼ 1%. This implies that one SPE is produced for every 100 incident UV photons. Energetic photons like X-rays can produce a higher SPE yield due to Auger and secondary electron effects. * The SPEs emitted from aligned grains could play an important role in chemical reactions, inducing chiral asymmetry of chiral molecules.
An initial chiral molecule of the same enantiomer absorbs an SPE would lead to an enantiomer excess, producing an initial small chirality asymmetry for chiral molecules. *Magnetic aligned grains act as chiral agents due to spin-spin (dipole-dipole) interaction and amplify the chiral asymmetry for complex molecules. Since aligned grains are observed in a wide range of environments, from the diffuse ISM to star- and planet-forming regions, we expect a broad implication of SPEs and aligned grains for astrochemistry and astrobiology. *If SPEs could induce an initial chiral asymmetry, our result implies that life would be more ubiquitous in the universe than previously thought.*Our proposed mechanism for SPEs based on magnetically aligned grains is more universal than ferromagnetic deposits in the prebiotic Earth required for the proposal in <cit.> because grains are aligned with the magnetic field in a wide range of astrophysical environments and can work for any magnetic material (para-, superpara, and ferro-/ferri-magnetic material).T.H. acknowledges the support by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2019R1A2C1087045). This work was partly supported by a grant from the Simons Foundation to IFIRSE, ICISE (916424, N.H.). § GRAIN ALIGNMENT BY THE MRAT MECHANISM§.§ Suprathermal rotation and grain alignment by RATsAccording to the modern grain alignment theory based on RATs, grain alignment can occur at low-J attractors and high-J attractors <cit.>. Numerical simulations in <cit.> show that if the RAT alignment has a high-J attractor point, then, large grains can be perfectly aligned because grains at low-J attractors would be randomized by gas collisions and eventually transported to more stable high-J attractors by RATs. On the other hand, grain shapes with low-J attractors would have negligible alignment due to gas randomization. For small grains, numerical simulations show that the alignment degree is rather small even in the presence of iron inclusions because grains rotate subthermally <cit.>.Let γ and λ̅ be the anisotropy degree and the mean wavelength of the radiation field of energy density u_ rad. The strength of the local radiation can be described by U = u_ rad/u_ ISRF where u_ ISRF = 8.64 × 10^-13^-3 is the energy density of the interstellar radiation field (ISRF) in the solar neighborhood (). Following <cit.> (see also ), an irregular grain of effective size a subject to a luminous radiation field can be spun up by RATs to a maximum angular velocity given byΩ_ RAT= 3γ u_ radaλ̅^-2/1.6n_ H√(2π m_ HkT_ gas)(1/1+F_ IR)≃ 9.4× 10^8 s^1/3a_-5(λ̅/1.2)^-2(γ U/n_1T_ g,1^1/2)(1/1+F_ IR)^-1, for a≲ a_ trans with a_ trans=λ̅/2.5 being the transition size at which the average RAT efficiency changes the slope.For large grains with a> a_ trans, one has Ω_ RAT=1.5γ u_ radλ̅a^-2/12n_ H√(2π m_ HkT_ gas)(1/1+F_ IR)≃ 8.1× 10^10s^-2/3a_-5^-2(λ̅/1.2) (γ U/n_1T_ g,1^1/2)(1/1+F_ IR)^-1.The suprathermal rotation number for the grain spin-up by RATs is then, _ RAT=Ω_ RAT/Ω_T≃ 1.8× 10^4ρ̂^1/2s^5/6a_-5^7/2(λ̅/1.2)^-2(γ U/n_1T_ g,1) (1/1+F_ IR), and _ RAT≃ 1.5× 10^6ρ̂^1/2s^-1/6a_-5^1/2(λ̅/1.2) (γ U/n_1T_ g,1)(1/1+F_ IR). §.§ The minimum size of aligned grains by RATs Therefore, the first parameter of the RAT alignment theory is the minimum grain size required for grain alignment, denoted by a_ align, which is defined based on the suprathermal rotation condition <cit.>. Let n_ and T_ be the local gas density and temperature. 
Let γ_ rad, u_ rad, and λ̅ be the anisotropy degree, radiation energy density, and the mean wavelength of the radiation field. The minimum alignment size by RATs is given by <cit.>:a_ align=(1.2n_ HT_ gas/γ_ rad u_ radλ̅^-2)^2/7(15m_ Hk^2/4ρ)^1/7(1+F_ IR)^2/7≃ 0.055ρ̂^-1/7(γ_-1U/n_3T_ gas,1)^-2/7×(λ̅/1.2)^4/7 (1+F_ IR)^2/7 ,where ρ̂=ρ/(3^-3) is the normalized mass density of grain material, T_ gas,1=T_ gas/10, n_3=n_/10^3^-3, γ_-1=γ_ rad/0.1, U=u_ rad/u_ ISRF with u_ ISRF=8.64× 10^-13^-3 being the radiation energy density of the interstellar radiation field (ISRF) in the solar neighborhood <cit.>, and F_ IR is a dimensionless parameter that describes the grain rotational damping by infrared emission. For the ISM and dense clouds, F_ IR≪ 1, and can be omitted in Equation (<ref>). §.§ Effect of magnetic relaxation and degree of MRAT grain alignment Traditionally, the alignment efficiency for an ensemble of grains with size a is described by the Rayleigh reduction factor, R<cit.>. The Rayleigh reduction factor describes the average alignment degree of the axis of major inertia of grains (_1) with its angular momentum (, i.e., internal alignment) and of the angular momentum with the ambient magnetic field (i.e., external alignment), given byR=⟨1/2(3cos^2β-1)1/2(3cos^2ξ-1)⟩, where β is the angle between _1 andand ξ is the angle betweenand , and ⟨...⟩ describes the averaging over an ensemble of grains (see, e.g., ). Here, R=0 for randomly oriented grains, and R=1 for perfect internal and external alignment. According to the modern grain alignment theory based on RATs, grain alignment can occur at low-J attractors and high-J attractors <cit.>. Let f_ high-J be the fraction of grains that can be aligned by RATs at high-J attractors. For grain alignment by only RATs (e.g., grains of ordinary paramagnetic material, ), high-J attractors are only present for a limited range of the radiation direction that depends on the grain shape <cit.>. Silicate grains containing embedded metallic iron or iron oxide are superparamagnetic material that has magnetic susceptibility increased by a factor of N_ cl from ordinary silicate paramagnetic material <cit.>. The enhanced magnetic susceptibility significantly increases the rate of magnetic relaxation (denoted by τ_ mag,sp^-1) and the degree of grain alignment (). The strength of magnetic relaxation for rotating composite grains is defined by the ratio of the magnetic relaxation rate relative to the gas randomization rate (denoted by τ_ gas^-1) δ_ mag=τ_ mag,sp^-1/τ_ gas^-1 = 5.6a_-5^-1N_ clϕ_ sp,-2p̂^2B_2^2/ρ̂ n_3T_,1^1/2k_ sp(Ω)/T_d,1,     where T_ d,1=T_ d/10 is the normalized dust temperature, p̂=p/5.5 where pμ_B is the mean magnetic moment per iron atom and μ_B=eħ/2m_ec is the Bohr magneton, B_2=B/10^2μ G is the normalized magnetic field strength, and ϕ_sp,-2=ϕ_ sp/10^-2. Above, k_ sp(Ω) is the function of the grain angular velocity Ω which describes the suppression of the magnetic susceptibility at high angular velocity (see, e.g., ). Detailed numerical calculations in <cit.> show the increase of f_ high-J and R with δ_ mag. To model the increase of f_ high-J with δ_ mag, as in <cit.>, we introduced the following parametric model: f_ high-J(δ_ mag) = {[0.25     for δ_ mag < 1; 0.5     for  1 ≤δ_ mag≤ 10; 1      for δ_ mag > 10;].. 
The maximum alignment degree, f_ max in the alignment function given by <ref>) could be approximately given by f_ high-J (see e.g., ).§ THE CISS EFFECTThis CISS effect was first discovered by <cit.> and <cit.> where the auhors found that the transmission of electrons through chiral molecules is spin-dependent. The later experiment by <cit.> showed that transmitted electrons through doubled-stranded (chiral) DNA are spin-polarized, and photoelectrons are spin-polarized even when they are produced by unpolarized light. The physics of CISS effect is as follows. The chiral molecule induces an electric potential _ helix<cit.>. As the electron moves through the chiral molecules with a velocity , the electron experiences an electrostatic force which acts in the perpendicular direction of the electron velocity and directs to the molecular axis, equivalent to the centripetal force. Therefore, one can write _ cen=e_ helix, where _ helix is the electric field acting on the electron by the chiral molecule <cit.>. The helix electric field can be measured experiementally and it is rather strong.Due to the centrepital force, the electron changes its initial direction and follows the chiral trajectory. In the electron's frame, this is equivalent to the electron gyrates around the effective magnetic field which is directed along the vertical direction <cit.>.Following <cit.>, the Hamiltonian of SOC is given byH_ SO=eħ/4m_e^2c^2.(×)=μ_B/2c.(×), where =m_e, μ_B=eħ/(2m_ec) andis the Pauli spin matrix.Writing in the form of the magnetic dipoleand effective field, H_ SO=-._ eff=μ_Bg_e._ eff, where g_e≈ 2. Equating Equations (<ref>) and (<ref>) one obtains_ eff=/4c×_ helix,where E,c,B are given in cgs units (cf. ).The spin-orbit coupling interaction between the electron magnetic moment (due to its spin) and the effective magnetic field causes the potentialU_e=-._ eff which results in the Zeeman splitting effect. Therefore, when passing through a chiral molecule, one electron spin state is favored while the other is disfavored, resulting in the electron spin-filtering effect by chiral molecules.
http://arxiv.org/abs/2312.15934v1
{ "authors": [ "Thiem Hoang" ], "categories": [ "astro-ph.GA", "astro-ph.EP", "physics.bio-ph" ], "primary_category": "astro-ph.GA", "published": "20231226075923", "title": "Photoemission of spin-polarized electrons from aligned grains and chiral symmetry breaking" }
../hsiam2 We define Hochschild cohomology of thesecond kind for differential graded (dg) or curved algebras as a derived functor in a compactly generated derived category of thesecond kind, and show that it is invariant under Morita equivalence of the second kind. A bimodule version of Koszul duality is constructed and used to show that Hochschild cohomology of thesecond kind is preserved under (nonconilpotent) Koszul duality. Hochschild cohomology of the second kind of an algebra often computes the ordinary Hochschild cohomology of geometrically meaningful dg categories. Examples include the category of infinity local systems on a topological space, the bounded derived category of a complex algebraic manifold and the category of matrix factorizations. * Ivan Dimitrijević, Branko Dragovich, Zoran Rakićand Jelena Stanković==========================================================================§ INTRODUCTION Hochschild cohomology of an associative algebra (or, more generally, a dg algebra) is well-known to be invariant under Morita equivalence. A related fundamental result, or a series of results, is the invariance of Hochschild cohomology under Koszul duality. In other words, Hochschild cohomology of a dg algebra and that of its Koszul dual dg coalgebra are isomorphic, together with various structures that they possess; we refer to <cit.> for the most structured version of this result and its history. At the same time, there is another version of Hochschild cohomology (sometimes called Hochschild cohomology of the second kind or compactly supported Hochschild cohomology, <cit.>) which is used e.g. in the study of categories of matrix factorizations. There are also Koszul duality theorems involving coalgebras that are not necessarily conilpotent, cf. <cit.>. In this paper we investigate the analogues of the above-mentioned results in the context of global (i.e. non-conilpotent) Koszul duality and Hochschild cohomology of the second kind. In more detail, our results are as follows. After recalling in Section<ref> the notion of a twisted complex over a dg category and, as a special case, that of a twisted finitely generated module over a dg algebra together with Morita equivalence for dg categories, we give a proof of the Morita invariance of Hochschild cohomology. Our proof is constructed in such a way that it is suitable for generalization to homological algebra of the second kind. We review coderived categories of coalgebras and two versions of nonconilpotent Koszul duality between comodules over a (not necessarily conilpotent) dg coalgebra and modules over its Koszul dual dg algebra, following <cit.> and <cit.>. In particular, we recall the compactly generated derived category of thesecond kind (A) for a dg algebra A, generated by its subcategory (A) of compact objects. (A) is equivalent to the coderived category ( A) of its Koszul dual. <ref> The Hochschild cohomology complex of thesecond kind of a dg algebra A is (A) = _(A⊗ A^)(A,A). A map of dg or curved algebras f: A → B is a Morita equivalence of the second kind if it induces a equivalence (A) ≃(B) and we showMorita invariance for Hochschild cohomology: <ref> A Morita equivalence of thesecond kind between dg algebras FA → B induces a quasi-isomorphism isomorphism (A) ≃(B) of dg algebras. This is the content of Section <ref>. 
At this point we should warn the reader that our version of Hochschild cohomology of the second kind is not the same as that of Positselski and Polishchuk <cit.> but rather lies between the ordinary Hochschild cohomology and that of Positselski-Polishchuk, cf. Remark <ref>. Our definition is less elementary than the one in <cit.> but has the advantage of being compatible with Koszul duality; in favourable cases the two definitions are equivalent. Note that Hochschild cohomology, be it of the first or second kind, is constructed as a (co)derived functor of bi(co)modules; whereas Koszul duality is usually formulated as an equivalence between one-sided modules and comodules. It is natural, therefore, to establish Koszul duality as an equivalence between bimodules over an augmented algebra and bicomodules over a suitably Koszul dual coalgebra; note that this is the approach of <cit.>. This is done in Section <ref>, both in the context of conilpotent and non-conilpotent Koszul duality. In fact, our result is slightly more general and establishes compatibility of Koszul duality with tensor products of dg algebras and dg coalgebras. A consequence of this compatibility is a Quillen equivalence between the model categories of C-bicomodules for a dg coalgebra C and dg bimodules over its cobar-construction Ω C. In the case when C is conilpotent, the weak equivalences on the Ω C-bimodule side are the ordinary quasi-isomorphisms (and our result reduces to Keller's <cit.>) but in the non-conilpotent case they are more subtle (closer to isomorphisms). We deduce: <ref> For any dg algebra A, there is a quasi-isomorphisms of dg algebras (A) ≃( A). Here A is the extended bar-construction of A, cf. <cit.> concerning this notion. Similarly, for a dg algebra A, we establish two further types of Koszul duality. One is a Quillen equivalence between the category of dg A-bimodules with the ordinary model structure (i.e. with quasi-isomorphisms for weak equivalences) and bicomodules over 𝖡A, the bar-construction of A; this is also essentially Keller's result in op. cit. The other is a Quillen equivalence between the category of A-bimodules with the compactly generated model structure of the second kind and A-bicomodules. For simplicity we state and prove our results first in the differential graded and augmented case and formulate the general results for curved, not necessarily augmented algebras in Section <ref>. In particular, Definition <ref>, Theorem <ref> and Corollary <ref> hold more generally for curved algebras. The proofs are mostly the same except that in the curved setting there is no Yoneda embedding available as a curved algebra is not a left or right module over itself. To circumvent this difficulty, we construct for every curved algebra A a dg algebra A' with (A) ≅(A') and so, (A)≅(A').It is interesting to observe that the dg algebra A' is acyclic, in particular its ordinary derived category is trivial as well as the ordinary Hochschild cohomology. Finally, in Section <ref> we give some examples of Hochschild cohomology of the second kind of a dg or curved algebra A that can be reduced to Hochschild cohomology of the first kind of the dg category of perfect A-modules of the second kind, which often has geometric or topological meaning. Results of this sort were first obtained in <cit.>. 
Specifically, with this reduction we show the following: * If A is the Dolbeault algebra of a smooth complex algebraic manifold X then (A) is Hochschild cohomology of a dg model of the bounded derived category of coherent sheaves on X, * If 𝒜^*(M) is the de Rham algebra of a smooth compact manifold M then (𝒜^*(M)) is Hochschild cohomology of the category of infinity local systems on M, * If R_w is /2-graded regular commutative algebra concentrated in even degrees with curvature element w then, under a technical assumption, (R_w) is Hochschild cohomology of the category of matrix factorizations (R,w) which encodes the structure of the hypersurface singularity defined by w. § NOTATION AND CONVENTIONS We fix a ground field k throughout. The category of (cohomologically ℤ-graded) differential graded (dg) vector spaces over k will be denoted by . The shift of a dg vector space A is denoted by A[1] so that A[1]^i=A^i+1. The categoryis monoidal with respect to the tensor product; monoids and comonoids in it are dg algebras and dg coalgebras respectively. A curved algebra (A, d, h) is a graded algebra A equipped with an element h ∈ A^2 and a derivation d such that d^2(a) = [h,a]. If h=0, a curved algebra can be viewed as a dg algebra. A curved module (M,d) over (A,d,h) is a graded A-module M with a derivation d such that d^2(m) = hm. The notions of a curved coalgebra and a comodule over it can be defined by duality. More details on curved (co)algebras and (co)modules can be found e.g. in <cit.>. §.§ Pseudocompact algebras Instead of (dg or curved) coalgebras, we will consider (dg or curved) pseudocompact algebras, i.e. topological algebras arising as inverse limits of finite-dimensional discrete algebras. Taking (continuous) duals gives a contravariant equivalence of categories between coalgebras and pseudocompact algebras, and similarly between right comodules over C and right pseudocompact modules over C^*. Local augmented pseudocompact algebras correspond to conilpotent coalgebras. A tensor product of pseudocompact algebras or modules is always assumed completed,equivalently it is the linear dual of the tensor product of the dual coalgebras or comodules. We denote the dual of a dg vector space V by V^*. If V is discrete this is (V, k) equipped with the natural inverse limit topology. If V is pseudocompact this is the discrete k-module of continuous maps V → k. In particular this ensures V^**≅ V. §.§ DG categories A dg category A is a category enriched over ; in particular, for each pair of objects X,Y ∈A, homomorphisms from X to Y form a dg k-module which we denote by _A(X,Y). A dg functor is a functor enriched over . Any dg category A has an associated ordinary category A, called its homotopy category, whose objects are the same as A but whose morphisms are defined by A(X,Y) = H^0(_A(X,Y)). A dg functor F A→B is a quasi-equivalence if * F induces quasi-isomorphisms A(X,Y) →B(FX,FY) for any objects X, Y in A. * H^0(F) is essentially surjective (or, equivalently assuming (1), H^0(F) is an equivalence of categories). § TWISTED MODULES §.§ DG modules and twistings Let A be a dg category. A (right) dg A-module is a dg functor M A^→. We denote by A the category of dg A-modules. There is a natural map h A→A, sending an object Y ∈A to (-,Y), called the Yoneda embedding; a dg version of the usual Yoneda lemma says that h is fully faithful. Similarly a left dg A-module is a right dg A^-module, i.e. a dg functor N:A →. 
We will always specify if we consider a left dg module; by default a dg module will be a right module. A dg A-module M is acyclic if it is acyclic pointwise, i.e. M(A) is an acyclic complex for all A ∈A. The derived category (A) of dg A-modules is the Verdier quotient of (A) by acyclic dg A-modules. There is a model category structure on A, where weak equivalences are pointwise quasi-isomorphisms and fibrations are pointwise surjections. All objects are fibrant; we denote by ( A) the cofibrant objects in A. By general results on model categories, (A) = (A). Twisted complexes were first defined in <cit.> and later redefined in <cit.>. A (two-sided) twisted complex over A is a formal expression (⊕_i=1^n C_i[r_i], q), where C_i ∈A, r_i ∈ℤ, n ≥ 0, q=(q_ij), q_ij∈(C_j[r_j], C_i[r_i]) homogeneous of degree 1 such that dq+q^2 = 0. A twisted complex is one-sided if q_ij = 0 for all i ≥ j. For two twisted complexes C and C', the space of morphisms of twisted complexes (C,C') is the ℤ-graded k-module of matrices f=(f_ij), f_ij∈(C_j[r_j], C'_i[r'_i]) with differential df = (df_ij) + q'f -(-1)^|f_ij|fq. Composition of morphisms is usual matrix multiplication. We denote the dg category of twisted complexes over A by (A) and the dg category of one-sided twisted complexes by (A). In <cit.>, these were respectively denoted Pre-Tr(A) and Pre-Tr^+(A); furthermore, ( A) = Pre-Tr^+(A) can alternatively be defined as the pretriangulated hull of A, that is, it is the closure of A under shifts and cones. There is a natural functor from twisted complexes over a dg category A to right dg A-modules: We send (⊕_i=1^n C_i[r_i], q) to (⊕_i=1^n (A, C_i)[r_i], q_*). As an example, consider the case where A is a dg algebra (A,d_A), considered as a dg category with one object. Recall that a Maurer–Cartan element in A is an element x ∈ A^1 such that dx+x^2 = 0, and that the set of all Maurer–Cartan elements in A is denoted by (A). Then a twisted complex over A is a pair (M, q) where M ≅ V ⊗ A as an A-module for some finite-dimensional ℤ-graded vector space V, and q ∈( V ⊗ A). The A-module M becomes a (right) dg A-module when equipped with the differential 1 ⊗ d_A + q, and in fact, every dg A-module structure on the A-module M arises this way, as noted in <cit.>. A twisted complex over A is therefore precisely a finitely generated twisted A-module in the following sense. A twisted A-module over a dg algebra A is a dg A-module that is free as an A-module after forgetting the differential, that is, it is isomorphic as an A-module to V ⊗ A for some graded vector space V. A finitely generated twisted A-module is a twisted A-module V ⊗ A with V finite-dimensional. Note that there is a slight clash of terminology here; nevertheless this is unavoidable as we wish to refer to non-finitely generated twisted modules later. Note also that (M, 1 ⊗ d_A) above is a dg ( V ⊗ A)-A-bimodule whose differential has been twisted by the element q ∈( V ⊗ A). More generally, we have the following. Let (A,d_A) be a dg algebra and x ∈(A). * The twisted algebra of A by x, denoted A^x = (A,d^x), is the dg algebra with the same underlying algebra as A and differential d^x(a) = d_A(a) + [x,a]. * Let (M,d_M) be a left dg A-module. The twisted module of M by x, denoted M^[x] = (M,d^[x]), is the left dg A^x-module with the same underlying module structure as M and differential d^[x] (m) = d(m) + xm. 
The condition x ∈(A) ensures that M^[x] is indeed a left dg A^x-module, and that furthermore, if M is a dg A-B-bimodule for some dg algebra B, then M^[x] is a dg A^x-B-bimodule, that is, the right B-module action remains compatible with the new differential. A perfect dg A-module for a dg category A is a cofibrant dg A-module that is homotopy equivalent to a direct summand of a module in (A). We denote by (A) the full dg subcategory of (A) consisting of perfect modules. It can be shown that (A) is a compactly generated triangulated category, and that (A) consists precisely the compact objects in (A), cf. <cit.>. §.§ DG Morita equivalence A dg functor F A→B is a (dg) Morita equivalence if the induced map F_! (A) →(B) is an equivalence of triangulated categories. Equivalently, F is a Morita equivalence if it induces a quasi-equivalence (A) →(B), (A) →(B)or (A) →(B). (The first two statements follow from the fact that A is compactly generated by ( A).) We will abuse notation and also denote these maps by F_!. A main result of <cit.> is that the Yoneda embedding h A→(A) ↪(A) induces dg equivalences (A) →((A)) and (A) →((A)). In particular, this implies that h_! (A) →((A)) is a quasi-equivalence, so the Yoneda embedding is a Morita equivalence. In fact, (A) is a Morita fibrant replacement of A in the sense of <cit.>. § HOCHSCHILD COHOMOLOGY OF THE FIRST AND SECOND KIND In this section we recall Hochschild cohomology for dg categories and define Hochschild cohomology of thesecond kind for dg algebras. It is well-known that Hochschild cohomology is Morita invariant; we will give an alternative proof of this fact and show how the proof can be adapted to give a Morita invariance result in the second kind case. §.§ Hochschild cohomology of the first kind Let A be a dg category. The Hochschild cohomology of A is (A) = _A⊗A^ (A,A). This is a dg algebra considered as the endomorphisms of one object. We show that Hochschild cohomology is invariant under Morita equivalence. This is well-known, cf. <cit.> but we give a proof that is different from the standard proofs and will be used as a model when proving the analogous result for Hochschild cohomology of the second kind. A Morita equivalence F A→B induces a quasi-isomorphism (A) ≃(B) of dg algebras. Since F A→B induces a map (F ⊗ F^)_!(A⊗A^) → (B⊗B^) sending A to B, and all bimodules admit cofibrant replacements, it suffices to show that (F ⊗ F^)_! (A⊗A^) →(B⊗B^) is a quasi-equivalence. Indeed, the restriction of (F ⊗ F^)_! to (A⊗A^) fits into the commutative diagram [sep=large] (A⊗A^)((A) ⊗(A)^) (B⊗B^) ((B) ⊗(B)^)["(h ⊗ h^)_!", from=1-1, to=1-2] ["(h ⊗ h^)_!", from=2-1, to=2-2] ["(F_!^⊗ F_!^)_!", from=1-2, to=2-2] ["(F ⊗ F^)_!", swap, from=1-1, to=2-1] where the rows are quasi-equivalences as the Yoneda embedding h induces a quasi-equivalence h_! (A) →((A)), and (F_!^⊗ F_!^)_! is a quasi-equivalence as F is a Morita equivalence. Hence (F ⊗ F^)_! (A⊗A^) →(B⊗B^) restricts to an equivalence of categories on the compact objects, and any such cocontinuous functor between compactly generated triangulated categories is an equivalence, by <cit.>. Finally the endomorphisms of corresponding objects are multiplicatively quasi-isomorphic. Let A be a dg category. Then the Yoneda embedding A→(A) induces a quasi-isomorphism (A) ≃((A)). Immediate as A→(A) is a Morita equivalence. §.§ Model structures of thesecond kind and Koszul duality In this section we give the other model structures that will feature in the remainder of this paper. 
These model structures will all give rise to “derived categories of thesecond kind”, the collective name given to derived categories which arise from localizing at some collection of weak equivalences that are finer than quasi-isomorphism. There are many such derived categories; we will recall the coderived category of a pseudocompact algebra, following <cit.>, and the compactly generated derived category of thesecond kind defined of a dg algebra, as in <cit.>. These have the feature of being compatible with Koszul duality, which we will also recall. We first define, following <cit.>, that in a category of dg or curved modules or comodules an object is coacyclic (resp. contraacyclic) if it lies in the smallest subcategory containing all totalizations of exact triples and closed under cones and direct sums (resp. direct products). Let now A bedg algebra and C be a pseudocompact dg algebra over a field k. The categories C and A, of pseudocompact dg C-modules and dg A-modules respectively, admit the following model category structures of thesecond kind. For any pseudocompact dg algebra C, the category C is a model category where a morphism fM → N is * a weak equivalence if the cone of the dual map f^*N^* → M^* of C^*-comodules is coacyclic; * a fibration if it is surjective; * a cofibration if it has the left lifting property with respect to acyclic fibrations. Equivalently to the dual having a coacyclic cone, we can also characterize weak equivalences directly as having contraacyclic cone. The homotopy category of C is the coderived category of C, denoted by (C). Let A be a dg algebra. There is a model category structure on A, where a morphism fM → N is * a weak equivalence if it induces a quasi-isomorphism _A(T, M) →_A(T, N) for any finitely generated twisted A-module T; * a fibration if it is surjective; * a cofibration if it has the left lifting property with respect to acyclic fibrations. Where it is necessary to distinguish between the usual model structure of the first kind on dg A-modules (where weak equivalences are quasi-isomorphisms) and the model structure of <ref>, we will denote these model categories respectively by A and A. We say that a dg A-module is cofibrant of thesecond kind if it is cofibrant in A, and denote these cofibrant objects by (A). The (compactly generated) derived category of thesecond kind (A) of dg A-modules is the homotopy category of A. Recall that the cobar construction Ω C of C is the tensor graded algebra TC^*[-1] with differential defined using the differential and multiplication on C. The model structures above are then related by the following statement, referred to as Koszul duality for the cobar construction. Let C be a pseudocompact algebra. There is a Quillen equivalence G(C)^⇄Ω C F. We will sometimes write G_C(N) for GN when we want to make the dependence on C clear. The functor G sends a pseudocompact dg C-module N to the dg Ω C-module GN := (N^*⊗Ω C)^[ξ],where ξ is the canonical Maurer–Cartan element in C ⊗Ω C. The twist makes sense as we regard N^* ⊗Ω C as a (C ⊗Ω C)^⊗Ω C-module, where the left action of C comes from the right action of C on N and the Ω C^⊗Ω C-action arises from the left and right multiplication on Ω C. Hence the twisted module (N^*⊗Ω C)^[ξ] is a left dg (C ⊗Ω C)^ξ module and a right dg Ω C-module, and GN is obtained by forgetting the left (C ⊗Ω C)^ξ-action. More explicitly, GN can be written as a complex as follows: N^* ← N^* ⊗C← N^* ⊗C^2 ←…. 
This is, in fact, a double complex where the vertical differential (not indicated explicitly) is induced by the internal differentials in C and N whereas the horizontal differential is induced by the multiplication in C and its action on N^*. Explicitly, the horizontal differential is given by d(c_0⊗…⊗ c_k⊗ n) =∑_i=0^k-1(-1)^ic_0⊗… c_ic_i+1… c_k⊗ n +(-1)^k+1c_0⊗…⊗ c_kn where the last term in the above sum corresponds to the twisting by ξ. Similarly, the functor F sends a dg Ω C-module M to the pseudocompact dg C-module FM := (M^* ⊗ C)^[ξ] where we use the left Ω C-module structure on M^* and the left C-module structure on C to define the twist. A version of Theorem <ref>, formulated in the language of coalgebras and comodules, is due to <cit.>. Combining Theorem <ref> with Positselski's result we see that if A= Ω C then the homotopy category (A) is precisely the coderived category of A defined by localizing at coacyclic dg A-modules. In the case that C is augmented local (equivalently, that C^* is conilpotent), the cobar construction Ω C is cofibrant and one recovers the more classical statement with Ω C on the right hand side. More on the relationship between different (co)derived categories can be found in <cit.>. For the sake of convenience, we will now sketch the proof of <ref>. We will later note that the arguments also work for (pseudocompact) modules over a curvedpseudocompact algebras C. First note that the functors F and G are adjoint in the sense that there are natural isomorphisms _Ω C(GN, M) ≅_ C(FM,N), for N and M as above, as both sides are identified with the dg vector spaces (N⊗ M)^[ξ]. Composing the functors gives the pseudocompact dg C-module FGN = (N ⊗Ω C^* ⊗ C)^[ξ⊗ 1+1⊗ξ], which as a (double) complex, can be written as follows: N ⊗ C ← N ⊗C^* ⊗ C ← N ⊗ (C^*)^2 ⊗ C ←…. The horizontal differential is described by a similar formula as above. Note, however, that due to the tensor factor C, the horizontal differential is acyclic except at the first term; in fact it gives the standard cobar resolution of N, the map from the above complex to N being induced by the action map N⊗ C→ N. Of course, one has to be careful calling this a resolution since C is a pseudocompact algebra and N is a pseudocompact C-module. This can be made sense of as follows. Consider the homotopy cofiber of the map FGN → N; this is the double complex that can be represented as follows: N ← N ⊗ C ← N ⊗C^* ⊗ C ← N ⊗ (C^*)^2 ⊗ C ←…. Its horizontal differential is acyclic. By taking canonical truncations this complex can be represented as a homotopy inverse limit of acyclic complexes of finite horizontal length, which are, therefore, contra-acyclic as pseudocompact C-modules. Thus, we obtain the following result, from which <ref> follows. The adjunction unit FGN → N is a cofibrant resolution of the pseudocompact dg C-module N. Analogous statements can be obtained if we instead start with a dg algebra A; however, depending on whether we consider model structures of the first or second kind over A, we will have two different equivalences. Let us denote by A the bar construction of A and by A the extended bar construction of A, both viewed as pseudocompact dg algebras. As graded algebras these are respectively defined as TA^*[-1] and TA^*[-1], where T denotes the free local pseudocompact functor while Ť denotes the free pseudocompact functor on a graded pseudocompact vector space. (When dualizing, this corresponds to the free conilpotent coalgebra and the free coalgebra, respectively. 
Note that we are using A rather than 𝖡 A to distinguish this construction from the usual bar construction on A, which would be a coalgebra dual to our A.) As before, bar differentials are defined using the differential and multiplication on A; see <cit.> in the extended bar case. We then have the following Koszul duality statements for the bar constructions, see <cit.>: There are Quillen equivalences ( A)^⇄A and ( A)^⇄A. We collect some useful facts about the model structure of thesecond kind: * Twisted modules are cofibrant of thesecond kind if they are a union of finitely generated twisted modules. All cofibrant modules are retracts of such. *Any map f: M → N in A with coacyclic or contra-acyclic cone is also a weak equivalence in the sense of Theorem <ref>. * Any A-module is fibrant, and may be written as a filtered limit of finite-dimensional modules. Under Koszul duality the finite-dimensional A-modules correspond to finitely generated twisted A-modules. An arbitrary cofibrant object is a retract of its cofibrant replacement (which is obtained as a union of finitely generated twisted modules). * Let 0 → L → M → N → 0 be an exact triple and T a finitely generated twisted A-module. Then 0 →_A(T, L) →_A(T, M) →_A(T, N) → 0 is exact. Therefore the dg space of homomorphisms from T into the totalization of L → M → N is acyclic. Thus the weak equivalences are closed under totalization of exact triples. They are automatically closed under direct products, and as any finitely generated twisted module is compact they are also closed under direct sums. We now define perfect modules of thesecond kind. A dg A-module is perfect of thesecond kind if it is homotopy equivalent to a direct summand of a two-sided twisted module in (A). We denote by (A) the full dg subcategory of (A) consisting of perfect modules of thesecond kind. We note that any homomorphism f: A→ B induces a dg functor f_!: (A)→(B) and there is a quasi-fully faithful Yoneda embedding h: A →(A). The following statement foris analogous to the corresponding statement for . For any dg category A, the Yoneda embedding induces a quasi-equivalence (A) →((A)). The natural map i (A) →(A) is a Morita fibrant replacement by <cit.>. It suffices to show that i': ^2(A) → ()^2(A) is also a Morita fibrant replacement: the map (A) → ()^2(A) is then a Morita fibrant replacement of the thefunctor (A) →^2(A) induced by the Yoneda embedding, which is a quasi-equivalence by theDefinition of . So i' is a Morita equivalence between Morita fibrant categories, and hence a quasi-equivalence, as the Morita model structure is a (left) Bousfield localisation of the usual Dwyer-Kan model structure. But the map i' is the composition ^2(A) →((A)) → ()^2(A) and each of the factors is a Morita equivalences, obtained by applying the map i to (A) and by applyingto i respectively. Hence i' is itself a Morita equivalence. Let A be a dg algebra. Then (A) is the category of compact objects in (A), and (A) is compactly generated. By definition, (A) consists of dg A-modules that are direct summands of finitely generated twisted A-modules, andby Proposition <ref> it is clear that these are compact as finite-dimensionaltwisted modules are. 
To prove the that (A) generates we use that (A) is by Koszul duality (Theorem <ref>) anti-equivalent as a triangulated category to the coderived category of pseudocompactdg modules over a pseudocompactdg algebra, which is cocompactly cogenerated by finite dimensional dg modules by <cit.> (which correspond to finitely generated twisted A-modules). Finally, to show that (A) contains all compact objects we use <cit.> in the case 𝒮 = ℛ = (A). §.§ Hochschild cohomology of thesecond kind We define Morita equivalence of thesecond kind. We will then define Hochschild cohomology of thesecond kind for dg algebras, and prove that it is invariant under Morita equivalence of thesecond kind. Let A and B be dg algebras. A morphism FA → B is a Morita equivalence of thesecond kind if it induces an equivalence of triangulated categories (A) →(B). For any dg algebra A the counit map Ω A → A is a Morita equivalence of the second kind. This followis directly by<cit.>. Morita equivalence can equivalently be characterized as follows. A morphism FA → B is a Morita equivalence of thesecond kind if and only if it induces a quasi-equivalence (A) →(B), (A) →(B) or (A) →(B). It follows from the model category structure that F is a Morita equivalence of thesecond kind if and only if the induced left Quillen functior F_! induces a quasi-equivalence (A) →(B). But this is a quasi-equivalence if and only if it is a quasi-equivalence when restricted to compact objects by Theorem <ref>, and (A) is the category of compact objects in (A) by <ref> again. Finally, as (A) is the idempotent completion of (A), (A)≃(B) follows from the quasi-equivalence (A) →(B). If A is a dg algebra, then by Lemma <ref> the Yoneda embedding A →(A) is a Morita equivalence of thesecond kind. For curved algebras A, B there is a quasi-equivalence (A ⊗ B) ≃((A) ⊗(B)) Consider the following diagram [sep=large] (A ⊗ B)((A) ⊗(A)^) ((A ⊗ B)) ["(h^A⊗ h^B)_!", from=1-1, to=1-2] ["i_!", from=1-2, to=2-2] ["h^A⊗ B_!", from=1-1, to=2-2] Here h^A ⊗ B_! is a quasi-equivalence by Proposition <ref> as the Yoneda embedding is a Morita equivalence of thesecond kind by Lemma <ref>. The functor i: (A)⊗(B) →(A ⊗ B) is quasi-fully faithful and thus so is i_!. It follows that (h^A ⊗ h^B)_! is quasi-fully faithful and thus, is a quasi-equivalence. The claimed statement is therefore proved. We will now consider the category of A-bimodules with the model structure from Theorem <ref>, [A]A := (A^⊗ A). The Hom functor _A⊗ A^(-,-): [A]A⊗[A]A→ is a Quillen bifunctor if we put the projective model structure on . By adjointness it suffices to show that ⊗_k: [A]A⊗→[A]Ais a Quillen bifunctor. So let g: P → Q be a cofibration inand f: L → M a cofibration in [A]A and consider fg: M ⊗ P ⨿_L ⊗ P L ⊗ Q → M ⊗ Q. To check that fg is a cofibration if f and g are, it suffices to check the case that g is a generating cofibration. Then we can write g: k[n] →cone(_k[n]) and we obtain fg: M ⊕ L[1] → M ⊕ M[1] which is a cofibration if f is. If g is a generating acyclic cofibration it is of the form 0 →cone(_k[n]). Then fg: L ⊗ Q → M ⊗ Q has cokernel cone(f)⊗cone(_k[n]) which is a totalization of a short exact sequence and thus coacyclic, and thus fg is a weak equivalence by Proposition <ref>. Finally, it is clear from the definition in Theorem <ref> that if f is a weak equivalence so is f ⊗_B for B bounded finitely generatedin . Thus if L → M is a weak equivalence then L ⊗ P → M ⊗ P and its pushout along L ⊗ P → L ⊗ Q are weak equivalences, as is L ⊗ Q → M ⊗ Q. 
By the 2-out-of-3 property, this gives a weak equivalence fg. We now write _A(M,N) for the derived hom space between M and N in A. Let A be a dg algebra and M a dg bimodule. The Hochschild cohomology of thesecond kind of A with coefficients in M is (A, M) = _A ⊗ A^ (A,M). We write (A) for (A,A). We are concerned with Hochschild cohomology in this paper, but could equally define Hochschild homology of the second kind to be A ⊗^L, II_A ⊗ A^A where the tensor product is derived in the model category [A]A. This makes sense as the tensor product over A ⊗ A^ is a left Quillen bifunctor intowith the projective model structure. This can be shown using that generating cofibrations in [A]A are the images of injection of finite-dimensional comodules under Koszul duality. The definition of weak equivalences ensures that tensoring with the image of a finite-dimensional comodule preserves weak equivalences. A Morita equivalence of thesecond kind FA → B between dg algebras induces an isomorphism (A) ≅(B) of dg algebras. We proceed as in the proof of <ref>. As before, it suffices to show that (F ⊗ F^)_! (A ⊗ A^) →(B ⊗ B^) is a quasi-equivalence. The restriction of (F ⊗ F^)_! to (A ⊗ A^) fits into the commutative diagram [sep=large] (A ⊗ A^)((A) ⊗(A)^)(B ⊗ B^)((B) ⊗(B)^) ["(h ⊗ h^)_!", from=1-1, to=1-2] ["(h ⊗ h^)_!", from=2-1, to=2-2] ["(F_!^⊗ F_!^)_!", from=1-2, to=2-2] ["(F ⊗ F^)_!", swap, from=1-1, to=2-1] where the rows are quasi-equivalences by Lemma <ref>, and (F_!^⊗ F_!^)_! is a quasi-equivalence as F is a Morita equivalence of thesecond kind. Hence by <ref>, the functor (F ⊗ F^)_! (A ⊗ A^) →(B ⊗ B^) restricts to an equivalence of categories on the compact objects, so is an equivalence. § BIMODULE KOSZUL DUALITY The aim of this section is to generalize the Koszul duality statements, <ref> and <ref>, to bimodules. Throughout this section, A and E will denote two augmented dg algebras, and [A]E := (A^⊗ E) will denote the category of dg A-E-bimodules. Similarly C and D will denote two augmented pseudocompact dg algebras, and the notation [C]D will be understood to mean pseudocompact bimodules, i.e. pseudocompact left C-modules that are also pseudocompact right D-modules. §.§ Koszul duality for the cobar construction We now obtain an analogue of <ref> for pseudocompact bimodules by defining the following adjunction: G([C]D)^⇄[Ω C]Ω D F. Let ξ_C ∈(C^⊗Ω C^) and ξ_D∈(D ⊗Ω D) be the canonical Maurer–Cartan elements corresponding to the counits Ω C^→ C^ and Ω D → D of the adjunction Ω⊣. Define ξ := ξ_C ⊗ 1 + 1 ⊗ξ_D∈ C^⊗Ω C^⊗ D ⊗Ω D, then ξ∈(C^⊗Ω C^⊗ D ⊗Ω D). The functor G sends a pseudocompact dg C-D-bimodule N to GN( Ω C ⊗ N^* ⊗Ω D)^[ξ], where as before, GN is a C-D-bimodule. It is the bimodule cobar-construction of the C^⊗ D-module N. It can be written as the direct sum totalization of a double complex as follows: [d] [d] [d] N^* ⊗D^2[d] C⊗ N^*⊗D^2[d][l] C^2⊗ N^*⊗D^2[d][l] [l]⋯ N^*⊗D[d] C⊗ N^*⊗D[d][l] C^2⊗ N^*⊗D[d][l] [l]⋯ N^* C⊗ N^*[l] C^2⊗ N^*[l] [l]⋯ This is, in fact, a triple complex where the third differential (not indicated explicitly) is induced on each term by the internal differentials in C, D and N. The nth row of the above complex is G_C(N⊗D^n) and the nth column is G_D(C^n⊗ N). Similarly, the functor F sends a dg Ω C-Ω D-bimodule M to the pseudocompact dg C-D-bimodule FM(C ⊗ M^* ⊗ D)^[ξ], and the functors F and G are adjoint in the sense that there is a natural isomorphism of dg vector spaces _[Ω C]Ω D(GN, M) ≅_[C]D(FM, N). 
Indeed, both sides above are identified with the dg vector spaces (N⊗ M)^[ξ]. Composing the functors, we obtain the Ω C^⊗Ω D-module FGN = (C ⊗Ω C ⊗ N ⊗Ω D ⊗D)^[ξ⊗ 1+1⊗ξ]. This is a double complex obtained from (<ref>) by tensoring each entry with C from the left and D from the right. Furthermore, this new double complex can be `augmented' by adding to it as a (-1)-st row the cobar-resolution F_CG_C(N) of the C-module N. We obtain [d] [d] [d]C⊗ N⊗D^2 ⊗ D[d] C ⊗C⊗ N ⊗D^2 ⊗ D[d][l] C ⊗C^2⊗ N⊗D^2 ⊗D[d][l] [l]⋯C ⊗ N⊗D⊗ D[d] C ⊗C⊗ N⊗D⊗ D [d][l] C⊗C^2⊗ N⊗D⊗ D [d][l] [l]⋯C ⊗ N ⊗ D [d] C ⊗C⊗ N⊗ D[l][d] C ⊗C^2⊗ N⊗ D[l][d] [l]⋯ C⊗ N C ⊗C⊗ N [l] C ⊗C^2⊗ N[l] [l]⋯ The resulting total complex can be viewed as the cofiber of a map FGN → F_CG_C(N). We canonically truncate in the vertical and horizontal direction and as in the one-sided case we obtain an inverse system of bounded acyclic complexes of finite length.This shows that this cofiber is contra-acyclic as a C-D-bimodule. Thus, we conclude that the following result holds: The adjunction unit FGN → N is a cofibrant resolution of the pseudocompact dg C-D-bimodule N. One proves similarly: The adjunction counit GFM → M is a cofibrant resolution of the dg Ω C-Ω D-bimodule M. The argument proving Proposition <ref> proves that the unit map has a contra-acyclic cone and Proposition <ref>.<ref> shows this is a weak equivalence. We can now formulate the following bimodule version of <ref>. The functor G is left adjoint to F and they form a Quillen equivalence G([C]D)^⇄[Ω C]Ω D F. We have an adjoint pair of functors (G,F) between C^⊗ D-modules and Ω C^⊗Ω D-modules; it has already been argued above that this is indeed an adjoint pair. Moreover, the functor F clearly converts cofibrations of Ω C^⊗Ω D-modules into fibrations of C^⊗ D-modules (since the latter are simply surjective maps) while G takes cofibrations ofC^⊗ D-modules to fibrations of (Ω C^⊗Ω D)-modules (since the latter are similarly surjective maps). This shows that (G,F) is a Quillen adjunction. By Propositions <ref> and <ref> F and G induce an isomorphism at the level of homotopy categories. For any pseudocompact dg algebras C and D, there is an equivalence of triangulated categories between (Ω C^⊗Ω D) and (Ω (C^⊗ D)). By Koszul duality there is a Quillen equivalence ([C]D)^⇄Ω (C^⊗ D), so by <ref> there is an equivalence of homotopy categories ([Ω C]Ω D) and (Ω (C^⊗ D)). Note that this subsection would have simplified drastically if the functor Ω was quasi-strong monoidal. This is known for local pseudocompact algebras and those dual to pointed coalgebras <cit.>, but not in general. §.§ Koszul duality for the bar constructions Let A, E be two dg algebras. We define two functors G ([ A] E)^⇄[A]EF. Let ξ_A ∈(A^⊗ A^) and ξ_E∈(E ⊗ E) be the canonical Maurer–Cartan elements corresponding to the counits Ω A^→ A^ and Ω E → E of the adjunction Ω⊣. Define ξ := ξ_A ⊗ 1 + 1 ⊗ξ_E∈ A^⊗ A^⊗ E ⊗ E, then ξ∈(A^⊗ E ⊗ A^⊗ E). The functor F sends a dg A-E-bimodule M to FM(M^* ⊗ A^⊗ E)^[ξ] and the functor G sends a pseudocompact dg A-E-bimodule N to GN(N^* ⊗ A^⊗ E)^[ξ]. Analogously we can define two functors G ([ A] E)^⇄[A]EF exactly as above, except replacing every occurrence ofby . The following statement is the bimodule version of <ref>. The functors G and G are left adjoint to F and F respectively, and form Quillen equivalences G ([ A] E)^⇄[A]EF and G ([ A] E)^⇄[A]EF. 
Furthermore for any dg algebras A and E, there are equivalences of triangulated categories ( A^⊗ E) ≅( (A^⊗ E)) and ( A^⊗ E) ≅( (A^⊗ E)); hence (G,F) and (G,F) are Quillen equivalences. We prove the theorem for the pair of functors (G,F); the proof for (G,F) is the same. By Koszul duality there are Quillen equivalences (( A^⊗ E))^⇄Ω( A^⊗ E) and ( (A^⊗ E))^⇄Ω (A^⊗ E) so it is equivalent to prove that (Ω( A^⊗ E)) ≅(Ω (A^⊗ E)). But by <ref>, (Ω( A^⊗ E)) ≅(Ω A^⊗Ω E) ≅(A^⊗ E) ≅(Ω (A^⊗ E)). Here for the middle equivalence by Proposition <ref> it suffices to check (Ω A^⊗Ω E) ≃(A^⊗ E). This holds true as both sides are equivalent to ((A^) ⊗(E)) by Lemma <ref> and since Ω A → A is a Morita equivalence of thesecond kind (<ref>). Finally this implies that (G,F) is a Quillen equivalence as ( (A^⊗ E)) ≅(A^⊗ E). As a corollary, we get the following result, the first part of whichappears in <cit.>. Here (C) for apseudocompact algebra C (sometimes called coHochschild cohomology of thedual coalgebra C^*) denotes the Hochschild cohomology _C ⊗ C^(C,C). For any dg algebra A, there are isomorphisms of algebras (A) ≅( A) and (A) ≅( A). We prove the second statement. By Theorem <ref> ( A ⊗ A^) ≅(A ⊗ A^). It suffices to check that G( A) = (A ⊗ A^* ⊗ A)^[ξ]≃ A. But this follows as G_A ⊗ A^( A) is the same as G_AF_A(A) = (A⊗ ( A ⊗ A^*)^*)^[ξ_A ⊗ 1 + 1 ⊗ξ_A]. The two expressions clearly have the same underlying graded object and the differential is induced in each case by two copies of the MC element ξ_A ∈ A ⊗ A, acting by left multiplication on A and A and by right multiplication on A and the other copy of A respectively. (A) is computedby the complex ( A ⊗ A)^ξ where the superscript induces two-sided twisting by ξ_A. The computation is the same as how one might compute Hochschild cohomology in terms of the usual tensor algebra. We have (A) ≅_ A ⊗ A^( A,A) and A is freely resolved by A ⊗A^*[1] ⊗ A → A ⊗ A via the multiplication map. Thus we obtain (A) ≃ (_k(A^*,A), d) ≃ (A ⊗ A, d) and the induced differential d is exactly the two-sided twisting by ξ_A, which is the usual Hochschild differential There is another definition of Hochschild cohomology of the second kind, cf. <cit.> which is not equivalent to our notion. To define it, let 𝖡'(A):=⊕_n=0^∞ (A̅^⊗ n)^*[-1] be the `semi-complete' bar-construction of A; it is a dg algebra that is neither discrete in general (because (A̅^⊗ n)^* is a pseudocompact vector space) nor pseudocompact (because an infinite direct sum of pseudocompact vector spaces is not pseudocompact). Nevertheless, the complex (𝖡'(A)⊗̂A)^ξ makes sense and can be taken as a definition of Hochschild cohomology of the second kind (in the sense of Polishchuk-Positselski), ^II_PP(A). This is also sometimes called compactly supported Hochschild cohomology, ^*_c(A), and the dual construction is called Borel-Moore Hochschild homology _*^BM(A). (Note that the subscript c in ^*_c(A) stands for `compactly supported' in contrast with its usage in the present paper which refersto `compactly generated'.) The pseudocompact bar-construction (A) is the pseudocompact completion of 𝖡'(A) and the ordinary bar-construction (A) is a further completion at the maximal ideal. It follows that there are maps of complexes ^II_PP(A)→(A)→(A). In other words, (A) is a kind of a half-way house between ^II_PP(A) and(A). § CURVED AND NON-AUGMENTED CASES We note that the results in the previous two sections can be generalized to the curved and non-augmented settings, i.e. 
to curved, non-augmented algebras which are Koszul dual to non-local, curved pseudocompact algebras. We restrict ourselves to the case of algebras and do not consider curved categories. We now gather the results, and indicate where there is a difference in the proofs. A twisted module over a curved algebra (A, d, w) is just a curved A-module whose underlying graded module has the form V ⊗ A fora graded k-module V. Explicitily a twisted module is of the form (V ⊗ A, 1 ⊗ d + q) whereq ∈ ( V ⊗ A)^1 satisfies dq+q^2 = 1 ⊗ w ∈ V ⊗ A. With this definition we can define finitely-generated twisted modules, (A) and (A) for a curved algebra. Note that modules over A are somewhat more subtle if A is curved since A itslef is no longer a (left or right) twisted module over itself. It is, however, always a bimodule over itself. Koszul duality extends to the curved setting following <cit.>. As a first step one may extend the bar and cobar construction to the non-augmented case by choosing a section of the unit to define a decomposition A ≅A̅⊕ k of an algebra as a k-module. This will introduce curvature. The Koszul dual of a non-augmented dg algebra is thus a curved pseudocompact algebra. Similarly the Koszul dual of a non-local pseudocompact algebra is a curved algebra. Next, while the bar construction of a curved algebra is not a good notion in general, we always have the extended bar construction A for a curved algebra A, right adjoint to the cobar construction. With this we still have the model structure on A by <cit.> and the Quillen adjunction to A^ by <cit.>. There is also a Morita equivalence of thesecond kind Ω A ≃ A. This follows directly by<cit.>. Theorem <ref> still holds in the curved setting: sompact generation by twisted modules is again inherited via Koszul duality from the fact that comodules are compactly generated by finite-dimensional comodules. This also holds in the curved, nonconilpotent setting, see <cit.>. Since a curved algebra is always a bimodule over itself, the following definition makes sense: Let A be a curved algebra and M be an A-bimodule (i.e. a module over the curved algebra A⊗ A^op). The Hochschild cohomology of thesecond kind of A with coefficients in M is (A, M) = _A ⊗ A^ (A,M). We write (A) for (A,A). The next goal is to transfer the remaining content of Section <ref>. Some results, namely Proposition <ref> and Lemma <ref> hold without any adjustments. However, other results strongly rely on the fully faithful Yoneda embedding A →(A). If A is curved, this is no longer available since A is not a (right) module over itself and so it has no Yoneda embedding. To get around this technical point, we need the following result, which is of some independent interest. Given a curved algebra A, there is a dg algebra A' such that (A) and (A') are quasi-equivalent. Let w∈ A^2 be the curvature element. Let V:=k⊕ k[1] and consider the A-module M:=A⊕ A[1]≅ A⊗ V with the differential d_M given on 1⊗ V⊂ M by the 2× 2 matrix x_w=-[ 0 1; w 0 ]. We have d_M^2=[ w 0; 0 w ] and so M is a curved (perfect) A-module. It follows that x_w is an MC element in the curved algebra A⊗(V). Set A':=_A(M)≅ A⊗(V)^x_w, the twist of A⊗(V) by x_w. The curved algebra A⊗(V) is clearly Morita equivalent of the second kind to A with the equivalence (A)→(A⊗(V)) given by the usual prescription ?↦_A(?,M). Since A' is isomorphic to A⊗(V) as a curved algebra, it follows that (A) and (A') are quasi-equivalent as desired (in fact, the constructed quasi-equivalence is even an equivalence as ordinary categories). 
It is easy to see that the dg algebra A' constructed above, is acyclic (and so, the twisted module M is homotopically trivial). Indeed, the identity element [ 1 0; 0 1 ] is the coboundary of the element [00; -10 ] in A'. Note that the ordinary derived category of A' is, of course, trivial. We now have the curved analogue of Lemma <ref>. For curved algebras A, B there is a quasi-equivalence (A ⊗ B) ≅((A) ⊗(B)) We have a quasi-equivalence (A⊗ B)≃(A'⊗ B') (where A' and B' are dg algebras II-Morita equivalent to A and B as constructed in Lemma <ref>) since A'⊗ B' is isomorphic to the tensor product of A⊗ B and a 4× 4 matrix algebra. Similarly ((A)⊗(B)) is Morita equivalent of the second kind to (A⊗ B) which reduces the question to the uncurved case that has already been proved in Lemma <ref>. Using Lemma <ref> in place of Lemma <ref>, the curved analogue of Theorem <ref> can now be shown with the same proof: A Morita equivalence of thesecond kind FA → B between curved algebras induces a quasi-isomorphism (A) ≃(B) of dg algebras. With the same adjustment of using Lemma <ref> we also have the curved analogues of Theorem <ref> and Corollary <ref> by the same proof. We will just define the necessary functors and state the theorem: As there is an adjunction Ω⊣ B also for curved pseudocompact algebras <cit.> we obtain a canonical MC element ξ_A ∈(A^⊗ A^) corresponding to the counit Ω A^→ A^. We may define ξ := ξ_A ⊗ 1 + 1 ⊗ξ_E∈ A^⊗ A^⊗ E ⊗ E, then ξ∈(A^⊗ E ⊗ A^⊗ E). The functor F sends a dg A-E-bimodule M to FM(M^* ⊗ A^⊗ E)^[ξ] and the functor G sends a pseudocompact dg A-E-bimodule N to GN(N^* ⊗ A^⊗ E)^[ξ]. Let A, E be curved algebras. The functor G is left adjoint toF respectively, and forms a Quillen equivalence G ([ A] E)^⇄[A]EF. Furthermore for any dg algebras A and E, there is an equivalence of triangulated categories ( A^⊗ E) ≅( (A^⊗ E)). For any curved algebra A, there is a quasi-isomorphism of algebras (A) ≃( A) and (A) is computedby the complex ( A ⊗ A)^ξ. § EXAMPLES OF HOCHSCHILD COHOMOLOGY OF THESECOND KIND §.§ Preliminaries In this final section we compare Hochschild cohomology of the first and second kind in some situations. Let A and E be two curved algebras and M,N be two A⊗ E-modules. Recall that _A⊗ E(M,N) is the derived complex of homomorphisms from M to N in the compactly generated model category of thesecond kind of A⊗ E-modules. Thus, _A⊗ E(M,N) can be represented as _A⊗ E(M̃,N) where M̃ is a cofibrant replacement of M as an A⊗ E-module. Recall that (?) stands for the dg category of perfect cofibrant ?-modules of thesecond kind (which can be represented as retracts of finitely generated twisted modules). The modules M and N can be viewed as (A)⊗(E)-modules: the corresponding functor to dg vector spaces (A) → is given by L↦_A⊗ E(M,L) where L is a given A⊗ E-module. We can, therefore, form _(A)⊗(E)(M,N), the derived functor of homomorphisms in the model category (of the first kind) of (A)⊗(E)-modules. We would like to compare _A⊗ E(M,N) and _(A)⊗(E)(M,N). Note that it is well-known that_A⊗ E(M,N) and _(A)⊗(E)(M,N) are naturally quasi-isomorphic, the reason being that the Yoneda embedding A⊗ E→(A)⊗(E) is a Morita equivalence of categories. This argument, however, breaks down in our situation (e.g. because we wish to compareof the first kind withof the second kind). In fact, there is not even a natural map between _A⊗ E(M,N) and _(A)⊗(E)(M,N). Consider the natural functor i (A)⊗(E)→(A⊗ E) sending a pair M, N of perfect modules of the second kind to M ⊗ N. 
Then we have the following result, similar to <cit.>. Suppose that i above is a Morita equivalence (of the first kind). Then for A⊗ E-modules M and N we have that _A⊗ E(M,N) and _(A)⊗(E)(M,N) are naturally quasi-isomorphic. We first note that there is an equivalence (A) ≅((A)) induced by the natural map ((A))→((A)). This follows by comparing compact objects, which are (A) on both sides (using ((A)) ≅(A)). With this we can compute _A⊗ E(M,N) ≃_(A⊗ E)(M,N) ≃_D((A⊗ E))(M,N) ≃_D((A)⊗(E))(M,N) ≃_(A)⊗(E)(M,N), where we used the assumption in the penultimate step. There is no reason to believe that i is always a quasi-equivalence. In fact, the dg category (A⊗ E) should be viewed as a kind of completed tensor product of the categories (A) and (E). There are, however, important situations when this completion is extraneous. Let us take E := A^ and M:=A. The above lemma specializes to the following statement. Let A be a dg algebra such that (A)⊗(A^) is Morita equivalent to (A⊗ A^) and M be an A-bimodule. Then (A,M) is naturally quasi-isomorphic to ((A),M). In other words, under the assumptions of <ref>, Hochschild cohomology of the second kind of dg algebras reduces to Hochschild cohomology of the first kind of a suitable dg category. A version of this question was considered in <cit.> where some partial results were obtained (for a different notion of Hochschild cohomology of the second kind, see Remark <ref>). §.§ Complex algebraic manifolds Let X be a compact complex projective manifold and A:=(𝒜^0,*(X), ∂̅) its Dolbeault algebra. It is well-known <cit.> that the bounded derived category ^b(X) of coherent sheaves on X is equivalent to the derived category of sheaves on X with bounded coherent cohomology and, since X is smooth, the latter coincides with the derived category of perfect complexes of sheaves on X. We will consider its dg model (X) formed by taking Dolbeault resolutions of coherent sheaves. According to <cit.>, the latter is equivalent to the dg category of dg modules over some dg algebra (the endomorphism algebra of a generator of (X)). The following result holds. The categories (X× X) and (X)⊗(X) are Morita equivalent. The statement is well-known and follows from a very general result <cit.> valid for perfect derived stacks, not merely for complex projective manifolds. The argument goes back to <cit.>. We sketch a proof for the reader's convenience. Let ℱ be a complex of sheaves representing a generator of (X) and B=R(ℱ) be its endomorphism dg algebra. Then the external tensor product ℱ⊠ℱ is a generator of (X× X) and B⊗ B≃ R(ℱ⊠ℱ). So, (X) is quasi-equivalent to (B) and (X× X) is quasi-equivalent to B⊗ B. Thus we have reduced the question to that of a perfect derived category of a dg algebra. Since (B⊗ B) is Morita equivalent to (B)⊗(B) (for any dg algebra B), the desired claim follows. The dg algebra A=𝒜^0,* satisfies the conditions of <ref>. Thus we have a natural quasi-isomorphism (A)≃((X)). This follows at once from Proposition <ref> together with the observation that A is graded commutative and so A^op≅ A and (A)≃ ((A))^op. §.§ Topological spaces In the topological setting we consider higher local systems on a topological space X. To be precise, we denote by (X) the dg category of fibrant cofibrant cohomologically locally constant (clc) sheaves of complexes over k whose cohomology sheaves have finite-dimensional fibers. We recall the following: Let X be a connected locally contractible topological space and C^*(X) its normalized singular cochain algebra with coefficients in k.
Then (C^*(X)) ≅(X). Let k = ℝ, X a connected manifold and 𝒜^*(X) its de Rham algebra. Then (𝒜^*(X)) ≅(X). For the first result it follows from <cit.> that (X) is given by finitely generated twisted modules over C^*(X) since finitely generated twisted modules correspond to clc sheaves whose fibers are bounded and finite dimensional in each degree. As (X) is idempotent complete it follows that it is quasi-equivalent to (C^*(X)). The second result is <cit.>. Note that as we restrict to finitely generated twisted complexes we did not need to consider C^*(X) as a pseudocompact algebra in the theorem. If we consider C^*(X) as a pseudocompact algebra we have (C^*(X)) ≅^∞(X), where the right hand side is the category of potentially infinite-dimensional local systems <cit.>. But (C^*(X)) is different from (C^*(X)). E.g. the local system on S^1 associated to the regular representation of π_1(S^1) exists in (C^*(X)) but not in (C^*(X)). If π_1(X) is finite then ^∞(X) is generated by (X) and their Hochschild cohomologies agree. It is known that ^∞(X) ≃ C_*Ω X (e.g. <cit.>). Thus Hochschild cohomology of ^∞(X) is given by (C_*Ω(X)) which is known to be equal to the string topology of X <cit.>. See also <cit.>. Let X be a CW complex with finitely many cells in each degree. Then (X × X) and (X) ⊗(X) are Morita equivalent. We let {G_i} be a collection of generators for (X), e.g. the collection of all (classical) local systems on X. We first show that G_i ⊠ G_j generate (X × X). To do this we follow again the argument by Bondal-Van den Bergh. Let N be right orthogonal to { G_i ⊠ G_j}, i.e. _X × X( G_i ⊠ G_j, N) ≃ 0 for all i,j. We need to show that N is trivial. By adjunction we have 0≃ _X × X( G_i ⊠ G_j, N) ≃ _X × X(π_1^*G_i, ℋom(π_2^*G_j, N)) ≃ _X(G_i, (π_1)_*ℋom(π_2^*G_j, N)). It follows that (π_1)_*ℋom(π_2^*G_j, N) ≃ 0 as the G_i generate (X) and (π_1)_*ℋom(π_2^*G_j, N) ∈(X) by our finiteness assumption. Thus the fibers of (π_1)_*ℋom(π_2^*G_j, N) must be trivial and we have _{x_1}× X(π_2^*G_j|_{x_1}× X, N|_{x_1}× X) ≃ 0 which gives _X(G_j, (π_2)_*N|_{x_1}× X) ≃ 0 and as the G_j generate we have that N|_{x_1}× X is trivial and it follows that N itself is trivial. This shows that the comparison functor (X) ⊗(X) →(X × X) given by ⊠ is essentially surjective. It remains to show that the functor is quasi-fully faithful. It suffices to check on generators, i.e. on local systems. We thus have to compare (L, L') ⊗(M, M') with (L ⊠ M, L' ⊠ M') for local systems L, L', M, M' on X. But this is equivalent to showing C^*(X, L' ⊗ L^*) ⊗ C^*(X, M' ⊗ M^*) ≃ C^*(X × X, (L' ⊗ L^*)⊠ (M' ⊗ M^*)). Thus the result follows from the Künneth theorem with local coefficients <cit.>. Here we use the finiteness assumption. Let X be a connected locally contractible topological space that has the homotopy type of a CW complex with finitely many cells in each degree. Then (C^*(X)) ≃((X)). If X is moreover a manifold, then (𝒜^*(X)) ≃((X)). §.§ Matrix Factorizations We now turn to a curved example. Let R be a commutative algebra over a field k of characteristic 0, assume R is regular, and let w ∈ R. Interpreting R as a /2-graded algebra concentrated in even degrees, we may consider w as curvature and define a curved ring R_w = (R, 0, w). Then the dg category of matrix factorizations (R,w) may be defined as the idempotent completion of the category of curved modules over R_w such that the underlying graded module is finitely generated and projective in each degree.
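A standard example, included here purely for illustration: take R = k[x] and w = x^2. Then

P = R ⊕ R[1], d = [ 0 x; x 0 ], d^2 = [ x^2 0; 0 x^2 ] = w · id_P,

so (P, d) is an object of (R, w): a pair of free modules with odd maps composing to multiplication by w.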
The homotopy category of (R,w) is known to be equivalent to the derived category of singularities ^b_coh(Z)/(Z) where Z = w^-1(0) ⊂Spec(R) is the zero locus with singular locus crit(w) <cit.>. For a /2-graded curved algebra we may define its /2-graded categories of twisted modules and perfect complexes of the second kind ^II_/2 by just changing the grading in our definitions. With these definitions we obtain the following lemma: There is a quasi-equivalence (R,w) ≅^II_/2(R_w) of /2-graded dg categories. The definitions of (R, w) and ^II_/2(R_w) agree except that matrix factorizations are built out of projective modules for R^#, the underlying graded algebra of R, rather than free modules. But any curved R_w-module whose underlying graded module P is finitely generated projective is a direct summand of a finitely generated twisted module, i.e. a curved module whose underlying graded is finitely generated free over R^#. To show this, pick a finitely generated projective R^#-module L such that P ⊕ L is free and consider G(L), the free curved R-module on L, which has as elements formal sums ℓ + dℓ' and differential d(ℓ + dℓ') = wℓ' + dℓ, cf. <cit.>. Then the underlying graded of G(L) is L⊕ L[-1] and P ⊕ P[-1] ⊕ G(L) is the desired R_w-module, proving the assertion. We define Hochschild cohomology of the second kind of R_w as in the -graded case, noting that (R_w) = _R_w ⊗ R_w^ (R_w, R_w) is now a /2-graded complex. In the situation above, let furthermore w satisfy crit(w) ⊂ w^-1(0). Then (R_w) ≃((R,w)). The Thom-Sebastiani theorem <cit.> says (R, w) ⊗(R, w) ≅(R ⊗ R, w ⊗ 1 + 1⊗ w). Rewriting in terms of curved algebras, this is exactly saying (R_w) ⊗(R_w) = (R_w ⊗ R_w). Using Lemma <ref> we are in the setting of Corollary <ref> and immediately obtain (R_w) ≃((R,w)). Note that ((R,w)) has also been computed for compactly supported Hochschild cohomology (cf. Remark <ref>). For isolated singularities of w^-1(0) this is <cit.> and in general <cit.>. They show ((R,w)) = _c(R_w). This is further used for example in <cit.>. It is notable that in this important case our definition agrees with the older one, even though the two differ in many other cases.
http://arxiv.org/abs/2312.16645v1
{ "authors": [ "Ai Guan", "Julian Holstein", "Andrey Lazarev" ], "categories": [ "math.CT", "math.AG", "math.AT" ], "primary_category": "math.CT", "published": "20231227172250", "title": "Hochschild cohomology of the second kind: Koszul duality and Morita invariance" }
Hao Xu^1,2, Yuanbin Man^1,2, Mingyang Yang^1,2, Jichao Wu^1,2, Qi Zhang^1,2, Jing Wang^1,2
[1] DAMO Academy, Alibaba Group, Hangzhou 310023, China
[2] Hupan Lab, Hangzhou 310023, China
Analytical Insight of Earth: A Cloud-Platform of Intelligent Computing for Geospatial Big Data
==============================================================================================
The rapid accumulation of Earth observation data presents a formidable challenge for the processing capabilities of traditional remote sensing desktop software, particularly when it comes to analyzing expansive geographical areas and prolonged temporal sequences. Cloud computing has emerged as a transformative solution, surmounting the barriers traditionally associated with the management and computation of voluminous datasets. This paper introduces the Analytical Insight of Earth (AI Earth), an innovative remote sensing intelligent computing cloud platform, powered by the robust Alibaba Cloud infrastructure. AI Earth provides an extensive collection of publicly available remote sensing datasets, along with a suite of computational tools powered by a high-performance computing engine. Furthermore, it provides a variety of classic deep learning (DL) models and a novel remote sensing large vision segmentation model tailored to different recognition tasks. The platform enables users to upload their unique samples for model training and to deploy third-party models, thereby increasing the accessibility and openness of DL applications. This platform will facilitate researchers in leveraging remote sensing data for large-scale applied research in areas such as resources, environment, ecology, and climate. Keywords: Cloud platform, intelligent computing, geospatial big data, large vision models, artificial intelligence, machine learning system. § INTRODUCTION With the rapid development of remote sensing technology and the launch of numerous remote sensing satellites, an increasing volume of high-spatial-resolution, multi-spectral imagery is being acquired<cit.>. This influx of remote sensing big data presents both opportunities and challenges in terms of data processing, management, and analysis. To effectively utilize and extract valuable information from these data, there is a growing need for advanced computational tools and platforms<cit.>. The emergence of cloud computing technology has revolutionized the way data is processed and analyzed. Cloud computing provides on-demand access to a shared pool of computing resources, enabling users to leverage the power of distributed computing and storage without the need for significant upfront investment in hardware and software infrastructures<cit.>. This paradigm shift has proven to be highly beneficial for various industries, including remote sensing<cit.>. Unfortunately, fully capitalizing on these resources remains a challenging endeavor that demands extensive technical expertise and effort.
One significant obstacle lies in the realm of basic information technology management, encompassing tasks such as database and server management, data acquisition and storage, deciphering of complex data formats, as well as utilizing various geospatial data processing frameworks<cit.>. To enable researchers to quickly and conveniently search and process vast quantities of remote sensing imagery, international internet giants and related research institutions have successively launched professional remote sensing cloud platforms like Google Earth Engine (GEE)<cit.>, Microsoft Planetary Computer<cit.>, and Sentinel Hub<cit.>. These professional remote sensing cloud platforms not only offer reliable remote sensing data and functionalities but also provide users with high-performance computing and storage resources to support complex remote sensing analysis and applications. Indeed, the advent of GEE has provided researchers with greater possibilities to process geospatial data at larger spatial scales. As a result, there has been a significant increase in research focusing on global-scale ecological monitoring<cit.>, natural resource surveys<cit.>, and climate change studies<cit.>. One of the key features of GEE is its extensive data catalog, which includes a wide range of satellite imagery, such as Landsat, Sentinel, MODIS, and more<cit.>. The diverse collection allows users to access historical and near real-time data, enabling them to monitor and analyze Earth's dynamic changes over time<cit.>. Also, GEE provides a user-friendly interface and a JavaScript-based code editor that allows users to write and execute complex geospatial algorithms<cit.>. This makes it easy to perform various remote sensing applications, such as agriculture<cit.>, climate change<cit.>, natural hazards<cit.>, and water resources<cit.>. Furthermore, GEE provides powerful visualization capabilities, allowing users to generate publication-quality visualizations directly within the engine. However, GEE requires users to have a certain level of programming skills, involving writing scripts in JavaScript or Python to access and process data. This poses a learning curve for users who are not familiar with programming<cit.>. In addition, although GEE provides some machine learning (ML) algorithms for tasks like image classification and object detection, users still need to implement and train these algorithms themselves. Moreover, while GEE provides a rich collection of remote sensing datasets and tools, data retrieval requires users to understand the structure of the datasets and the query syntax, and to use code to retrieve the desired data, which brings difficulties for users who only need to search and download data. Therefore, inspired by the success of GEE and to address some of its limitations, we have developed an intelligent computing cloud platform, named Analytical Insight of Earth (AI Earth), which is specifically designed to overcome the difficulties faced by remote sensing professionals and practitioners in handling large-scale data. The framework of AI Earth is shown in Fig. 1. One of the key advantages of AI Earth is its integration of artificial intelligence (AI) techniques. Users have the flexibility to interact with AI algorithms either through the graphical user interface (GUI) or by directly accessing the code. Additionally, the platform allows users to fine-tune models using custom samples, as well as deploy third-party models to extend the potential applications of deep learning (DL).
Moreover, it incorporates the AI Earth Segment Anything (AIE-SEG) large vision model, which utilizes interactive annotation through text, point-picking, or bounding boxes to label the desired objects. With this limited information, the model can efficiently perform batch segmentation to extract all similar objects in the images. What sets our application apart is the groundbreaking “zero-shot” capability, allowing for the swift, batch extraction of corresponding objects in other images without the need for individual target sample annotations. Another significant advantage of the platform is its scalability: the platform can dynamically allocate computing resources based on the users’ needs to process large-scale remote sensing imagery. The platform also supports parallel and distributed computing, enabling users to process data in a timely manner and accelerate the analysis process. This scalability and efficiency make the AI Earth platform well-suited for handling the ever-increasing volume and complexity of remote sensing imagery. § PLATFORM OVERVIEW AI Earth encompasses various functionalities including data retrieval, general computation, AI model training, and an application space. Users can access the platform homepage by logging in through their browser at <https://aiearth.aliyun.com>. On the homepage (Fig. 2(a)), users can explore the latest capabilities of AI Earth and navigate to different modules such as data resources, product capabilities, the application space, and the documentation center. Within the data retrieval page (Fig. 2(b)), users can select diverse data sources and specify retrieval criteria such as temporal, spatial, and cloud coverage parameters to swiftly obtain query results. Moreover, users can directly download the source data. To accommodate researchers and engineers from diverse disciplines, the processing and analysis modules furnish computational services in both toolbox and developer modes. In the toolbox page, users can perform tasks through interactive methods within the web-based user interface (Fig. 2(c)). Conversely, in the developer page, users can utilize the Python programming language to invoke API interfaces and write code within the embedded code editor (Fig. 2(d)). The API functions serve as the primary means for users to engage in computations and encompass a diverse array of atomic functions, including arithmetic operations on pixel values, spectral neighborhood analysis, and ML algorithms. Furthermore, developers have the flexibility to personalize their own computation functions and submit user-defined functions (UDFs) to the server for execution. Within the model training module, users can train AI models for target detection, land classification, and change detection using their own training datasets, with the aim of achieving the desired recognition and classification accuracy. The platform offers the ability to upload pre-annotated samples and also provides online annotation capabilities. Users are able to customize their own annotation labels according to their specific requirements. Additionally, to expedite the training of models that cater to individual needs, the AI Earth platform provides access to 10 publicly available sample datasets that users can utilize as a starting point.
Furthermore, users retain the capability to disseminate the algorithms and results they accomplish on the platform by constructing customized applications within the application space, thereby fostering the creation and sharing of applications with other users. The data catalog of the platform serves as a repository for multi-petabyte-scale satellite remote sensing images and data products. The satellite data primarily encompasses two types, namely Landsat and Sentinel, both of which are extensive time-series Earth observation remote sensing datasets comprising optical and SAR (Synthetic Aperture Radar) images. The data products encompass a wide array of domains, including atmospheric monitoring, land cover, ecological environment, grain crops, and socioeconomic factors. All data has undergone meticulous preprocessing to meet the requirements of distributed cloud storage, ensuring rapid and efficient accessibility for users. Users are relieved of the burden of concerning themselves with the specific data storage format, allowing them to concentrate on the spectral and feature information inherent in the remote sensing data, akin to traditional remote sensing software. Furthermore, to augment the platform's accessibility and user-friendliness, users have the option to install the platform SDK in their local Python environment. In this setup, computations conducted using third-party libraries like NumPy are executed locally, while the invoked API interfaces of the platform are submitted to the server for execution. Furthermore, the platform offers an OpenAPI specification to simplify integration with external applications, enabling streamlined batch data retrieval and task submission. § DATA CATALOG The multi-petabyte data catalog of AI Earth serves as a repository for satellite remote sensing images and data products which are widely used in geospatial analysis (Table 1). Within the catalog, the primary datasets are the Landsat<cit.> and Sentinel<cit.> archives, and users can access images from Landsat-5, Landsat-7, and Landsat-8, as well as Sentinel-1 and Sentinel-2, which provide coverage of the whole of China. Additionally, global coverage images are offered through Landsat-9, Sentinel-3, and Sentinel-5P. Moreover, the catalog includes geospatial and socioeconomic datasets pertaining to land cover classification, climate change monitoring, and grain crop estimation. The data within the platform is updated on a daily basis, incorporating approximately 2,000 scenes, and maintaining a T+1 update cadence relative to official data sources, such as NASA, the U.S. Geological Survey, and NOAA, as well as the European Space Agency. Furthermore, users can upload their own private data and leverage the AI Earth platform's intelligent computing and remote sensing analysis capabilities. At present, publicly available optical satellite imagery is mostly characterized by spatial resolutions at the scale of tens of meters, which is sufficient for surveying vast expanses of the Earth's surface but falls short in capturing detailed features of smaller objects. Conversely, high-resolution remote sensing offers more detailed observations of the Earth, providing data at meter or even sub-meter spatial resolutions that capture spatial structure, surface texture, detailed object composition, and sharper delineation of object boundaries. These attributes provide a conducive environment for effective geoscientific interpretation and analysis.
However, it is essential to note that high-resolution remote sensing imagery is typically provided by commercial satellite companies, and therefore, AI Earth cannot offer the raw data for free. Nonetheless, AI Earth supports the integration of user-purchased satellite imagery map services into the platform through standard OGC (Open Geospatial Consortium) protocols, which allows users to leverage the platform's intelligent computing capabilities for the analysis of high-resolution imagery. AI Earth employs the STAC (SpatioTemporal Asset Catalogs) standard specification, which is a common language to describe geospatial information, to govern the management of all publicly accessible data, so it can more easily be worked with, indexed, and discovered. Through the provision of a unified data query interface, users are able to obtain search results tailored to their specific criteria, encompassing parameters such as image acquisition time, area of interest, and other filtering conditions. As the search results adhere to the STAC specification, their structure can be accurately recognized and loaded by any software or application that supports STAC. The primary data source of the AI Earth data catalog is raster imagery, and as a result, the platform employs the "Image" and "ImageCollection" abstractions to describe and manage these data. An image can contain multiple bands with varying data types, resolutions, and projections, but pixels in an individual band must be homogeneous in data type, resolution, and projection. Each image is associated with key-value pairs that store metadata, including acquisition time, platform information, image dimensions, etc. Users can utilize these metadata attributes to set filtering conditions and retrieve a collection of images specific to their study area. To streamline the processing of image collections, images originating from the same sensor or production method are organized into a collection. For instance, users can select the Sentinel-2 Collection and efficiently search millions of images by specifying spatial and temporal filtering conditions. When acquiring remote sensing imagery from external official data sources, the images are typically large in size and come in various data formats, posing a challenge for distributed clusters to load them efficiently. Therefore, prior to integrating the images into the data catalog, the format of each image needs to be converted, and each image should be divided into sets of 256×256 tiles, which are subsequently stored in a distributed object storage service. In developer mode, users can debug their code and the results of computation will be exhibited on the map display. In this scenario, only the parts of images within the visible map viewport, at the resolution corresponding to the map scale, need to be loaded. To facilitate rapid display and efficient computation of extensive remote sensing imagery, a pyramid of reduced-resolution tiles must be created. AI Earth uses the nearest-neighbor sampling method to build each layer of the pyramid. The lowest layer represents the original-resolution data, while each subsequent level reduces the image size by half until the entire image fits within a single 256×256 tile. When a computation requires a reduced-resolution portion of an image, it is only necessary to retrieve the relevant tiles from the most suitable pyramid level residing in the tile storage service.
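A minimal sketch of this pyramid scheme, assuming a single-band image held as a NumPy array (the production engine additionally tracks projections, band types, and the tile storage layout):

import math
import numpy as np

TILE = 256

def downsample_nn(level):
    # Nearest-neighbor halving: keep every other pixel in each dimension.
    return level[::2, ::2]

def build_pyramid(image):
    # Level 0 is the native-resolution data; halve until one 256x256 tile suffices.
    levels = [image]
    while max(levels[-1].shape[:2]) > TILE:
        levels.append(downsample_nn(levels[-1]))
    return levels

def level_for(requested_res, native_res):
    # Pick the pyramid level whose resolution best matches the requested map scale.
    return max(0, int(math.floor(math.log2(requested_res / native_res))))

For example, a request at 40 m per pixel against 10 m native data would read level 2, whose tiles cover sixteen times the area of native-resolution tiles.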
This targeted retrieval approach ensures that only the necessary data is accessed, minimizing unnecessary data transfer and optimizing computational efficiency. § COMPUTING ARCHITECTURE AI Earth is a cloud-native, spatiotemporal remote sensing cloud computing platform that caters specifically to the field of Earth science. It is constructed using cloud-native technologies and offers an automatically managed, elastic big data environment, which is built entirely on the Alibaba Cloud infrastructure, with all subsystems and modules deployed 100% on Alibaba Cloud, including Container Service for Kubernetes (ACK), MaxCompute, PolarDB distributed databases, Object Storage Service (OSS), etc. As a geospatial data computing platform built upon cloud infrastructure, the platform effectively merges extensive remote sensing data with computational resources. It empowers users to study algorithm models at any desired scale and validate them through interactive programming. The platform comprises a toolbox mode and a developer mode. In the toolbox mode, users interact with the platform through a web UI to submit computational tasks. In the developer mode, users write code using the platform's API interfaces in the code editor and send interactive or batch queries to the server system through a REST API. Accordingly, the computation system can be divided into On-the-Fly computation and Batch computation. Tasks submitted in toolbox mode are processed by Batch computation. Meanwhile, AI Earth provides On-the-Fly computation to facilitate code debugging and visualization in developer mode. This service only loads the data required for computation within the visible map viewport at the map's zoom level, and can constrain the pixel computation to just the pixels that are viewable. After users have finished debugging their code, they can submit it to the Batch computation system to complete image processing for the specified computation area and resolution. To ensure efficient resource scheduling, On-the-Fly computation and Batch computation are deployed on separate computing clusters. A unified task scheduling system is used to ensure equitable task allocation and dynamic scaling of computing resources to balance the system load. §.§ Processing Operator AI Earth offers a wide range of basic geospatial data computation and analysis functions, and users have the flexibility to select and combine different functions to implement their research algorithms. Indeed, the platform provides two modes of development environments, namely cloud-hosted and local, to expedite code development for users. In the cloud-hosted mode, users can leverage the platform's cloud development environment and access all the necessary tools and resources directly from their web browser, eliminating the need for local installations or configurations. They can write and execute code using the platform's interactive interfaces, making it convenient to develop and test algorithms without the need for local setup. In local mode, users have the flexibility to set up their own development environment on their local machine. This mode allows users to configure their preferred Python environment and utilize third-party Python libraries as needed.
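To illustrate the separation between local and server-side execution, consider the following hypothetical snippet; the module name aie and every call shown are placeholders rather than the client library's documented API. NumPy work runs on the user's machine, while the chained platform calls merely assemble a description of the computation that is later shipped to the server:

import numpy as np   # third-party computation, executed locally
import aie           # placeholder name for the AI Earth client library

aie.initialize()     # hypothetical: authenticate against the cloud service

# Hypothetical region of interest as a lon/lat bounding box.
region = aie.Geometry.bbox(116.0, 39.5, 117.0, 40.5)

# Chained calls only build a server-side computation graph; no pixels move yet.
collection = (aie.ImageCollection("SENTINEL_2")
                 .filter_date("2022-06-01", "2022-09-01")
                 .filter_bounds(region))
ndvi = collection.median().normalized_difference(["B8", "B4"])

thresholds = np.linspace(0.2, 0.8, 7)   # plain NumPy: stays local

# Submitting the export hands the graph to the Batch computation system.
aie.export_image(ndvi, region=region, scale=10)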
Users also need to install the AI Earth client library to call computation functions, which will be submitted to the server for execution. AI Earth currently offers more than 440 geospatial data computation functions (several functions are shown in Table 2), which can be classified into simple and complex categories based on their level of complexity. Simple functions primarily involve arithmetic operations on image pixels, while complex functions encompass geo-statistics, image filtering, spectral analysis, and machine learning, among others. Most of these functions operate on an "Image", which requires reading all corresponding tiles loaded onto the work nodes of the cluster through the distributed computing engine. Users often need to compute long time series and large-scale images when utilizing the cloud platform. Therefore, the standard computational workflow typically involves utilizing functions like map() or iterate() to execute independent or sequential operations on each image in the collection. Thanks to the utilization of a tile store for storing image data and the distributed computing engine's data loading capabilities, computations can be categorized into four distinct types: parallel computation of individual tiles, joint computation of neighboring tiles, spatial aggregation statistics, and time series analysis. In the parallel computation mode of tiles, each tile is independently computed in parallel on different work nodes, ensuring high computational efficiency without any interference. Examples include pixel-based arithmetic operations, logical operations, type conversions, bitwise operations, multi-band spectral analysis, and matrix computations. When developers specify the computation area, the distributed computing engine constructs a global layout consisting of a fixed-size grid. Each grid independently requests the necessary data to load from the tile store. Processing each output tile typically requires retrieving one or a small number of tiles for each input. For instance, in multi-band spectral analysis and matrix operations within the same grid, multiple tiles representing different bands are input. The pixel values of each band are retrieved at the corresponding pixel positions to form an array or a multi-dimensional matrix, which is then fed into the specific execution function. In algorithm design, developers can call multiple arithmetic functions to calculate spectral indices such as NDVI. When multiple calculation functions are executed in series, multiple intermediate results are generated, resulting in increased computation time. To mitigate this problem, the functions written by developers are constructed into a syntax tree (AST). The arithmetic functions that can be executed within a single tile are merged together to avoid redundant calculations. During the joint computation of neighboring tiles, which is especially relevant in image filtering and convolution, it is essential to include a padding mechanism where a portion of data from neighboring tiles is added to the current tile being processed. When performing convolution operations, it is common to use fixed window sizes for the convolution kernels, such as 3×3. Therefore, when the developer submits their convolution operation code to the server for execution, the AI Earth platform pads a portion of the current input tile to create a new buffer tile based on the size of the convolution kernel.
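In simplified form, the buffering step might look as follows, where fetch stands in for the engine's tile loader (a real implementation would read only the required margin from each neighbor rather than whole tiles):

import numpy as np

TILE = 256

def buffered_tile(fetch, ix, iy, kernel_size):
    # Pad the tile at grid position (ix, iy) so a kernel-sized window is
    # valid everywhere inside the original 256x256 extent.
    pad = kernel_size // 2
    rows = [np.hstack([fetch(ix + dx, iy + dy) for dx in (-1, 0, 1)])
            for dy in (-1, 0, 1)]
    mosaic = np.vstack(rows)                 # the 3x3 neighborhood, 768x768
    lo, hi = TILE - pad, 2 * TILE + pad
    return mosaic[lo:hi, lo:hi]              # (256 + 2*pad) pixels square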
The convolution operation is then performed within the extent of the input tile, ensuring accurate calculation of boundary pixels when using a fixed window. The platform offers a range of convolution kernels, some of which allow for the specification of window size in either pixel dimensions or geographical distances. Convolution kernels defined in pixel dimensions have a constant size that does not change with map scaling. Kernels based on geographical distances, however, are converted into pixel-dimension convolution kernels based on the current computation's corresponding scale. Hence, in situations where the geographical distance is too large or the scale is extremely small, a large convolution window is generated. This large window requires excessive padding data from neighboring tiles, which ultimately leads to a decrease in computational efficiency. The spatial aggregation statistics process is indeed complex. While some computations can be parallelized, the final result necessitates summarizing and consolidating the intermediate results from the parallel computations for statistical aggregation. As a result, the calculation results of each partition cannot be output independently on the work nodes. Instead, they need to be aggregated and consolidated on the master node to complete the corresponding global statistical computation. The platform provides aggregation functions primarily for regional statistics (e.g., min, max, mean, median, etc.), attribute aggregation, and sampling an image to train a classifier. Although distributed aggregation involves a complex computation process, users do not need to understand any distributed computing rules. The platform utilizes a distribution and gathering model to provide various distributed aggregation operators for geospatial data. The study area to be aggregated is divided into sub-regions, which are then assigned to the work nodes. Each work node processes the input pixels and performs the necessary accumulation operations to compute its partial result. These partial results are sent back to the master node for further computation, where the master node merges them and transforms them into the final form. For example, when calculating the average, each work node calculates the sum and the count, and the master node collects and sums these intermediate results; the final result is obtained by dividing the sum by the total count. Because users can utilize massive remote sensing images stored in the platform to generate large-scale, high-resolution images, the usage of aggregation statistical functions may lead to the generation of a large number of computation partitions. It is possible for multiple partitions to be loaded onto a single work node simultaneously, potentially exceeding the memory limit. Therefore, when the platform allocates limited executable nodes to users, it automatically manages the partition data using a task queue. This ensures a balanced resource load and guarantees the generation of correct output from the partition calculations. Time series analysis is a widely utilized technique in remote sensing research, allowing for the extraction of dynamic information concerning changes in surface features through the analysis and processing of time series data. Time series analysis is distinct from spatial aggregation as it operates at the pixel level.
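Before turning to time series analysis in detail, the sum-and-count gathering pattern for the regional mean described above can be sketched as follows (illustrative only; the real engine operates on tile partitions and merges state on the master node):

def partial_mean(pixels):
    # Work node: accumulate (sum, count) over its assigned sub-region.
    total, count = 0.0, 0
    for value in pixels:
        total += value
        count += 1
    return total, count

def gather_mean(partials):
    # Master node: merge the partial states and finish the computation.
    total = sum(p[0] for p in partials)
    count = sum(p[1] for p in partials)
    return total / count

# Three sub-regions computed independently, then merged: yields 0.6.
parts = [partial_mean([0.2, 0.4]), partial_mean([0.6]), partial_mean([0.8, 1.0])]
mean = gather_mean(parts)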
For time series analysis, the input data consists of continuous pixel values across multiple time periods, without the need for aggregation across the entire study area. Consequently, the entire computation process for time series analysis can be parallelized and handled independently on tiles, eliminating the necessity to collect intermediate results on the master node for the final result. Instead, the pixel values from different time periods are aggregated and fed into the designed aggregation analysis operators. Compared to spatial aggregation operations, time series aggregation generally involves smaller computational loads, mostly influenced by the number of stacked values at individual pixels. To summarize, this stream computing facilitates swift and efficient aggregation computations with relatively small intermediate states. However, computations that demand significant storage space may consume excessive memory within this framework. §.§ Directed Acyclic Graph To facilitate the parsing of user code by the server-side execution engine, the platform employs a directed acyclic graph (DAG), in which each node represents the execution of an individual function or a defined data variable, to build up a description of the computation the user wishes to perform. Upon parsing the DAG, the platform determines the data range to be read and the entire execution process. It then distributes the execution plan to the work nodes. Since the computation engine loads data in a grid format, each grid executes the same DAG. Consequently, to enhance the execution efficiency of the DAG, some strategies should be employed to optimize it. Upon completion of code writing on the client side, developers generate a frontend DAG using the client library, which is constructed based on the API functions invoked by the developers and their respective execution order. After receiving the frontend DAG, the server meticulously traverses each node and generates an AST. The AST is then subjected to optimization and evaluation, aiming to streamline the execution process. In algorithm development, it is common for developers to reuse a specific intermediate variable in multiple functions. To circumvent the predicament of duplicate nodes in the syntax tree leading to redundant calculations, a prudent approach is adopted. Duplicate nodes are replaced with logical references, retaining only one actual computation node. Consequently, the actual node is executed just once, with its computed result being cached for subsequent utilization by other nodes in the DAG. The execution of nodes in the DAG follows a sequential order due to their dependencies. Therefore, certain nodes within the DAG can be rearranged without impacting the final result, enhancing the execution efficiency. The rearrangement aims to optimize the execution order based on the varying computation patterns and complexities of different nodes. The primary factor influencing efficiency is the computation load of the data. In algorithm implementation, users typically specify the dataset and study area. Consequently, the data cropping operator should be moved to the leaf nodes of the DAG, where only the user-specified computation area is read, reducing the overall data loading. In general, operators that decrease data volume should be positioned closer to the leaf nodes, where data loading is minimized. Conversely, nodes that increase data volume through computation should be placed closer to the root node.
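A toy illustration of the deduplication step described above, with hypothetical node and operator names (the production engine additionally records metadata such as projections and data ranges):

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    op: str
    args: tuple = ()

class Graph:
    # Deduplicating DAG builder: identical (op, args) pairs share one node.
    def __init__(self):
        self._nodes = {}

    def add(self, op, *args):
        key = (op, args)
        if key not in self._nodes:       # a duplicate call becomes a reference
            self._nodes[key] = Node(op, args)
        return self._nodes[key]

g = Graph()
red = g.add("select", "B4")
nir = g.add("select", "B8")
diff = g.add("subtract", nir, red)
ndvi = g.add("divide", diff, g.add("add", nir, red))
assert g.add("select", "B8") is nir      # shared node, computed only once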
This reordering helps ensure efficient execution by reducing unnecessary data loading and optimizing the flow of data through the DAG. §.§ Optimizing Strategy AI Earth generates a physical execution plan graph after completing the logical optimization of the DAG. In traditional computing architectures that do not use lazy evaluation, each node is computed sequentially during the execution process. Data input in the platform uses the tile type, which can be conceptualized as an array structure with meta information. When the data required by a computation node differs in terms of resolution, data type, or map projection, data transformation becomes necessary to ensure structural consistency of the input data. However, data transformation consumes a significant amount of computation time and lacks flexibility in adjusting the execution process. To address these challenges, the platform adopts a lazy evaluation strategy. Computation nodes do not execute their assigned tasks until the results are needed. Prior to task execution, the computation engine meticulously analyzes the data loading nodes in the DAG to ascertain crucial information such as the data request range, spatial resolution, and projection. To ensure consistency and efficiency, the platform employs a standardized data loading layout that enables the seamless loading of all required data. In addition, AI Earth offers an On-the-Fly computation mode that enables dynamic determination of the output resolution and projection based on the map's zoom level and view boundaries. The platform allows for restricting pixel calculations to within the visible view. In the Batch computation mode, developers have the capability to specify the desired spatial resolution and projection type for the output results. This ensures that the distributed engine uniformly handles data loading to prevent data transformation during the execution process. Furthermore, the logical optimization of the DAG moves the nodes that reduce the data size of the returned image to leaf nodes for priority computation. In cases where mixed computations involve nodes that increase the data size, the lower-resolution input images from the previous DAG node will be resampled. This is primarily because resampling data during computation proves more efficient than requesting high-resolution data over the network. AI Earth provides various complex function computations that involve distributed data loading. When users design and debug algorithm code in the code editor, they often make compute requests to the server. To prevent redundant computation of the same functions submitted repeatedly, the platform implements a strategy to cache the computation results in fragments. A user-initiated computation typically comprises two stages: data retrieval and function computation. When a user submits a computation request, the platform initially retrieves the query results from the data repository based on the specified dataset and query conditions. The query results are then cached. Subsequently, when the user submits the next data retrieval request, the platform checks the cache system to determine if there are cached results for the same query conditions. If cached results exist, they are directly returned to the computation system; otherwise, a new request is sent to the data repository. During the function computation stage, the platform caches the computation results based on the DAG.
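The query-result cache for the data retrieval stage can be sketched as follows (keys, storage, and eviction are simplified, and all names are illustrative):

import hashlib
import json

class QueryCache:
    def __init__(self):
        self._store = {}

    def _key(self, conditions):
        # Normalize the query conditions into a stable cache key.
        blob = json.dumps(conditions, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get_or_fetch(self, conditions, fetch):
        key = self._key(conditions)
        if key not in self._store:           # miss: go to the data repository
            self._store[key] = fetch(conditions)
        return self._store[key]

cache = QueryCache()
scenes = cache.get_or_fetch(
    {"dataset": "SENTINEL_2", "bbox": [116.0, 39.5, 117.0, 40.5], "max_cloud": 10},
    fetch=lambda c: ["scene_001", "scene_002"],   # stands in for a repository call
)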
Given the varying computational complexity of different operators, consecutive submissions of the same computation task by a user may result in certain nodes of the previous computation already being completed. In such cases, when the user submits a new run, the completed portions of the operators are retrieved from the cache system, while the remaining operators await the completion of the previous calculation. However, if the cache system encounters a failure or some operators fail during computation, a new computation is initiated based on the new request. § INTELLIGENCE COMPUTATION The deep integration of ML and remote sensing image interpretation has garnered significant attention in the context of rapidly advancing AI technology<cit.>. Researchers and international organizations have actively employed ML methods to intelligently interpret multi-source and multi-platform remote sensing data, producing a large number of high-quality Earth observation data products, especially in land cover classification<cit.>, crop yield estimation<cit.>, biomass estimation<cit.>, and natural disaster monitoring<cit.>. To facilitate the application of ML in the analysis of remote sensing imagery, AI Earth has integrated certain ML methodologies into the platform, allowing users to swiftly harness these capabilities in both toolbox mode and developer mode. The suite of ML methods offered by AI Earth encompasses traditional ML as well as DL techniques, the latter of which includes both standard DL algorithms and a large vision model. Additionally, the AI Earth platform offers capabilities in DL for model training and the deployment of third-party models, which will be discussed in the Application and Discussion section. §.§ Machine Learning ML techniques are rooted in robust theoretical principles and often come with well-defined strategies for implementation. Thus, in order to lower the barrier for users in applying ML methods, the AI Earth platform incorporates a carefully selected collection of classic ML algorithms. These include, but are not limited to, Linear Regression, Logistic Regression, Decision Trees (DT), Support Vector Machines (SVM), and Random Forests (RF). Users have the option to upload their own samples or utilize the sample function to obtain samples automatically. During the model training process, the distributed computing engine loads all samples to train the model. To optimize On-the-Fly computation, trained models are cached to avoid repeated training triggered by map zooming or panning. The advantages of ML methods lie in their strict theoretical basis, high computational efficiency, and good performance on small to medium-sized datasets. However, ML methods face limitations when it comes to handling high-dimensional, nonlinear, and large-scale data. Additionally, these methods heavily depend on expert knowledge and manual experience for feature selection and construction, thereby limiting their application in learning complex and variable land cover features. §.§ Deep Learning The advancement of DL techniques has overcome the limitations of ML with regard to feature learning, representation, and the handling of large-scale datasets, while also exhibiting enhanced generalization abilities. In the realm of land cover classification, Convolutional Neural Networks (CNN)<cit.>, Recurrent Neural Networks (RNN)<cit.>, and Generative Adversarial Networks (GAN)<cit.> represent the predominant methodologies, among which CNNs are the most extensively employed.
CNNs primarily use multiple convolutional and pooling layers to extract features from images, followed by fully connected layers for classification. For semantic segmentation, fully convolutional networks (FCN)<cit.>, U-Net<cit.>, and DeepLab<cit.> are usually used. Among these, U-Net is the most widely used in image segmentation, applied in building segmentation, road extraction, vegetation detection, etc. U-Net draws inspiration from FCN but introduces skip connections to better retain image details and spatial information. These skip connections facilitate the fusion of low-level and high-level features, thereby aiding in the accurate recovery of fine objects and boundaries. Despite their advantages, DL methods also have some limitations, such as the requirement for a large number of training samples, high computational resource demands, and time-consuming training. In cases involving low-quality data and small sample sizes, traditional ML methods remain a preferable choice. To facilitate users in directly utilizing pre-trained models for specific target extraction and segmentation tasks, the AI Earth platform offers 18 DL models, which are mainly used for land cover classification, instance segmentation, and change detection. For example, the platform employs the High-Resolution Net (HRNet) proposed by Sun et al.<cit.> to identify land cover categories using high spatial resolution imagery. The HRNet constructs a multi-scale network using parallel branches, where each branch performs feature extraction at a different resolution. Top-down and bottom-up connections are used for information propagation to improve the accuracy and precision of feature extraction. Presently, it is increasingly important to study global ecological environments and climate change based on large-scale land cover results. The Sentinel-2 dataset, with its 10-meter resolution, is freely accessible remote sensing imagery that covers the entire globe. Therefore, the platform utilizes the multimodal fine-grained dual network (Dual-Net model) proposed by Liu et al.<cit.> to produce land cover maps from Sentinel-2 imagery. The model combines multiple temporal sequences, multimodal information, and low-level constraints to perform land cover classification by inputting Sentinel-2 images from two different time periods. To aid researchers in accessing land cover maps of China, the platform has produced the classification results for the years 2020, 2021, and 2022 using the Dual-Net model and Sentinel-2 imagery (the result for 2021 is shown in Fig. 3). These results have been publicly released in the data catalog and are accessible for use by anyone. For semantic segmentation, the platform adopts the Point-based Rendering (PointRend) neural network introduced by Kirillov et al.<cit.>. PointRend approaches image segmentation as a rendering problem, employing an iterative subdivision algorithm that selectively samples non-uniform points for precise segmentation. This technique enables the platform to provide finely-tuned instance and semantic segmentation models for key structures and features including buildings, road networks, dams, greenhouses, wind turbines, and solar panels of photovoltaic power plants, among others. Additionally, to enhance the detection accuracy for rotated objects, the platform also references models such as R3Det<cit.> and the Semi-Anchored Detector<cit.>.
The platform's change detection models use twin HRNets for the extraction of high-resolution features from imagery spanning different time frames, and employ a Bi-directional Feature Pyramid Network (BIFPN) for the integration and exchange of information across features of various resolutions<cit.>. To meet the diverse requirements of different use cases, the platform provides both binary and multi-class change detection models, and also offers specialized change detection models for buildings and agricultural land, ensuring adaptability to various user needs. DL techniques for analyzing remote sensing images often necessitate that the input data be of high spatial resolution. However, the resolution of freely available public remote sensing imagery is relatively low, making it difficult to use directly. To address this, the platform provides a sophisticated super-resolution reconstruction model to enhance the imagery. This model is capable of upgrading the spatial resolution of Sentinel-2 imagery from 10 meters to 0.8 meters. §.§ Large Vision Segmentation Model The development of AI has ushered in the era of "large models", characterized by architectures with an immense quantity of parameters that exhibit remarkable generalization and transfer prowess. These models, once pre-trained, enable transfer learning to be carried out effectively with minimal domain-specific training data. Currently, the leading large models for visual segmentation primarily include SAM<cit.>, SEEM<cit.>, and SegGPT<cit.>. All three models can extract masks using interactive prompts. The architecture of these models mainly consists of two parts: the encoder and the decoder. The encoder's primary function is to process and extract embeddings from the input image and prompt information, while the decoder predicts the mask based on the input embeddings from the encoder stage, utilizing the self-attention and cross-attention mechanisms of the Transformer. Despite the shared overarching structure, there are nuanced variations in how the encoder and decoder are designed. These discrepancies reflect the distinct approaches each model employs to tackle the challenges of visual segmentation. The state-of-the-art interactive segmentation models have been trained on large amounts of data. For example, SAM was trained using 11 million images and 1.1 billion masks. As a result, these models all demonstrate strong zero-shot performance. However, SAM and SegGPT lack semantic meaning in their segmentation results. In contrast, SEEM not only has more prompt types, including the ability to take a referred region from another image as a prompt, but also outputs semantic information, broadening the scope for downstream analysis and usability. Therefore, the platform draws on the technical framework of SEEM to design a foundational model for arbitrary target extraction from remote sensing imagery, named AIE-SEG (framework shown in Fig. 4). Compared to images in the computer vision domain, remote sensing imagery is characterized by its larger size, abundant information, and more intricate backgrounds. Therefore, the platform has constructed a training dataset of tens of millions of images, with 1.3 billion labels, covering nearly a hundred remote sensing semantic categories, to train AIE-SEG. AIE-SEG utilizes points, boxes, and text as prompts for general segmentation.
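A prompt-driven call might look like the following hypothetical snippet; aie_seg, predict, and the prompt schema are illustrative placeholders, not the platform's published interface:

import aie_seg   # placeholder module for the AIE-SEG service

masks = aie_seg.predict(
    images=["scene_t1.tif", "scene_t2.tif"],     # batch extraction across scenes
    prompts=[
        {"type": "point", "xy": (512, 640)},      # a single picked pixel
        {"type": "text", "value": "greenhouse"},  # or a free-text label
    ],
)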
After receiving a small amount of prompt information about the target to be extracted, it is capable of performing batch segmentation to extract all similar targets. In contrast to SAM, which only supports segmentation on individual images, AIE-SEG supports the extraction of similar targets across multiple images. Incorporating vision-language models (VLMs) into large interactive segmentation models may become a hot spot in the development of remote sensing in the future. However, there is still a lack of comprehensive, large-scale aligned image-text datasets suitable for training large VLMs in the remote sensing field. To address this problem, we established a high-quality Remote Sensing Image Captioning dataset (RSICap), which contains 2,585 manually annotated captions with rich, high-quality information. Additionally, this dataset furnishes meticulous descriptions for each image that encompass scene descriptions (e.g., residential zones, airports, or farmlands), object information (color, shape, counting, absolute position), object relationships (e.g., relative position), and also visual reasoning knowledge (e.g., image capture season). The high-quality dataset facilitates fine-tuning existing large VLMs to build domain-specific VLMs in remote sensing. Therefore, we developed a Remote Sensing Generative Pretrained Model (RSGPT)<cit.> by fine-tuning InstructBLIP<cit.> on the RSICap dataset. The integration of RSGPT with AIE-SEG has enabled text-prompted semantic segmentation, instance segmentation, and panoptic segmentation. The AI Earth platform now offers four distinct operational models built upon AIE-SEG: single-target extraction, land cover classification, binary change detection, and multi-class change detection. Preliminary comparisons suggest that AIE-SEG has the potential to outperform standard DL models in terms of recognition accuracy and extraction capabilities. § COMPUTATIONAL EFFICIENCY As the AI Earth platform is entirely deployed on the Alibaba Cloud infrastructure, involving the use of numerous middleware components, and as there are also issues with cloud computing resource allocation and task scheduling, it is difficult to evaluate the performance and scale of the platform from an end-to-end perspective. Although numerous remote sensing computational cloud platforms have been developed internationally<cit.>, the differences in the technical solutions and underlying infrastructures of these various platforms also make it impossible to directly compare the performance of different platforms in a fair and objective manner. Therefore, this article adopts the evaluation method used in the paper by Gorelick et al.<cit.> to analyze the efficiency of AI Earth. Since the computational tasks are submitted from the client to the server, the communication between them cannot be guaranteed to be consistent. Therefore, the efficiency evaluation will not take the network communication into account, and will only focus on two aspects: the optimization of the DAG and the computation of the DAG. AI Earth uses Java for DAG optimization and C++ for DAG computation. Therefore, to assess the computational performance of this hybrid mode, it was compared with the execution of direct function calls using native C++. Based on the complexity of the DAG, five test cases, the same as those explored by Gorelick et al.<cit.>, were set as follows: a. SingleNode: A graph that comprises only a single node with one input data tile, and calculates the sum of all values in the tile. b.
b. NormalizedDifference: A graph that calculates the normalized difference of two input data tiles, taking the NDVI (Normalized Difference Vegetation Index) as an example,

NDVI = (NIR - Red) / (NIR + Red),

where NIR represents the near-infrared band and Red represents the red band.

c. DeepProduct: A graph that contains 64 binary product nodes connected in a chain and computes the sum of 65 input nodes.

d. DeepCosineSum: A graph that contains the same number of nodes as DeepProduct but uses the more expensive operation cos(a+b).

e. SumOfProducts: A graph that contains 40 input data tiles, 780 product nodes, and 779 sum nodes in a chain. The numerous data inputs and computational nodes facilitate the assessment of the performance of complex DAG computations, a situation often encountered in practical user environments.

All five test cases were computed on a single tile of 256×256 pixels. This single-tile approach was chosen because, although a large number of tiles would be processed in an actual computation, AI Earth employs distributed parallel computing; assuming sufficient computing resources, the difference in elapsed time between one tile and all tiles should be minimal. The test cases were executed using a single thread on an Intel Xeon (Ice Lake) Platinum 8369B processor at 2.4 GHz. The results, shown in Table 3, indicate that the efficiency of the graph-based computation is virtually equivalent to that of direct C++ function calls.

Furthermore, to validate the platform’s horizontal scaling capabilities, an end-to-end test was designed to calculate the NDVI across the entire territory of China. The test data consisted of Landsat 8 Level-2 imagery covering China from January 1st to December 31st, 2022, encompassing a total of 16,127 images. The AI Earth platform supports both On-the-Fly and Batch computation modes: the On-the-Fly mode initiates tile computation triggered by the interactive map, while the Batch mode loads all data to complete the calculation. Consequently, in the On-the-Fly mode, the average time taken to process all tile requests initiated by the interactive map and to display the results is recorded, whereas in the Batch mode, the total time required to complete the entire export task is recorded. The experimental results indicate that in the On-the-Fly mode, the end-to-end computation took approximately 28 seconds, while in the Batch mode, employing 100 workers each configured with 6 CPUs, a total of 27.78 million tiles (256×256 pixels each) were loaded, and the entire offline export took 2.1 hours to complete.

§ APPLICATION AND DISCUSSION

The AI Earth platform was officially launched in March 2022. After a year of development, its functionalities have become quite comprehensive, enabling researchers to carry out the majority of remote sensing application analyses<cit.>. Owing to its recency, research applications based on AI Earth are relatively sparse, especially compared with more established platforms such as GEE. Therefore, this section mainly focuses on the open capabilities of AI Earth.

§.§ User Defined Functions

Contemporary remote sensing cloud platforms predominantly adopt a client-server decoupled architecture. To grant clients convenient access to the platform's computational capabilities, an extensive suite of API functions is routinely made available.
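For concreteness, the NDVI computation used as a test case above is exactly the kind of pixel-wise operation such API functions encapsulate. A minimal NumPy sketch on a single 256×256 tile (illustrative only, not the platform's implementation) is:

```python
import numpy as np

def ndvi_tile(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Pixel-wise NDVI = (NIR - Red) / (NIR + Red) on one 256x256 tile.
    Each pixel is independent, so tiles can be processed in parallel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)  # guard against 0/0
    return out

# Example on synthetic reflectance tiles:
rng = np.random.default_rng(0)
nir = rng.uniform(0.0, 1.0, (256, 256))
red = rng.uniform(0.0, 1.0, (256, 256))
print(ndvi_tile(nir, red).mean())
```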
Users of these functions see only their inputs and outputs, remaining agnostic to the underlying implementation details, a common practice in the development of such platforms. Nonetheless, the built-in API functions may not always align with users' specialized research requirements<cit.>. This may necessitate writing bespoke functions, yet integrating them with the server-side execution environment of the platform often remains out of reach. To overcome this limitation, AI Earth provides a UDF framework (Fig. 5). This construct allows users to create personalized functions on top of the framework, which in turn ensures smooth integration and operation within the server environment of the cloud platform.

When composing UDFs in the code editor interface, users submit their custom scripts to the server for execution. UDFs load input data in exactly the same way as the platform-provided API functions, employing a unified data loading module to read the necessary imagery from the tile store. In adherence to the concepts of distributed data processing and computation, UDFs fall into two distinct categories: pixel-oriented User Defined Scalar Functions (UDSFs) and zonal-focused User Defined Aggregation Functions (UDAFs). UDSFs operate independently on individual pixels, using the data loading modules to fetch imagery in discrete 256×256-pixel tiles. The computational process involves constructing a DAG that integrates both platform-intrinsic API functions and UDFs; this graph undergoes orchestration and optimization before submission to the distributed computing framework. The results are then compiled into complete raster datasets, a process made efficient by the pixel-wise independence of the calculations. UDAFs employ the same data ingestion protocol as UDSFs; however, due to their aggregative nature, they must maintain intermediate computational states in a distributed state repository. This repository enables the sharing of these states across computing nodes, employing a model akin to the MapReduce paradigm for result generation. Additionally, to remain consistent with the data models of the platform's built-in API functions, the data models within UDFs are divided into ImageSet, Image, and Band, corresponding to ImageCollection, Image, and Image.Select(), respectively. This conformity allows users to intuitively grasp the data flow within their UDFs and to manage metadata more effectively, reducing the complexity of development for end users.

§.§ Deep Learning Model Training and Deployment

Combined with a distributed computing system, DL enables rapid interpretation and analysis of large-scale image collections. Remaining constraints include a lack of diversity in training datasets, which do not cover the full spectrum of scenarios encountered in testing or real-world applications; the voluminous parameter space of DL networks, which demands extensive training data for adequate model calibration; and imbalances among training dataset categories, which can skew model performance toward overrepresented classes. Users may therefore need to build new training datasets according to their research needs and retrain models to achieve new interpretation objectives. The AI Earth platform provides complete modules for sample production, model training, and model deployment (workflow shown in Fig. 6).
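Before walking through each module, a schematic sketch of this sample-train-deploy pipeline may help fix ideas; every name below (TrainConfig, train_model, deploy) is a hypothetical placeholder for the platform's actual modules:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    network: str = "hrnet"         # network structure / backbone choice
    learning_rate: float = 1e-3    # training parameters set by the user
    iterations: int = 50_000
    loss: str = "cross_entropy"
    optimizer: str = "adam"

def train_model(samples, config: TrainConfig):
    """Placeholder for the platform-side training module."""
    ...

def deploy(model, target: str = "cloud"):
    """Placeholder for one-click deployment (toolbox / OpenAPI access)."""
    ...

def train_and_deploy(samples, config: TrainConfig):
    model = train_model(samples, config)   # model training module
    return deploy(model, target="cloud")   # endpoint reachable via toolbox/OpenAPI
```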
In the sample production module, users can utilize the platform’s built-in public sample datasets as well as upload their own samples. To facilitate the creation of high-quality datasets, the platform offers sample annotation tools that enable users to set category labels and quickly annotate training samples manually using tools such as automatic clipping, intelligent selection, and manual framing, based on pre-cut tiles. With the prepared training sample set, the model training module allows users to choose the network structure and backbone according to the model type and to complete training by setting parameters such as the learning rate, number of iterations, loss function, and optimizer. After training, users can deploy the model on the platform with a single click, making it accessible through both the toolbox and OpenAPI.

Models trained on custom samples with the network structures provided by the training system may nevertheless be limited in efficacy by the quality of the sample set. Model performance is particularly susceptible when sample data are biased, inadequate, or marred by incorrect annotations, which can prevent even the most sophisticated neural network architectures from reaching SOTA benchmarks. Therefore, while custom sample training has irreplaceable value for specific tasks, the introduction of third-party or pre-trained models becomes particularly important when pursuing higher accuracy and generalization. Third-party models often come from major research institutions, universities, enterprises, and the open-source community and may already have demonstrated excellent performance in specific fields or tasks. These models are usually pre-trained on large-scale, diversified datasets with precise annotations, possessing robust feature extraction and generalization capabilities that can save users significant time in data preparation and model training. The AI Earth platform supports the integration of such third-party models from sources such as Model Zoo, Hugging Face, and ModelScope, as well as custom professional models tailored by individuals. The platform provides a unified access pipeline, simplifying and standardizing the integration process. Thanks to predefined interfaces and a modular design, users can embed third-party models into their existing workflows without concerning themselves with the underlying complexities. Once the model pipeline is constructed, users can deploy the model either locally or on the cloud platform. The local mode is limited to on-site computational resources, while cloud deployment can leverage the elastic computing resources of the cloud platform, offering more robust model inference capabilities.

§ CHALLENGES AND FUTURE WORK

Compared to traditional desktop-based remote sensing software, the principal advantage of cloud-based remote sensing platforms resides in their capacity to harness the extensive computational infrastructure and the vast amount of data intrinsic to cloud environments. Users only need to submit their research requests via the platform's API interfaces, without concerning themselves with underlying data storage conventions, distributed computing logic, or the management of computing resources. This allows users to conduct research over larger data volumes, broader spatial extents, and longer temporal sequences.
However, precisely because of this high level of integration, users are confined to the data models and computational paradigms abstracted by the platform and cannot control the actual computing process, which presents challenges for users wishing to freely expand their computing capabilities. Additionally, the deployment of cloud platforms involves various cloud infrastructures, which necessitates consideration of network resources and computational security, among other issues, imposing certain limitations. This section discusses some of the challenges encountered during the construction of the AI Earth platform, limitations that users should be aware of, and potential future developments of remote sensing cloud platforms.

§.§ What Are the Scaling Limits?

User-initiated computations on the AI Earth platform are executed on Alibaba Cloud’s Elastic Compute Service, where the platform manages resource allocation and task scheduling for the submitted jobs. The allocation primarily considers the number of tasks the user needs to execute and the volume of data to be loaded, after which a request is made to the data center to start the required number of compute nodes. These nodes may be distributed across different data centers; conceptually, the computation occurs not on a single supercomputer but across many smaller clusters. Consequently, during data loading and computation, the computing system cannot simply be designed around single-machine processing logic. It is essential to embrace distributed computing paradigms, starting with a distributed data loading scheme and then implementing distributed computing functions on top of the distributed data model. However, owing to the diversity of computational analyses in remote sensing, which involve spatiotemporal analysis, some calculations are difficult to adapt to a generalized distributed computing framework. As a result, certain computational functions remain constrained by the size of the available computing resources.

The AI Earth platform offers two computing modes, On-the-Fly and Batch, which differ in how computational resources are used. The On-the-Fly mode is designed for immediate task execution and real-time delivery of results by funneling computations into a shared resource pool. In contrast, Batch computation employs an independent computing scheme, where each user’s tasks are executed within isolated computational resources, with fewer restrictions on resource use. The On-the-Fly mode requires users to establish a connection with the platform’s server via a web browser. Taking into account the occupation of computing resources and the Alibaba Cloud gateway’s timeout limitations, this mode supports real-time computations lasting no longer than 300 seconds. Furthermore, because of the shared resource pool, and to prevent a single user from monopolizing excessive computational resources and hindering other users’ requests, the platform limits each user to 10 concurrent tasks.
Under these constraints, it is feasible to calculate the annual maximum NDVI values using Landsat 8 images (225 scenes in one year) for Zhejiang province in China, and to compute the average NDVI values for each city region at a resolution of 30 meters.

Batch computations are not affected by the interactive limits, allowing tasks to be executed on a much larger scale. However, the amount of data each individual machine can hold is still limited. Particularly when loading tiles for multi-spectral or time series analysis, a large number of values may be used or generated at the same pixel location, potentially exceeding the memory limits of a compute node. To mitigate the risk of memory overflow, the platform sets a stack depth of approximately 2,000 bytes per pixel. However, this safeguard only takes effect during computation; users cannot set it when submitting requests, which poses a significant challenge for them. Moreover, to ensure the stability of data transmission and the cache system between the distributed computing master and worker nodes, the size of individual cacheable objects is limited to 100 MB. This limitation may restrict the amount of data that certain aggregation functions can process, for example when using sampling extraction functions to obtain training samples for ML.

§.§ How Does the Data Model Scale?

The AI Earth platform adopts a distributed data model to load user-requested data, effectively leveraging the platform’s horizontal scalability to handle computational tasks over larger datasets. In particular, the platform relies on a tile store for storing and retrieving tile data, which significantly enhances the parallelism of pixel-based computations. This data model is especially well suited to per-pixel and limited-neighborhood operations, such as band arithmetic, morphological operations, spectral unmixing, and texture analysis. For long-term time series analysis, it is likewise possible to construct a stack of values from different stages for each pixel, as the analysis usually does not require other pixels within the neighborhood. However, remote sensing computation and analysis are often complex, and certain functions may not admit truly parallel computation. These harder-to-parallelize processes typically arise when calculating a particular pixel requires global image characteristics or local features from a large neighborhood. Since the data are loaded onto different compute nodes, computing global or local features would generate substantial data transmission, contravening the original intent of distributed computing.

When users rely on the built-in API functions of the platform, they cannot access the input data used in the computation. Therefore, to enhance the capability of user-defined functions, the platform exposes data access interfaces to users. Data access still follows the tile-based approach, which means that when users need to aggregate pixel values over a large area for regional statistical analysis, new data models may be required. This is particularly the case for unsupervised clustering, mask analysis, and spatial-domain matrix operations. Defining multiple data models to lower the barrier to entry and to meet various computational needs is an ongoing effort that the AI Earth platform continues to pursue.
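As one illustration of the aggregation pattern discussed above, a zonal mean can be expressed as a MapReduce-style reduction over tiles. The sketch below is plain NumPy and only mimics the UDAF pattern; it is not the platform's actual interface:

```python
import numpy as np

def zonal_mean_over_tiles(tiles, zone_masks):
    """MapReduce-style zonal mean: each tile contributes partial (sum, count)
    states per zone; a final reduce merges them, so no single worker ever
    needs the full raster in memory (mirroring the UDAF state repository)."""
    partial = {}  # zone id -> [running sum, running count]
    for tile, mask in zip(tiles, zone_masks):        # "map" over tiles
        for zone in np.unique(mask):
            vals = tile[mask == zone]
            state = partial.setdefault(int(zone), [0.0, 0])
            state[0] += float(vals.sum())
            state[1] += int(vals.size)
    return {z: s / c for z, (s, c) in partial.items() if c}  # "reduce"

# Two synthetic 256x256 tiles split between zones 0 and 1:
rng = np.random.default_rng(1)
tiles = [rng.normal(size=(256, 256)) for _ in range(2)]
masks = [np.indices((256, 256)).sum(axis=0) % 2 for _ in range(2)]
print(zonal_mean_over_tiles(tiles, masks))
```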
§.§ How Will Remote Sensing Large Models Evolve?

Recent advances in DL technology, including architectural and performance refinements of Transformer-based models and the introduction of diffusion models, have propelled Large Vision Segmentation Models (LVSMs) to the forefront of DL research. Prominent AI research institutions worldwide have released several LVSMs, which have achieved considerable success in image editing. However, training LVSMs requires hundreds of millions of training samples, and data cleaning also has a significant impact on the quality of the resulting models. Considering the high cost of obtaining remote sensing imagery and the diversity of imaging modalities, creating massive training datasets is exceptionally challenging, and designing and training Remote Sensing LVSMs (RS-LVSMs) entirely from such data poses an enormous challenge. Consequently, current research on RS-LVSMs predominantly concentrates on adapting and transferring pre-existing LVSMs, developed in the computer vision domain, to the nuances and complexities of remote sensing data.

The AIE-SEG large model provided on the AI Earth platform is based on the pretrained SEEM model, fine-tuned with proprietary remote sensing training data. With tens of millions of training images, the model achieves satisfactory results. Currently, to remain compatible with the input dimensions required by SEEM, only the red, green, and blue bands are used for training, which underutilizes the multispectral features of remote sensing imagery. Multi-spectral data can provide more information about surface materials and phenomena than RGB alone. For instance, the NIR band is particularly sensitive to vegetation and can be used to assess vegetation health, while the shortwave infrared (SWIR) band helps distinguish minerals from vegetation and detect moisture content. The absence of these bands limits the model’s performance in certain application domains, especially in scenarios requiring specific spectral characteristics for the identification and classification of land features. Additionally, AIE-SEG currently only recognizes optical imagery and cannot process microwave imagery. Fusion of multi-source data has long been a research focus in remote sensing; therefore, how to integrate multi-source data into LVSMs will be key to enhancing the recognition capabilities and application scope of these models.

LVSMs typically use prompts such as clicks, bounding boxes, and text to designate the targets for segmentation. Users can correct the segmentation results and feed the revised results back into the model to obtain more accurate segmentations. However, because a single remote sensing image covers a large area, the segmentation process can be time-consuming, making real-time correction challenging. Exploring how to increase human intervention during the segmentation process and feed the corrected results back to the model is therefore highly valuable. Additionally, owing to the complexity of surface features and the phenomena of the same material exhibiting variable spectra and different materials exhibiting similar spectra, RS-LVSMs cannot learn all surface features during training.
Hence, incorporating human intervention during the identification stage can also help the model learn more feature information about the target to be segmented.

§ CONCLUSION

Extracting surface features from remote sensing data for the investigation of resources, environment, ecology, and climate development and change has become a primary application of Earth observation missions. Understanding the evolution of landforms over larger spatial extents requires massive remote sensing data as the basis for analytical computations. Therefore, to promote the application of cloud computing in remote sensing, this paper introduces the AI Earth intelligent computing cloud platform constructed by our research team. The platform provides a variety of publicly available common satellite datasets as well as multiple global product datasets, with data management conforming to the STAC standard. In terms of general computing capabilities, the platform offers up to 440 API functions; in addition, users can develop UDFs and submit them to the platform's server for execution, which enhances openness in function design. Compared to existing remote sensing cloud platforms, the AI Earth platform is better integrated with AI, providing services for ML, DL, and AIE-SEG. Specifically, the AIE-SEG model deployed on the platform can meet users’ application needs for target extraction, land cover classification, and change detection. The sustained evolution of the AI Earth platform will bolster the integration of intelligent computing within remote sensing application research.

§ ACKNOWLEDGMENTS

The authors are very grateful to the scientists and practitioners who provided valuable suggestions for the construction and development of the AI Earth platform. Thanks to the rest of the AI Earth team: Hang Xia, Ci Song, Hualong Zhang, Diao Zhang, Quan Yu, Lijun Guan, Yixuan Zhu, Bin Xu, Mingyang Chen, Linlin Shen, Hao Luo, Yuan Gong, Dongyang Li, Shang Liu, Tingting Guo, Qiang Chen, Mengting Zhang, Tengfei Xue, Duoduo Hu. Finally, we would like to thank the providers of the hundreds of public datasets in AI Earth; in particular, NASA, USGS, NOAA, and ESA, whose enlightened open data policies and practices are responsible for the bulk of the data in our catalog.

§ REFERENCES

[1] Ma, Y.; Chen, S.; Ermon, S.; Lobell, D.B. Transfer learning in environmental remote sensing. Remote Sensing of Environment 2024, 301, 113924.
[2] Wang, D.; Xu, H.; Shi, Y.; Ding, Z.; Deng, Z.; Liu, Z.; Xu, X.; Lu, Z.; Wang, G.; Cheng, Z. The groundwater potential assessment system based on cloud computing: A case study in islands region. Computer Communications 2021, 178, 83-97.
[3] Kannadasan, R.; Prabakaran, N.; Boominathan, P.; Krishnamoorthy, A.; Naresh, K.; Sivashanmugam, G. High performance parallel computing with cloud technologies. Procedia Computer Science 2018, 132, 518-524.
[4] Zhang, S.; Yan, H.; Chen, X. Research on key technologies of cloud computing. Physics Procedia 2012, 33, 1791-1797.
[5] Soni, D.; Kumar, N. Machine learning techniques in emerging cloud computing integrated paradigms: A survey and taxonomy. Journal of Network and Computer Applications 2022, 205, 103419.
[6] Santoro, M.; Mazzetti, P.; Nativi, S. Virtual earth cloud: a multi-cloud framework for enabling geosciences digital ecosystems. International Journal of Digital Earth 2023, 16(1), 43-65.
[7] Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R.
Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment 2017, 202, 18-27.
[8] Microsoft Open Source; Emanuele, R.; Morris, D.; Augspurger, T. microsoft/PlanetaryComputer: October 2022. Zenodo 2022, doi:10.5281/zenodo.7261897.
[9] Gomes, V.C.; Queiroz, G.R.; Ferreira, K.R. An overview of platforms for big earth observation data management and analysis. Remote Sensing 2020, 12, 1253.
[10] Shen, W.; Zhang, J.; Wang, K.; Zhang, Z. Identifying the spatio-temporal dynamics of regional ecological risk based on Google Earth Engine: A case study from Loess Plateau, China. Science of The Total Environment 2023, 873, 162346.
[11] Cheng, K.; Su, Y.; Guan, H.; Tao, S.; Ren, Y.; Hu, T.; Ma, K.; Tang, Y.; Guo, Q. Mapping China’s planted forests using high resolution imagery and massive amounts of crowdsourced samples. ISPRS Journal of Photogrammetry and Remote Sensing 2023, 196, 356-371.
[12] Wang, R.; Ding, J.; Ge, X.; Wang, J.; Qin, S.; Tan, J.; Han, L.; Zhang, Z. Impacts of climate change on the wetlands in the arid region of Northwestern China over the past 2 decades. Ecological Indicators 2023, 149, 110168.
[13] Pérez-Cutillas, P.; Pérez-Navarro, A.; Conesa-García, C.; Zema, D.A.; Amado-Álvarez, J.P. What is going on within google earth engine? A systematic review and meta-analysis. Remote Sensing Applications: Society and Environment 2023, 29, 100907.
[14] Tamiminia, H.; Salehi, B.; Mahdianpari, M.; Quackenbush, L.; Adeli, S.; Brisco, B. Google Earth Engine for geo-big data applications: A meta-analysis and systematic review. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 164, 152-170.
[15] Zhao, X.; Xia, H.; Pan, L.; Song, H.; Niu, W.; Wang, R.; Li, R.; Bian, X.; Guo, Y.; Qin, Y. Drought monitoring over Yellow River basin from 2003–2019 using reconstructed MODIS land surface temperature in Google Earth Engine. Remote Sensing 2021, 13, 3748.
[16] Venkatappa, M.; Sasaki, N.; Han, P.; Abe, I. Impacts of droughts and floods on croplands and crop production in Southeast Asia: An application of Google Earth Engine. Science of the Total Environment 2021, 795, 148829.
[17] Wang, W.; Samat, A.; Ge, Y.; Ma, L.; Tuheti, A.; Zou, S.; Abuduwaili, J. Quantitative soil wind erosion potential mapping for Central Asia using the Google Earth Engine platform. Remote Sensing 2020, 12, 3430.
[18] Han, L.; Ding, J.; Wang, J.; Zhang, J.; Xie, B.; Hao, J. Monitoring oasis cotton fields expansion in arid zones using the Google Earth Engine: A case study in the Ogan-Kucha River oasis, Xinjiang, China. Remote Sensing 2022, 14, 225.
[19] Wulder, M.A.; Roy, D.P.; Radeloff, V.C.; Loveland, T.R.; Anderson, M.C.; Johnson, D.M.; Healey, S.; Zhu, Z.; Scambos, T.A.; Pahlevan, N. Fifty years of Landsat science and impacts. Remote Sensing of Environment 2022, 280, 113195.
[20] Adiri, Z.; Lhissou, R.; El Harti, A.; Jellouli, A.; Chakouri, M. Recent advances in the use of public domain satellite imagery for mineral exploration: A review of Landsat-8 and Sentinel-2 applications. Ore Geology Reviews 2020, 117, 103332.
[21] Han, W.; Zhang, X.; Wang, Y.; Wang, L.; Huang, X.; Li, J.; Wang, S.; Chen, W.; Li, X.; Feng, R. A survey of machine learning and deep learning in remote sensing of geological environment: Challenges, advances, and opportunities. ISPRS Journal of Photogrammetry and Remote Sensing 2023, 202, 87-113.
[22] Sawant, S.; Garg, R.D.; Meshram, V.; Mistry, S. Sen-2 LULC: Land use land cover dataset for deep learning approaches.
Data in Brief 2023, 51, 109724.
[23] Alabi, T.R.; Abebe, A.T.; Chigeza, G.; Fowobaje, K.R. Estimation of soybean grain yield from multispectral high-resolution UAV data with machine learning models in West Africa. Remote Sensing Applications: Society and Environment 2022, 27, 100782.
[24] Jiang, F.; Sun, H.; Ma, K.; Fu, L.; Tang, J. Improving aboveground biomass estimation of natural forests on the Tibetan Plateau using spaceborne LiDAR and machine learning algorithms. Ecological Indicators 2022, 143, 109365.
[25] Jiang, Z.; Yang, S.; Liu, Z.; Xu, Y.; Xiong, Y.; Qi, S.; Pang, Q.; Xu, J.; Liu, F.; Xu, T. Coupling machine learning and weather forecast to predict farmland flood disaster: A case study in Yangtze River basin. Environmental Modelling & Software 2022, 155, 105436.
[26] Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J. Recent advances in convolutional neural networks. Pattern Recognition 2018, 77, 354-377.
[27] Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, 2013; pp. 1310-1318.
[28] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Communications of the ACM 2020, 63, 139-144.
[29] Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015; pp. 3431-3440.
[30] Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III; pp. 234-241.
[31] Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence 2017, 40, 834-848.
[32] Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019; pp. 5693-5703.
[33] Liu, S.; Wang, H.; Hu, Y.; Zhang, M.; Zhu, Y.; Wang, Z.; Li, D.; Yang, M.; Wang, F. Land use and land cover mapping in China using multi-modal fine-grained dual network. IEEE Transactions on Geoscience and Remote Sensing 2023.
[34] Kirillov, A.; Wu, Y.; He, K.; Girshick, R. PointRend: Image segmentation as rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020; pp. 9799-9808.
[35] Yang, X.; Yan, J.; Feng, Z.; He, T. R3Det: Refined single-stage detector with feature refinement for rotating object. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021; pp. 3163-3171.
[36] Chen, L.; Qian, Q.; Li, H. Semi-anchored detector for one-stage object detection. arXiv preprint arXiv:2009.04989, 2020.
[37] Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020; pp. 10781-10790.
[38] Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y.
Segment anything. arXiv preprint arXiv:2304.02643, 2023.
[39] Zou, X.; Yang, J.; Zhang, H.; Li, F.; Li, L.; Gao, J.; Lee, Y.J. Segment everything everywhere all at once. arXiv preprint arXiv:2304.06718, 2023.
[40] Wang, X.; Zhang, X.; Cao, Y.; Wang, W.; Shen, C.; Huang, T. SegGPT: Segmenting everything in context. arXiv preprint arXiv:2304.03284, 2023.
[41] Hu, Y.; Yuan, J.; Wen, C.; Lu, X.; Li, X. RSGPT: A remote sensing vision language model and benchmark. arXiv preprint arXiv:2307.15266, 2023.
[42] Dai, W.; Li, J.; Li, D.; Tiong, A.M.H.; Zhao, J.; Wang, W.; Li, B.; Fung, P.; Hoi, S. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
[43] Yao, J.; Wu, S.; Cao, Y.; Wei, J.; Tang, X.; Hu, L.; Wu, J.; Yang, H.; Yang, J.; Ji, X. Dry deposition effect of urban green spaces on ambient particulate matter pollution in China. Science of The Total Environment 2023, 900, 165830.
[44] Fan, Q.; Shi, Y.; Mutale, B.; Cong, N. Spatiotemporal gravitational evolution of the night land surface temperature: An empirical study based on night lights. Remote Sensing 2023, 15, 4347.
[45] Xu, C.; Du, X.; Jian, H.; Dong, Y.; Qin, W.; Mu, H.; Yan, Z.; Zhu, J.; Fan, X. Analyzing large-scale Data Cubes with user-defined algorithms: A cloud-native approach. International Journal of Applied Earth Observation and Geoinformation 2022, 109, 102784.
Four-dimensional Floquet topological insulator with an emergent second Chern number

Bin Zhou

Department of Physics, Hubei University, Wuhan 430062, China [email protected]
Department of Physics, Hubei University, Wuhan 430062, China [email protected]
Department of Physics, Hubei University, Wuhan 430062, China
Key Laboratory of Intelligent Sensing System and Security of Ministry of Education, Hubei University, Wuhan 430062, China

January 14, 2024

Floquet topological insulators have been widely investigated in lower-dimensional systems. However, Floquet topological insulators in higher-dimensional systems remain unexplored. In this work, we study the effects of time-periodic driving in a four-dimensional (4D) normal insulator, focusing on topological phase transitions at the resonant quasienergy gap. We consider two types of time-periodic driving: a time-periodic onsite potential and a time-periodic vector potential. We reveal that both types of time-periodic driving can transform the 4D normal insulator into a 4D Floquet topological insulator characterized by an emergent second Chern number. Moreover, we find that the topological phase of the 4D system can be modulated by tuning the strength of the time-periodic driving. Our work will be helpful for future investigations of Floquet topological insulators in higher dimensions.

§ INTRODUCTION

In the past decades, topological matter has become an important topic in condensed matter physics <cit.>. Topological insulators (TIs) in d-dimensional space possess a d-dimensional gapped bulk and (d-1)-dimensional gapless boundary states. Recently, higher-dimensional (d>3) TIs have attracted extensive attention for exploring higher-dimensional physics, such as the four-dimensional (4D) TI <cit.>. The 4D TI hosts three-dimensional (3D) gapless boundary states and is characterized by a topological invariant, the second Chern number <cit.>. The 4D TI cannot naturally arise in condensed matter systems due to limited dimensionality. In artificial systems, synthetic dimensions <cit.> and mapping 4D models onto low-dimensional systems <cit.> are promising schemes for implementing 4D TIs. Experimentally, the flexibility of atomic and photonic systems has inspired proposals to realize 4D topological physics <cit.>. Furthermore, since electric circuits are defined in terms of electronic elements and their interconnections, lattices with genuine 4D structures can be constructed by applying appropriate capacitive and inductive connections <cit.>.

Floquet engineering is a controlled protocol to induce or manipulate exotic topological properties by time-periodic driving <cit.>. Time-periodic driving can induce topologically nontrivial matter in trivial static systems, and such topological matter is known as a Floquet topological insulator (FTI) <cit.>. In lower-dimensional (d≤3) systems, considerable research on FTIs has been reported, including the discovery of many intriguing topological phases that are absent in static systems <cit.>. Up to now, FTIs have been realized experimentally in solid-state <cit.>, photonic <cit.>, acoustic <cit.>, electric circuit <cit.>, and cold atom systems <cit.>. A question then naturally arises: can the FTI phase occur in 4D systems?

In this paper, we answer the above question and provide a scheme for inducing 4D FTIs via two types of time-periodic driving.
First, we study the influence of the time-periodic onsite potential V(τ) on the 4D system. When the frequency ω of the time-periodic onsite potential is smaller than the bandwidth E_W of the static system, the time-periodic onsite potential can induce a phase transition from the trivial static system to a 4D TI. This 4D TI possesses 3D gapless boundary states and is characterized by a nonzero second Chern number C_2=-3; we dub it the 4D FTI. Moreover, when the amplitude V of the time-periodic onsite potential exceeds a critical value, we find that the time-periodic onsite potential transforms the topologically nontrivial phase with C_2=-3 into another topologically nontrivial phase with C_2=3. Second, we find that the time-periodic vector potential A(τ) can induce the emergence of a 4D FTI with C_2=2. However, when the amplitude A exceeds a critical value, the time-periodic vector potential destroys the topological properties of the Floquet system, accompanied by the decay of the second Chern number from C_2=2 to C_2=0.

The rest of the paper is organized as follows. In Sec. <ref>, we introduce a time-periodic onsite potential into the 4D Dirac model and demonstrate the method for calculating the second Chern number. Then, we present a 4D FTI driven by the time-periodic onsite potential in Sec. <ref>. In Sec. <ref>, we investigate the topological phase transition of the 4D FTI driven by the time-periodic vector potential. Finally, we summarize our conclusions in Sec. <ref>.

§ TIME-PERIODIC ONSITE POTENTIAL

§.§ Model

We first study the influence of the time-periodic onsite potential on the 4D system. The time-dependent 4D TI model is given by

H(k,τ) = H(k) + V(τ).

The first term is a static Hamiltonian describing the 4D TI <cit.>,

H(k) = sin(k_x)Γ_2 + sin(k_y)Γ_3 + sin(k_z)Γ_4 + sin(k_w)Γ_5 + m(k)Γ_1,

where the Dirac matrices Γ_j = (σ_x⊗σ_0, σ_y⊗σ_0, σ_z⊗σ_x, σ_z⊗σ_y, σ_z⊗σ_z), j = 1, 2, 3, 4, 5, satisfy the anticommutation relations {Γ_i, Γ_j} = 2δ_ij. Here m(k) = m + c[cos(k_x) + cos(k_y) + cos(k_z) + cos(k_w)], m is the Dirac mass, and c denotes the nearest-neighbor hopping amplitude; in subsequent calculations, c = 1. The second term represents the time-periodic onsite potential V(τ) = V cos(ωτ)Γ_1, where V is the amplitude of the time-periodic onsite potential and ω is its frequency.

The second Chern number is used to characterize the topological properties of 4D TIs <cit.>. It is given by the following formula <cit.>:

C_2 = (1/4π^2) ∫ dk Tr[Ω_xyΩ_zw + Ω_wxΩ_zy + Ω_zxΩ_yw],

with the non-Abelian Berry curvature

Ω_mn^αβ = ∂_m a_n^αβ - ∂_n a_m^αβ + i[a_m, a_n]^αβ,

where m, n = x, y, z, w, and the Berry connection of the occupied bands is a_m^αβ = -i⟨u^α(k)|∂/∂k_m|u^β(k)⟩, with |u^α(k)⟩ denoting the occupied eigenstates, α = 1, …, N_occ. When the amplitude of the time-periodic onsite potential V = 0, the static system exhibits different topological phases for different values of the Dirac mass:

C_2(m) =
  0,  m < -4;
  1,  -4 < m < -2;
  -3, -2 < m < 0;
  3,  0 < m < 2;
  -1, 2 < m < 4;
  0,  m > 4.

The magnitude |C_2| of the second Chern number gives the number of 3D gapless boundary states of the 4D system.

Now we investigate the effect of the time-periodic onsite potential on the 4D system. Floquet theory is often used to study time-dependent systems <cit.>. Based on Floquet theory, we can convert the time-dependent Hamiltonian H(τ) into a time-independent Floquet Hamiltonian H_F by means of a Fourier transformation.
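Anticipating the block structure derived explicitly below, the truncated construction can be sketched numerically as follows. This is a minimal illustration: the truncation order L and the drive frequency ω = 8 (chosen below the bandwidth E_W = 20 of the static system) are assumed values for demonstration, not necessarily those used to produce the figures.

```python
import numpy as np

# Pauli matrices and the Dirac matrices Gamma_1..Gamma_5 defined above.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
G = [np.kron(sx, s0), np.kron(sy, s0), np.kron(sz, sx),
     np.kron(sz, sy), np.kron(sz, sz)]

def h_static(k, m=-6.0, c=1.0):
    """Static 4D Dirac Hamiltonian H(k) in the trivial phase m = -6."""
    kx, ky, kz, kw = k
    mk = m + c * (np.cos(kx) + np.cos(ky) + np.cos(kz) + np.cos(kw))
    return (np.sin(kx) * G[1] + np.sin(ky) * G[2] + np.sin(kz) * G[3]
            + np.sin(kw) * G[4] + mk * G[0])

def floquet_hamiltonian(k, V, omega, L=5):
    """Truncated H_F with Floquet sectors l = -L..L: diagonal blocks
    H_0 + l*omega and off-diagonal blocks (V/2)*Gamma_1 from the cos drive."""
    n = 2 * L + 1
    HF = np.zeros((4 * n, 4 * n), dtype=complex)
    H0, Vb = h_static(k), 0.5 * V * G[0]
    for i, l in enumerate(range(-L, L + 1)):
        HF[4*i:4*i+4, 4*i:4*i+4] = H0 + l * omega * np.eye(4)
        if i + 1 < n:  # couple neighboring Floquet sectors
            HF[4*i:4*i+4, 4*(i+1):4*(i+1)+4] = Vb
            HF[4*(i+1):4*(i+1)+4, 4*i:4*i+4] = Vb.conj().T
    return HF

# Quasienergies at one k point, restricted to the resonant window (0, omega):
eps = np.linalg.eigvalsh(floquet_hamiltonian((0.3, 0.2, 0.1, 0.0), V=2.0, omega=8.0))
print(eps[(eps > 0) & (eps < 8.0)])
```

In practice, L is increased until the quasienergies in the window of interest stop changing, which is the convergence criterion mentioned below.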
The Floquet Hamiltonian H_F is an infinite matrix of the form

H_F =
[ ⋱    ⋮       ⋮      ⋮      ⋱
  ⋯  H_0-ω   H_+1   H_+2    ⋯
  ⋯  H_-1    H_0    H_+1    ⋯
  ⋯  H_-2    H_-1   H_0+ω   ⋯
  ⋱    ⋮       ⋮      ⋮      ⋱ ],

where

H_l = (1/T) ∫_0^T dτ H(τ) e^{ilωτ},

ω and T = 2π/ω represent the frequency and period of the time-periodic onsite potential, respectively, and l = 0, ±1, ±2, ⋯. After applying the Fourier transformation to the time-dependent 4D TI model H(k,τ) [Eq. (<ref>)], the block matrices of the Floquet Hamiltonian H_F read

H_0 = sin(k_x)Γ_2 + sin(k_y)Γ_3 + sin(k_z)Γ_4 + sin(k_w)Γ_5 + [m + c∑_{n=x,y,z,w} cos(k_n)]Γ_1,
H_-1 = H_+1^† = (V/2)Γ_1,
H_{|l|≥2} = 0.

In subsequent calculations, the infinite Floquet Hamiltonian is truncated at an order for which the results are converged, and the Fermi energy is set to ω/2.

§.§ Periodically driven 4D FTI

In the trivial phase (m = -6), the static system hosts a trivial bulk gap, and the bandwidth of the system is E_W = 20. We can study the effect of the time-periodic onsite potential V(τ) on the 4D trivial system by solving the Floquet Hamiltonian H_F, H_F|ψ^α⟩ = ε_α|ψ^α⟩. In H_F, the diagonal block H_0 ± ω is a copy of the original block H_0 shifted in energy by ω. The bands of the diagonal block H_0 (H_0 ± ω) are referred to as the undriven (driven) bands. As shown by the grey dashed lines in Figs. <ref>(a) and <ref>(b), the driven and undriven bands of H_F cross each other when ω < E_W. After the amplitude V of the time-periodic onsite potential is turned on, the off-diagonal blocks H_l (l ≠ 0) hybridize the resonant quasienergies and gap them out, as shown by the red solid line in Fig. <ref>(b).

To explore the phase transition of the 4D Floquet system, we calculate the bulk gap in the resonance region as a function of the amplitude V, as shown in Fig. <ref>(c). When V = 0, the undriven conduction bands and the driven valence bands mix with each other, so the bulk gap of the system is zero. This bulk gap in the resonance region is opened by the time-periodic onsite potential for V > 0. In Fig. <ref>(d), we show the second Chern number as a function of V; one finds that the bulk gap opened by the time-periodic onsite potential in the resonance region is topologically nontrivial. In the interval V ∈ (0, V_c ≈ 6.785), the second Chern number maintains a quantized plateau C_2 = -3. When V = V_c, the bulk gap closes at the points K_1 = (k_x = π, k_y = π, k_z = 0, k_w = 0), K_2 = (π, 0, π, 0), K_3 = (π, 0, 0, π), K_4 = (0, π, π, 0), K_5 = (0, π, 0, π), and K_6 = (0, 0, π, π). When V > V_c, the bulk gap reopens and the system transitions from the topologically nontrivial phase with C_2 = -3 to another topologically nontrivial phase with C_2 = 3.

In Figs. <ref>(a) and <ref>(b), we show the quasienergy spectra of the system with open boundary conditions along the x direction. The color bar represents the natural logarithm of the density of states, ln(ρ), where ρ is normalized to 1. One finds three gapless Dirac points in the resonance regions and no Dirac points in the gap near ε = 0. These Dirac points are located at the high-symmetry points Y = (k_y = π, k_z = 0, k_w = 0), Z = (0, π, 0), and W = (0, 0, π). The system with three gapless Dirac points is dubbed a 4D FTI, characterized by a nonzero second Chern number C_2 = -3.
Moreover, in the other topologically nontrivial phase with C_2 = 3, there are three gapless Dirac points located at M_yz = (k_y = π, k_z = π, k_w = 0), M_yw = (π, 0, π), and M_zw = (0, π, π) when open boundary conditions are imposed along the x direction.

In the limit where the frequency ω is (nearly) resonant with the level spacing of the Hamiltonian, we can apply the rotating wave approximation (RWA) to analyze the time-dependent system <cit.>. For the time-dependent Hamiltonian H(k,τ) = H(k) + V cos(ωτ)Γ_1, we transform to the rotating frame via the unitary transformation

H_rot = U^†[H(k,τ) - i∂/∂τ]U,

where U = P_+ + P_- e^{iωτ}, and P_± are the projectors onto the unoccupied and occupied bands of H(k). In the rotating-frame Hamiltonian H_rot, we can neglect the fast-oscillating terms, as they quickly average to zero <cit.>. Hence, we obtain the time-independent Hamiltonian

H_RWA = H(k) + ωP_- + (V/2)(P_+Γ_1P_- + P_-Γ_1P_+),

where we have used the idempotence P_-P_- = P_-. In Fig. <ref>(c), we show the bulk gap of H_RWA as a function of V, labeled by the red solid dots. When the amplitude V of the time-periodic onsite potential is small, the results obtained from the rotating wave approximation coincide with those obtained by solving the Floquet Hamiltonian H_F.

§ TIME-PERIODIC VECTOR POTENTIAL

In this section, we apply a time-periodic vector potential A(τ) to the 4D Dirac model H(k); the time-dependent Hamiltonian ℋ(k,τ) is then given by

ℋ(k,τ) = ∑_{j=1}^{5} h_j(k,τ)Γ_j,

with

h_1(k,τ) = m + c∑_{n=x,y,z,w} cos[k_n + A cos(ωτ)],
h_2(k,τ) = sin[k_x + A cos(ωτ)],
h_3(k,τ) = sin[k_y + A cos(ωτ)],
h_4(k,τ) = sin[k_z + A cos(ωτ)],
h_5(k,τ) = sin[k_w + A cos(ωτ)],

where A is the amplitude of the time-periodic vector potential. After the Fourier transformation, the block matrices of the Floquet Hamiltonian ℋ_F read

ℋ_0 = [sin(k_x)Γ_2 + sin(k_y)Γ_3 + sin(k_z)Γ_4 + sin(k_w)Γ_5]𝒥_0(A) + m(k)Γ_1,
ℋ_-l = ∑_{j=1}^{5} h_{-l,j}Γ_j, ℋ_+l = ℋ_-l^†,

with

h_{-l,1} = (c/2)∑_{n=x,y,z,w}[e^{ik_n} + (-1)^l e^{-ik_n}]𝒥_l(A),
h_{-l,2} = -(i/2)[e^{ik_x} - (-1)^l e^{-ik_x}]𝒥_l(A),
h_{-l,3} = -(i/2)[e^{ik_y} - (-1)^l e^{-ik_y}]𝒥_l(A),
h_{-l,4} = -(i/2)[e^{ik_z} - (-1)^l e^{-ik_z}]𝒥_l(A),
h_{-l,5} = -(i/2)[e^{ik_w} - (-1)^l e^{-ik_w}]𝒥_l(A),

where m(k) = m + c[cos(k_x) + cos(k_y) + cos(k_z) + cos(k_w)]𝒥_0(A), and 𝒥_l(A) is the l-th Bessel function of the first kind, which enters through the Jacobi-Anger expansion e^{iA cos(ωτ)} = ∑_l i^l 𝒥_l(A) e^{ilωτ}.

By solving the Floquet Hamiltonian ℋ_F, we obtain the bulk quasienergy spectra for A = 0 and A = 1.5, shown in Figs. <ref>(a) and <ref>(b), with the horizontal axis representing k = k_n (n = x, y, z, w). The grey dashed line (red solid line) corresponds to A = 0 (A = 1.5). When A = 0, the Floquet Hamiltonian ℋ_F contains only diagonal blocks, and the bands of different blocks overlap each other in the interval ε ∈ (0, ω). After the time-periodic vector potential is applied, the band gap in the interval ε ∈ (0, ω) opens, as shown by the red solid line in Fig. <ref>(b). In Figs. <ref>(c) and <ref>(d), we show the band structure with open boundary conditions along the x direction. There are two gapless Dirac points in the gaps opened by the time-periodic vector potential, distributed on the diagonal (k = k_y = k_z = k_w) of the first Brillouin zone of the quasi-3D system. We emphasize that these gapless boundary states appear only in the resonance region ε ∈ (0, ω) and are absent in the gap near ε = 0.

In Fig. <ref>(a), we plot the bulk gap of the Floquet Hamiltonian ℋ_F as a function of A.
When 0 < A < 1.966, the bulk gap is opened by the time-periodic vector potential, and gapless boundary states reside in this gap. The bulk gap closes at A = 1.966 and reopens as the amplitude A increases further. To investigate the topological phase transition of the system, we show the evolution of the second Chern number C_2 with the amplitude A for different N in Fig. <ref>(b), where N^4 is the number of k points in the first Brillouin zone used to calculate the second Chern number. In the interval A ∈ (1, 1.966), the second Chern number exhibits a nonzero quantized plateau C_2 = 2. As the bulk gap closes at A = 1.966 and reopens, the second Chern number drops from the quantized value 2 to 0. However, one notes that for weak amplitudes (0 < A < 1), the second Chern number takes nonzero fractional values. In Fig. <ref>(c), we show the variation of the second Chern number C_2 with N. The value of the second Chern number oscillates towards C_2 = 2 as N increases. We fit the evolution of ln|C_2 - 2| with N, as shown by the dashed line in Fig. <ref>(c): ln|C_2 - 2| decays linearly as N increases, i.e., the second Chern number approaches C_2 = 2 exponentially. Figure <ref>(c) also shows that the decay rate of ln|C_2 - 2| depends on the amplitude A. When the number N of k points in the discretized first Brillouin zone is sufficiently large, the second Chern number approaches the integer value C_2 = 2, and the number of k points required for the calculation decreases as the amplitude A increases. In Fig. <ref>(d), we show the boundary band structure for A = 0.5, which implies that even a weak vector potential can induce topological boundary states.

§ CONCLUSION

In this paper, we have presented a scheme for inducing 4D FTIs via two types of time-periodic driving: a time-periodic onsite potential V(τ) and a time-periodic vector potential A(τ). First, we introduced a time-periodic onsite potential into the 4D Dirac model. For the topologically trivial static system, there is no boundary state in the bulk gap. When the frequency ω of the time-periodic onsite potential is less than the bandwidth E_W of the static system, the driven and undriven quasienergy bands overlap in the resonance region ε ∈ (0, ω). We find that the time-periodic onsite potential can open the bulk gap in the resonance region, producing 3D gapless boundary states in the trivial static system. The system with 3D gapless boundary states is characterized by an emergent second Chern number C_2 and is referred to as a 4D FTI. By numerically calculating the evolution of the second Chern number with the amplitude V of the time-periodic onsite potential, we find that once the driving is introduced, the second Chern number exhibits a quantized plateau with C_2 = -3. Furthermore, when the amplitude V increases to a critical value, the bulk gap closes, accompanied by a phase transition from the topological phase with C_2 = -3 to another topological phase with C_2 = 3.

Second, we studied the influence of the time-periodic vector potential A(τ) on the 4D system. Similar to the time-periodic onsite potential, the time-periodic vector potential can also open the bulk gap in the resonance region, with 3D gapless boundary states residing in this gap.
By calculating the variation of the bulk gap and the second Chern number with the amplitude A of the time-periodic vector potential, we showed that the vector potential can open the bulk gap and induce a topologically nontrivial phase with C_2 = 2. Moreover, when the amplitude A increases to a critical value, the bulk gap closes, accompanied by a phase transition from the 4D FTI phase with C_2 = 2 to the trivial phase with C_2 = 0.

Because of the limitation of spatial dimensions, 4D TIs are impossible to realize in real materials. Experimentally, based on proposals including synthetic dimensions and mapping high-dimensional models onto low-dimensional systems, 4D topological states have been realized in artificially engineered systems such as electronic circuits <cit.>, acoustic lattices <cit.>, and photonic crystals <cit.>. We therefore expect that the scheme proposed in this work for generating 4D FTIs can be realized by configuring tunable complex-phase elements in electronic circuits.

It is worth mentioning that in a companion work we investigate the effects of high-frequency time-periodic driving in a four-dimensional (4D) topological insulator. There, the frequency of the time-periodic driving is greater than the bandwidth of the static system, and the driven bands do not overlap with the undriven bands; we therefore focus on the topological phase transition in the off-resonant quasienergy gap. It is found that the second Chern number of 4D topological insulators can be modulated by tuning the amplitude of the time-periodic driving.

§ ACKNOWLEDGMENTS

B.Z. was supported by the NSFC (Grant No. 12074107), the program of outstanding young and middle-aged scientific and technological innovation team of colleges and universities in Hubei Province (Grant No. T2020001), and the innovation group project of the Natural Science Foundation of Hubei Province of China (Grant No. 2022CFA012). R.C. acknowledges the support of the NSFC (under Grant No. 12304195) and the Chutian Scholars Program in Hubei Province. Z.-R.L. was supported by the National Funded Postdoctoral Researcher Program (under Grant No. GZC20230751) and the Postdoctoral Innovation Research Program in Hubei Province (under Grant No. 351342).

§ REFERENCES

[1] M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).
[2] X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011).
[3] A. Bansil, H. Lin, and T. Das, Colloquium: Topological band theory, Rev. Mod. Phys. 88, 021004 (2016).
[4] C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, Rev. Mod. Phys. 88, 035005 (2016).
[4] C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, Rev. Mod. Phys. 88, 035005 (2016).
[5] F. D. M. Haldane, Nobel Lecture: Topological quantum matter, Rev. Mod. Phys. 89, 040502 (2017).
[6] X.-G. Wen, Colloquium: Zoo of quantum-topological phases of matter, Rev. Mod. Phys. 89, 041004 (2017).
[7] P. Wölfle, Quasiparticles in condensed matter systems, Rep. Prog. Phys. 81, 032501 (2018).
[8] S.-Q. Shen, Topological Insulators (Springer Singapore, 2017).
[9] B. A. Bernevig and T. L. Hughes, Topological Insulators and Topological Superconductors (Princeton University Press, 2013).
[10] S.-C. Zhang and J. Hu, A four-dimensional generalization of the quantum Hall effect, Science 294, 823 (2001).
[11] X.-L. Qi, T. L. Hughes, and S.-C. Zhang, Topological field theory of time-reversal invariant insulators, Phys. Rev. B 78, 195424 (2008).
[12] M. Mochol-Grzelak, A. Dauphin, A. Celi, and M. Lewenstein, Efficient algorithm to compute the second Chern number in four dimensional systems, Quantum Sci. Technol. 4, 014009 (2018).
[13] S. Sugawa, F. Salces-Carcoba, A. R. Perry, Y. Yue, and I. B. Spielman, Second Chern number of a quantum-simulated non-Abelian Yang monopole, Science 360, 1429 (2018).
[14] Y. Li, S.-C. Zhang, and C. Wu, Topological insulators with SU(2) Landau levels, Phys. Rev. Lett. 111, 186803 (2013).
[15] Y.-Q. Zhu, Z. Zheng, G. Palumbo, and Z. D. Wang, Topological electromagnetic effects and higher second Chern numbers in four-dimensional gapped phases, Phys. Rev. Lett. 129, 196602 (2022).
[16] F. Terrier and F. K. Kunst, Dissipative analog of four-dimensional quantum Hall physics, Phys. Rev. Res. 2, 023364 (2020).
[17] R. Chen, X.-X. Yi, and B. Zhou, Four-dimensional topological Anderson insulator with an emergent second Chern number, Phys. Rev. B 108, 085306 (2023).
[18] H. Weisbrich, R. Klees, G. Rastelli, and W. Belzig, Second Chern number and non-Abelian Berry phase in topological superconducting systems, PRX Quantum 2, 010310 (2021).
[19] A. Chen, Y. Guan, P. M. Lenggenhager, J. Maciejko, I. Boettcher, and T. Bzdušek, Symmetry and topology of hyperbolic Haldane models, Phys. Rev. B 108, 085114 (2023).
[20] O. Boada, A. Celi, J. I. Latorre, and M. Lewenstein, Quantum simulation of an extra dimension, Phys. Rev. Lett. 108, 133001 (2012).
[21] H. M. Price, O. Zilberberg, T. Ozawa, I. Carusotto, and N. Goldman, Four-dimensional quantum Hall effect with ultracold atoms, Phys. Rev. Lett. 115, 195303 (2015).
[22] D. Jukić and H. Buljan, Four-dimensional photonic lattices and discrete tesseract solitons, Phys. Rev. A 87, 013814 (2013).
[23] T. Ozawa, H. M. Price, N. Goldman, O. Zilberberg, and I. Carusotto, Synthetic dimensions in integrated photonics: From optical isolation to four-dimensional quantum Hall physics, Phys. Rev. A 93, 043827 (2016).
[24] Z.-G. Chen, W. Zhu, Y. Tan, L. Wang, and G. Ma, Acoustic realization of a four-dimensional higher-order Chern insulator and boundary-modes engineering, Phys. Rev. X 11, 011016 (2021).
[25] X.-D. Chen, F.-L. Shi, J.-W. Liu, K. Shen, X.-T. He, C. T. Chan, W.-J. Chen, and J.-W. Dong, Second Chern crystals with inherently non-trivial topology, Natl. Sci. Rev. 10, nwac289 (2022).
[26] Y. Wang, H. M. Price, B. Zhang, and Y. D. Chong, Circuit implementation of a four-dimensional topological insulator, Nat. Commun. 11, 2356 (2020).
[27] R. Yu, Y. X. Zhao, and A. P. Schnyder, 4D spinless topological insulator in a periodic electric circuit, Natl. Sci. Rev. 7, 1288 (2020).
[28] M. Lohse, C. Schweizer, H. M. Price, O. Zilberberg, and I. Bloch, Exploring 4D quantum Hall physics with a 2D topological charge pump, Nature (London) 553, 55 (2018).
[29] O. Zilberberg, S. Huang, J. Guglielmon, M. Wang, K. P. Chen, Y. E. Kraus, and M. C. Rechtsman, Photonic topological boundary pumping as a probe of 4D quantum Hall physics, Nature (London) 553, 59 (2018).
[30] Y. E. Kraus, Z. Ringel, and O. Zilberberg, Four-dimensional quantum Hall effect in a two-dimensional quasicrystal, Phys. Rev. Lett. 111, 226401 (2013).
[31] J. M. Edge, J. Tworzydło, and C. W. J. Beenakker, Metallic phase of the quantum Hall effect in four-dimensional space, Phys. Rev. Lett. 109, 135701 (2012).
[32] C. H. Lee, Y. Wang, Y. Chen, and X. Zhang, Electromagnetic response of quantum Hall systems in dimensions five and six and beyond, Phys. Rev. B 98, 094434 (2018).
[33] I. Petrides, H. M. Price, and O. Zilberberg, Six-dimensional quantum Hall effect and three-dimensional topological pumps, Phys. Rev. B 98, 125431 (2018).
[34] A. Chen, H. Brand, T. Helbig, T. Hofmann, S. Imhof, A. Fritzsche, et al., Hyperbolic matter in electrical circuits with tunable complex phases, Nat. Commun. 14, 622 (2023).
[35] W. Zhang, F. Di, X. Zheng, H. Sun, and X. Zhang, Hyperbolic band topology with non-trivial second Chern numbers, Nat. Commun. 14, 1083 (2023).
[36] H. Liu, P. Lai, H. Wang, H. Cheng, J. Tian, and S. Chen, Topological phases and non-Hermitian topology in photonic artificial microstructures, Nanophotonics 12, 2273 (2023).
[37] J. P. Dahlhaus, B. M. Fregoso, and J. E. Moore, Magnetization signatures of light-induced quantum Hall edge states, Phys. Rev. Lett. 114, 246802 (2015).
[38] V. Kaladzhyan, P. Simon, and M. Trif, Controlling topological superconductivity by magnetization dynamics, Phys. Rev. B 96, 020507 (2017).
[39] Y. H. Wang, H. Steinberg, P. Jarillo-Herrero, and N. Gedik, Observation of Floquet-Bloch states on the surface of a topological insulator, Science 342, 453 (2013).
[40] E. J. Sie, J. W. McIver, Y.-H. Lee, L. Fu, J. Kong, and N. Gedik, Valley-selective optical Stark effect in monolayer WS_2, Nat. Mater. 14, 290 (2015).
[41] F. Mahmood, C.-K. Chan, Z. Alpichshev, D. Gardner, Y. Lee, P. A. Lee, and N. Gedik, Selective scattering between Floquet-Bloch and Volkov states in a topological insulator, Nat. Phys. 12, 306 (2016).
[42] J. W. McIver, B. Schulte, F.-U. Stein, T. Matsuyama, G. Jotzu, G. Meier, and A. Cavalleri, Light-induced anomalous Hall effect in graphene, Nat. Phys. 16, 38 (2019).
[43] L. He, Z. Addison, J. Jin, E. J. Mele, S. G. Johnson, and B. Zhen, Floquet Chern insulators of light, Nat. Commun. 10, 4194 (2019).
[44] S. Ito, M. Schüler, M. Meierhofer, S. Schlauderer, J. Freudenstein, J. Reimann, et al., Build-up and dephasing of Floquet-Bloch bands on subcycle timescales, Nature 616, 696 (2023).
[45] S. Zhou, C. Bao, B. Fan, H. Zhou, Q. Gao, H. Zhong, et al., Pseudospin-selective Floquet band engineering in black phosphorus, Nature 614, 75 (2023).
[46] Y. T. Katan and D. Podolsky, Modulated Floquet topological insulators, Phys. Rev. Lett. 110, 016802 (2013).
[47] T. Li and H. Hu, Floquet non-Abelian topological insulator and multifold bulk-edge correspondence, Nat. Commun. 14, 6418 (2023).
[48] R. Chen, B. Zhou, and D.-H. Xu, Floquet Weyl semimetals in light-irradiated type-II and hybrid line-node semimetals, Phys. Rev. B 97, 155152 (2018).
[49] R. Chen, D.-H. Xu, and B. Zhou, Floquet topological insulator phase in a Weyl semimetal thin film with disorder, Phys. Rev. B 98, 235159 (2018).
[50] N. H. Lindner, G. Refael, and V. Galitski, Floquet topological insulator in semiconductor quantum wells, Nat. Phys. 7, 490 (2011).
[51] T. Oka and H. Aoki, Photovoltaic Hall effect in graphene, Phys. Rev. B 79, 081406 (2009).
[52] T. Kitagawa, E. Berg, M. Rudner, and E. Demler, Topological characterization of periodically driven quantum systems, Phys. Rev. B 82, 235114 (2010).
[53] Z. Gu, H. A. Fertig, D. P. Arovas, and A. Auerbach, Floquet spectrum and transport through an irradiated graphene ribbon, Phys. Rev. Lett. 107, 216601 (2011).
[54] M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, D. Podolsky, F. Dreisow, S. Nolte, M. Segev, and A. Szameit, Photonic Floquet topological insulators, Nature 496, 196 (2013).
[55] J. Cayssol, B. Dóra, F. Simon, and R. Moessner, Floquet topological insulators, Phys. Status Solidi RRL 7, 101 (2013).
[56] M. S. Rudner and N. H. Lindner, Band structure engineering and non-equilibrium dynamics in Floquet topological insulators, Nat. Rev. Phys. 2, 229 (2020).
[57] L. E. F. Foa Torres, P. M. Perez-Piskunow, C. A. Balseiro, and G. Usaj, Multiterminal conductance of a Floquet topological insulator, Phys. Rev. Lett. 113, 266801 (2014).
[58] L. Li, C. H. Lee, and J. Gong, Realistic Floquet semimetal with exotic topological linkages between arbitrarily many nodal loops, Phys. Rev. Lett. 121, 036401 (2018).
[59] Y. Peng and G. Refael, Floquet second-order topological insulators from nonsymmorphic space-time symmetries, Phys. Rev. Lett. 123, 016806 (2019).
[60] H. Hu, B. Huang, E. Zhao, and W. V. Liu, Dynamical singularities of Floquet higher-order topological insulators, Phys. Rev. Lett. 124, 057001 (2020).
[61] B. Huang and W. V. Liu, Floquet higher-order topological insulators with anomalous dynamical polarization, Phys. Rev. Lett. 124, 216601 (2020).
[62] H. Liu, T.-S. Xiong, W. Zhang, and J.-H. An, Floquet engineering of exotic topological phases in systems of cold atoms, Phys. Rev. A 100, 023622 (2019).
[63] T.-S. Xiong, J. Gong, and J.-H. An, Towards large-Chern-number topological phases by periodic quenching, Phys. Rev. B 93, 184306 (2016).
[64] A. Kumar, M. Rodriguez-Vega, T. Pereg-Barnea, and B. Seradjeh, Linear response theory and optical conductivity of Floquet topological insulators, Phys. Rev. B 101, 174314 (2020).
[65] T. Nag, V. Juričić, and B. Roy, Hierarchy of higher-order Floquet topological phases in three dimensions, Phys. Rev. B 103, 115308 (2021).
[66] K. Yang, S. Xu, L. Zhou, Z. Zhao, T. Xie, Z. Ding, W. Ma, J. Gong, F. Shi, and J. Du, Observation of Floquet topological phases with large Chern numbers, Phys. Rev. B 106, 184106 (2022).
[67] M.-J. Gao and J.-H. An, Engineering rich two-dimensional higher-order topological phases by flux and periodic driving, Phys. Rev. B 108, L241402 (2023).
[68] B. Chen, S. Li, X. Hou, F. Ge, F. Zhou, P. Qian, F. Mei, S. Jia, N. Xu, and H. Shen, Digital quantum simulation of Floquet topological phases with a solid-state quantum simulator, Photonics Res. 9, 81 (2020).
[69] T. Kitagawa, M. A. Broome, A. Fedrizzi, M. S. Rudner, E. Berg, I. Kassal, A. Aspuru-Guzik, E. Demler, and A. G. White, Observation of topologically protected bound states in photonic quantum walks, Nat. Commun. 3, 882 (2012).
[70] L. J. Maczewsky, J. M. Zeuner, S. Nolte, and A. Szameit, Observation of photonic anomalous Floquet topological insulators, Nat. Commun. 8, 13756 (2017).
[71] S. Mukherjee, A. Spracklen, M. Valiente, E. Andersson, P. Öhberg, N. Goldman, and R. R. Thomson, Experimental observation of anomalous topological edge modes in a slowly driven photonic lattice, Nat. Commun. 8, 13918 (2017).
[72] T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, et al., Topological photonics, Rev. Mod. Phys. 91, 015006 (2019).
[73] L. J. Maczewsky, B. Höckendorf, M. Kremer, T. Biesenthal, M. Heinrich, A. Alvermann, H. Fehske, and A. Szameit, Fermionic time-reversal symmetry in a photonic topological insulator, Nat. Mater. 19, 855 (2020).
[74] R. Fleury, A. B. Khanikaev, and A. Alù, Floquet topological insulators for sound, Nat. Commun. 7, 11744 (2016).
[75] Y.-G. Peng, C.-Z. Qin, D.-G. Zhao, Y.-X. Shen, X.-Y. Xu, M. Bao, H. Jia, and X.-F. Zhu, Experimental demonstration of anomalous Floquet topological insulator for sound, Nat. Commun. 7, 13368 (2016).
[76] S. S. Dabiri and H. Cheraghchi, Electric circuit simulation of Floquet topological insulators, arXiv:2208.08196 [cond-mat.mes-hall] (2023).
[77] G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Experimental realization of the topological Haldane model with ultracold fermions, Nature 515, 237 (2014).
[78] L. Asteria, D. T. Tran, T. Ozawa, M. Tarnowski, B. S. Rem, N. Fläschner, K. Sengstock, N. Goldman, and C. Weitenberg, Measuring quantized circular dichroism in ultracold topological matter, Nat. Phys. 15, 449 (2019).
[79] K. Wintersperger, C. Braun, F. N. Ünal, A. Eckardt, M. D. Liberto, N. Goldman, I. Bloch, and M. Aidelsburger, Realization of an anomalous Floquet topological system with ultracold atoms, Nat. Phys. 16, 1058 (2020).
[80] P. Titum, N. H. Lindner, and G. Refael, Disorder-induced transitions in resonantly driven Floquet topological insulators, Phys. Rev. B 96, 054207 (2017).
[81] D. Zeuch, F. Hassler, J. J. Slim, and D. P. DiVincenzo, Exact rotating wave approximation, Ann. Phys. 423, 168327 (2020).
http://arxiv.org/abs/2312.16013v1
{ "authors": [ "Zheng-Rong Liu", "Rui Chen", "Bin Zhou" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20231226114603", "title": "Four-dimensional Floquet topological insulator with an emergent second Chern number" }
http://arxiv.org/abs/2312.16009v2
{ "authors": [ "Md Sohel Mondal", "Dov Fields", "Vladimir S. Malinovsky", "Siddhartha Santra" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226113458", "title": "Entanglement topography of large-scale quantum networks" }
Large Language Models as Traffic Signal Control Agents: Capacity and Opportunity
Siqi Lai, Zhao Xu, Weijia Zhang, Hao Liu, Hui Xiong
================================================================================

Traffic signal control is crucial for optimizing the efficiency of a road network by regulating traffic light phases. Existing research predominantly focuses on heuristic or reinforcement learning (RL)-based methods, which often lack transferability across diverse traffic scenarios and suffer from poor interpretability. This paper introduces a novel approach, LLMLight[Our project is available at <https://github.com/usail-hkust/LLMTSCS>.], utilizing large language models (LLMs) for traffic signal control tasks. By leveraging LLMs' impressive generalization and zero-shot reasoning capabilities, LLMLight executes a human-like decision-making process for efficient traffic management. Specifically, the framework begins by composing task descriptions, current traffic conditions, and prior knowledge into a prompt. Subsequently, we utilize the LLM's chain-of-thought (CoT) reasoning ability to identify the next traffic signal phase, ensuring optimal efficiency in the road network. LLMLight achieves state-of-the-art (SOTA) or competitive results across five real-world traffic datasets. Notably, LLMLight showcases remarkable generalization, interpretability, and zero-shot reasoning abilities, even without any training on transportation management tasks.

§ INTRODUCTION
Traffic congestion has emerged as a critical issue impacting human society and the environment, and it continues to escalate in severity as urban migration swells the population of city areas. In this context, the optimization of traffic signal control (TSC) has become a significant research topic in the field of intelligent transportation management <cit.>, given its substantial influence on a city's overall traffic efficiency. Mitigating traffic signal issues holds the promise of delivering substantial economic, environmental, and societal advantages. Nevertheless, this task is an intricate challenge, complicated by the dynamic nature of traffic and road networks.

Past research in TSC has primarily fallen into two distinct categories: transportation methods <cit.> and reinforcement learning (RL)-based approaches <cit.>. Transportation methods primarily center around crafting efficient heuristic algorithms, dynamically adapting traffic signal operations based on lane-level traffic conditions. However, these methods heavily rely on manual design, demanding substantial human effort. The emergence of deep neural networks (DNNs) <cit.> led to the introduction of RL-based techniques to address this challenge <cit.>. These approaches have exhibited remarkable performance across various traffic scenarios. Nevertheless, RL-based models also present several drawbacks. Primarily, they may struggle with limited generalization, particularly in highly uncommon scenarios (e.g., extreme high-traffic situations), as their training data may not cover all possible traffic conditions. Additionally, RL-based models lack interpretability, as they are built from complex black-box DNNs, which makes it hard to explain how the model arrives at a particular decision or policy.

Recently, large language models (LLMs) have emerged and showcased remarkable zero-shot and generalization abilities in various domains; they can perform human-like step-by-step reasoning to solve complex tasks.
Notably, AutoGPT <cit.> proposes to break tasks into multiple sub-goals and iterate until the main task is completed. Voyager <cit.> prompts GPT-4 to devise an automatic curriculum to explore the environment and solve progressively harder tasks. In the field of intelligent transportation, GPT-Driver <cit.> argues that previous rule-based and RL-based methods either fail to handle extreme driving scenarios or lack interpretability; the authors therefore instruct GPT-3.5 to tackle motion planning tasks in autonomous driving. PromptGAT <cit.> uses LLMs to generate human knowledge that helps a DNN model understand special cases (e.g., extreme weather) in TSC tasks, bridging the gap between real-world scenarios and simulations. TrafficGPT <cit.> utilizes GPT to analyze and process traffic data, offering human-like decision support in relevant traffic control tasks. However, the effectiveness of LLMs as control agents in TSC tasks remains unexplored.

This paper introduces LLMLight, an innovative framework designed to leverage LLMs as control agents for human-like decision-making in traffic signal control tasks. Specifically, we consider TSC as a partially observable Markov Game in which each agent manages the traffic light located at one intersection. At each signal-switching time step, we first compose task descriptions and traffic conditions into a prompt. Subsequently, we instruct the LLM control agent to perform zero-shot chain-of-thought (CoT) reasoning to generate a control policy that sustains optimal efficiency within the road network. To further enhance the generated policy, we additionally augment the prompt with prior knowledge (e.g., commonsense knowledge) to guide the LLM toward more sophisticated decision-making. The overview of LLMLight is shown in Figure <ref>.

By conducting experiments on different LLM variants under diverse traffic scenarios, we summarize the key findings of this paper as follows: 1) despite not being prompted with any demonstrations, LLMs can provide efficient control policies with detailed explanations in traffic signal control tasks; 2) LLMs showcase remarkable generalization abilities, consistently achieving state-of-the-art or comparable results across two distinct road networks and seven traffic flow datasets spanning diverse traffic volumes, including extreme high-traffic conditions; 3) leveraging prior knowledge in prompting proves to be an effective method for enhancing the quality of policies generated by LLMs, while it also indicates that pre-trained LLMs lack specialized expertise in intelligent transportation management.

Overall, the above findings highlight LLMs' remarkable zero-shot reasoning ability, generalization, and interpretability in the context of traffic signal control, even without dedicated pretraining or finetuning on transportation management tasks. We summarize the major contributions of this paper as follows:
* We design a tailored framework, namely LLMLight, for integrating LLMs into traffic signal control tasks, which consistently achieves SOTA or competitive performance across diverse traffic scenarios.
To our knowledge, this is the first exploration of LLMs as intelligent agents in traffic signal control tasks.
* This paper offers an extensive analysis of LLMs' control policies under diverse traffic conditions and various prompt designs, highlighting both the strengths and limitations of employing LLMs in traffic signal control tasks.
* We identify promising avenues for future research, specifically focusing on the potential of advancing intelligent transportation by further integration of LLMs in this domain.

Figure: The illustration of the intersection, lanes, and signal phases.

§ PRELIMINARIES
In this section, we first introduce the key concepts in traffic signal control tasks.

Road network: The road network is a directed graph consisting of intersection joints I and lanes L. Lanes are categorized into go-through (L_go), left-turn (L_left), and right-turn (L_right) lanes, determined by the movement of vehicles traveling on the lane. Each lane is divided into multiple segments S={s_1,…,s_n}.

Traffic signal phase: A traffic signal phase is defined as p(L_allow), where L_allow is a group of allowed lanes; p(L_allow)=1 and p(L_allow)=0 indicate the green and the red light, respectively. The active traffic signal phase allows vehicles in a specific lane group (without conflicting movements) to pass, while all other phases display the red light, enforcing a stop. There are four signal phases in total: ETWT (go-through from east and west), ELWL (left-turn from east and west), NTST (go-through from north and south), and NLSL (left-turn from north and south). Figure <ref> illustrates the intersection, lanes, and signal phases. Notably, lanes sharing a common color signify non-conflicting movements, and passage is allowed in the corresponding signal phase.

§ LLM FOR TRAFFIC SIGNAL CONTROL
This section first introduces the problem definition of traffic signal control in the LLM-empowered context. We then detail the workflow of our proposed LLMLight: 1) Observation collection: it collects the traffic condition of the intersection from the road network (the number of queued and approaching vehicles); 2) Prompt generation: it composes task-relevant information and prior knowledge into human-readable text, prompting the LLM to find the traffic signal phase that most improves the traffic efficiency of the intersection; and 3) Action execution: it executes the policy generated by the LLM control agent, switching the traffic light to the target phase.

§.§ LLM-based Traffic Signal Control
We define traffic signal control as a partially observable Markov Game. An LLM control agent manages the traffic light of an intersection. Based on the observation space 𝒪_i of the current traffic condition at intersection i, the action space 𝒜, and relevant task descriptions 𝒟, the LLM control agent outputs the policy π_i that aims to maintain the optimal efficiency of the road network:

π_i = LLM(Prompt(𝒪_i, 𝒜, 𝒟)).

§.§ Observation Collection
We collect traffic condition features that can be easily obtained from real-world traffic environments as the observation of the LLM control agent, including:
* Queuing vehicle count: Vehicles with speeds slower than the threshold v_stop are considered queuing vehicles. We count their number in lane l as n^l_q.
* Average wait time of queuing vehicles: We summarize the average wait time of queuing vehicles in lane l as T̅^l = (1/n^l_q)∑^n^l_q_j=1 T^l_j, where T^l_j is the wait time of queuing vehicle j.
* Approaching vehicle count: Vehicles faster than v_stop are considered approaching vehicles. We count the number of approaching vehicles in segment s of lane l as n^l_s.
* Average speed of approaching vehicles: We summarize the average speed of all vehicles approaching intersection i as v̅ = (1/n_i)∑^n_i_j=1 v_j, where v_j is the speed of approaching vehicle j and n_i is the number of vehicles approaching the intersection.

Depending on the optimization target (minimizing queuing vehicle count, wait time, etc.), the agent either uses all the features listed above or a subset of them. These features are composed into human-readable text as the observation of the LLM control agent; a minimal sketch of this extraction step is given below.
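The following sketch illustrates how the four observation features above can be computed. It is an assumption-laden illustration rather than the authors' implementation: the vehicle record schema ("speed", "wait", "dist") and the segment boundaries are hypothetical.

```python
V_STOP = 0.1                # speed threshold (m/s); slower vehicles count as queuing
SEGMENTS = (100, 200, 400)  # hypothetical segment boundaries (m) from the stop line

def lane_observation(vehicles):
    """Summarize one incoming lane from a list of vehicle records.

    Each record is a dict: {"speed": m/s, "wait": seconds waited, "dist": m to stop line}.
    """
    queued = [v for v in vehicles if v["speed"] < V_STOP]
    approaching = [v for v in vehicles if v["speed"] >= V_STOP]

    n_q = len(queued)                                                # n_q^l
    avg_wait = sum(v["wait"] for v in queued) / n_q if n_q else 0.0  # T-bar^l

    # Bucket approaching vehicles into lane segments (n_s^l); vehicles beyond
    # the last boundary are ignored in this simplified sketch.
    seg_counts = [0] * len(SEGMENTS)
    for v in approaching:
        for i, bound in enumerate(SEGMENTS):
            if v["dist"] <= bound:
                seg_counts[i] += 1
                break

    return {"n_q": n_q, "avg_wait": avg_wait, "seg_counts": seg_counts,
            "speeds": [v["speed"] for v in approaching]}

def intersection_observation(lanes):
    """lanes: dict lane_id -> list of vehicle records.

    Returns per-lane features plus the mean speed v-bar over all approaching vehicles.
    """
    obs = {lane: lane_observation(vs) for lane, vs in lanes.items()}
    speeds = [s for o in obs.values() for s in o["speeds"]]
    avg_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return obs, avg_speed
```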
§.§ Prompt Generation
Besides the observation, we further prompt the LLM control agent with detailed traffic scenario descriptions, task descriptions, and the action space, so that it can understand the task and make reasonable decisions. Additionally, we inject prior knowledge to help the LLM make better decisions.

In this work, we study the performance of four prompt templates. The basic template refrains from providing prior knowledge to the LLM, enabling an assessment of its capacity to generate policies independently. Building upon this foundational template, we further propose three types of prompts with prior knowledge that provide additional guidance for the reasoning of LLMs.
* Basic Template: This template comprises the scenario, task, and action space descriptions together with the observations, providing the basic information required in traffic signal control tasks.
* With Commonsense Knowledge: In addition to the basic template, we provide the LLM with a commonsense-based hint to test its capacity for integrating general knowledge into the traffic control process. Specifically, the instruction guides the LLM to prioritize queuing vehicles and those approaching within close segments.
* With Traffic Flow Coordination Hint: This template not only tasks the LLM with optimizing traffic conditions in incoming lanes but also challenges it to prevent potential congestion in outgoing lanes, presenting a more complex policy task that requires commonsense reasoning.
* With Wait Time Forecast Guidance: We present a structured thinking approach that prompts the LLM to forecast future cumulative queuing times. This involves a step-by-step "what-if" analysis of potential delays if vehicles in a particular lane are not allowed to pass the intersection in the subsequent phase. The guidance encourages the LLM to alleviate potential congestion in lanes that might experience heavy traffic in the future.

An illustrative composition of such a prompt is sketched below.
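Building on the feature extraction above, a prompt with the commonsense hint could be assembled as follows. The wording of the template strings is an assumption; the paper's exact templates are given in its Table and Appendix, which are not reproduced here.

```python
PHASES = {"ETWT": ("ET", "WT"), "ELWL": ("EL", "WL"),
          "NTST": ("NT", "ST"), "NLSL": ("NL", "SL")}  # phase -> allowed lanes

COMMONSENSE_HINT = ("Hint: give priority to lanes with long queues and to "
                    "vehicles approaching within the closest segments.")

def compose_prompt(obs, hint=COMMONSENSE_HINT):
    """Render per-lane features (from intersection_observation) as readable text."""
    lines = ["You control the traffic light of a four-way intersection.",
             "Current incoming-lane conditions:"]
    for lane, o in sorted(obs.items()):
        lines.append(f"- {lane}: {o['n_q']} queuing vehicles "
                     f"(average wait {o['avg_wait']:.0f} s), "
                     f"approaching vehicles per segment {o['seg_counts']}")
    lines.append("Signal phases (choose exactly one): " + ", ".join(PHASES) + ".")
    if hint:
        lines.append(hint)
    lines.append("Let's think step by step, then state the chosen phase.")
    return "\n".join(lines)
```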
§.§ Action Execution
To instruct LLMs in managing the traffic light, we prompt them to either directly output the selected action or to output a control policy written as a Python function:
* Output Action: The LLM directly answers with the selected action after outlining its rationale for identifying the optimal traffic signal.
* Output Policy Function: The LLM generates a Python function implementing a control policy, delegating the mathematical calculation to code and letting the LLM focus on logical policy generation.

Ultimately, we instruct the LLM control agent with the generated prompt, without demonstrations, to perform zero-shot reasoning and thereby identify the optimal traffic signal for the next phase. The framework of our designed prompt template is shown in Table <ref>; you can refer to Appendix <ref> for the detailed designs. A sketch of the kind of policy function the agent can emit is given below.
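The snippet below shows one policy function of the kind the agent can produce under the commonsense template: queued vehicles count fully, approaching vehicles are discounted by segment distance, and the phase whose allowed lanes score highest is selected. The scoring weights and lane naming are illustrative stand-ins, not the model's verbatim output.

```python
PHASE_LANES = {"ETWT": ("ET", "WT"), "ELWL": ("EL", "WL"),
               "NTST": ("NT", "ST"), "NLSL": ("NL", "SL")}

def choose_phase(obs, seg_weights=(1.0, 0.5, 0.25), approach_weight=0.5):
    """obs: dict lane_id -> features from lane_observation(); returns a phase name."""
    def lane_score(o):
        # nearest segments matter most; the weights here are assumptions
        approach = sum(w * c for w, c in zip(seg_weights, o["seg_counts"]))
        return o["n_q"] + approach_weight * approach

    scores = {phase: sum(lane_score(obs[lane]) for lane in lanes)
              for phase, lanes in PHASE_LANES.items()}
    return max(scores, key=scores.get)
```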
§ EXPERIMENTS
We provide evaluations of LLMLight by answering the following research questions:
* RQ1: How does LLMLight with zero-shot reasoning perform compared with transportation and RL methods?
* RQ2: How is the generalization ability of LLMLight across different cities and traffic volumes?
* RQ3: How is the interpretability of LLMLight in providing explainable traffic signal control decisions?

Initially, we introduce the experimental setup. Then, a comprehensive analysis of the overall performance of the three distinct types of agents is given. Afterward, we examine the generalization and interpretability of different methods. Finally, we analyze their performance under extreme high-traffic scenarios, challenging their generalization ability and robustness in uncommon situations.

§.§ Experimental Settings
§.§.§ Datasets
We use five real-world datasets <cit.> to deliver a thorough comparison among different traffic signal control methods. Across different time periods, these datasets include three traffic flow records in the Dongfeng sub-district, Jinan, China, and two in the Gudang sub-district, Hangzhou, China. Additionally, we construct two synthetic traffic flow datasets on the Jinan and Hangzhou road networks, featuring significantly higher arrival rates than the original datasets. You can refer to Appendix <ref> for more details on these datasets.

§.§.§ Environment Settings
We conduct experiments on CityFlow <cit.>, an open-source simulator, to evaluate the efficiency of each compared method. Given the origin and destination, the simulator controls each vehicle to follow the shortest path to its destination. CityFlow provides extensive APIs to access traffic state features and to execute the actions chosen by traffic agents. The green signal phase duration is set to thirty seconds. Following existing studies <cit.>, a three-second yellow signal and a two-second all-red time follow each green signal to prepare the transition. The real-world traffic flow simulations span one hour, and the synthetic ones last ten minutes. We consider vehicles slower than 0.1 m/s to be queuing vehicles. Right-turn movements are always allowed in the road network.

§.§ Metrics
Following previous studies <cit.>, we leverage average travel time (ATT) to evaluate the performance of the different policies made by traffic signal control agents. This metric quantifies the duration vehicles travel from their origins to their respective destinations. Concurrently, we also analyze the average queue length (AQL) and the average wait time (AWT) of vehicles, providing a comprehensive evaluation of how each agent optimizes its performance for minimizing the ATT.

§.§ Compared Models
For transportation methods, we adopt Random, FixedTime, and Maxpressure as baselines. For RL methods, we compare MPLight, AttendLight, PressLight, CoLight, Efficient-CoLight, and Advanced-CoLight with our proposed method. We utilize GPT-4 as the traffic signal control agent. You can refer to Appendix <ref> for more details about the abovementioned models. We further report experiment results conducted on Llama-2 and ChatGPT-3.5 in Appendix <ref>.

§.§ Comparison Between the Classic Methods and LLMLight (RQ1)
We first implement the prompt templates with commonsense knowledge and wait time forecast guidance to directly output actions on the GPT-4-based control agent. The experiment results are shown in Table <ref>. They reveal that LLMLight consistently achieves state-of-the-art (SOTA) or comparable performance against all baselines in average travel time (ATT). Although Advanced-CoLight, the SOTA RL method, achieves the best ATT across most datasets, the CoLight series requires communication among neighboring intersections. On the contrary, LLMLight achieves competitive results by leveraging observation features solely from the target intersection, underscoring LLMs' remarkable zero-shot ability in traffic signal control tasks. Furthermore, although RL-based models excel in ATT, there is a trade-off with a relatively prolonged average wait time (AWT). This implies that, although the overall travel time is reduced, certain drivers may experience extended waiting periods at intersections. Minimizing waiting time is crucial in real-world scenarios, as prolonged waits can induce driver anxiety. In contrast, our proposed LLMLight with wait time forecast guidance not only ensures a relatively short overall travel time but also achieves the lowest queuing wait time across most datasets.

Additionally, we present the results of experiments in which LLMLight generates the control policy function. In contrast to directly making decisions, LLMLight demonstrates enhanced performance when using Python to implement a heuristic algorithm across more datasets. Formulating the policy as code separates out the mathematical calculations and lets the LLM focus on logical reasoning, thereby attaining superior performance. Employing this methodology, future advancements may involve LLM control agents that utilize external APIs (calculators, weather, and traffic sensors), as proposed by <cit.>, paving the way for fully automated intelligent traffic signal control. The exploration of this avenue is reserved for future research. The evaluation rollout that produces the metrics above is sketched below.
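For concreteness, a rollout producing ATT and AQL numbers of this kind can be sketched as follows. This assumes CityFlow's documented Python API (Engine, next_step, set_tl_phase, get_vehicles, get_lane_waiting_vehicle_count); the single-intersection loop, the hypothetical intersection id, and the folding of the yellow and all-red intervals into one fixed cycle are simplifications rather than the paper's actual evaluation code.

```python
import cityflow

CYCLE = 30 + 3 + 2  # green + yellow + all-red seconds, per the settings above

def rollout(config_path, agent, horizon=3600, inter_id="intersection_1_1"):
    eng = cityflow.Engine(config_path, thread_num=1)
    enter, leave, seen = {}, {}, set()
    queue_samples, t = [], 0
    while t < horizon:
        eng.set_tl_phase(inter_id, agent(eng))  # agent maps state to a phase index
        for _ in range(CYCLE):
            eng.next_step()
            t += 1
            now = set(eng.get_vehicles(include_waiting=True))
            for v in now - seen:        # vehicle entered the network
                enter[v] = t
            for v in seen - now:        # vehicle reached its destination
                leave[v] = t
            seen = now
            queue_samples.append(sum(eng.get_lane_waiting_vehicle_count().values()))
    # vehicles still en route at the horizon are excluded from ATT in this sketch
    att = sum(leave[v] - enter[v] for v in leave) / max(len(leave), 1)
    aql = sum(queue_samples) / len(queue_samples)
    return att, aql
```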
§.§ Generalization Comparison (RQ2)
Transferability: We first study the transferability of the different methods by deploying a pre-trained model on a distinct road network. The experiment results are shown in Figure <ref>. Models labeled without "-T" are trained and tested on the same dataset; otherwise, they are pre-trained on a distinct road network (we evaluate transferability in Hangzhou using models pre-trained in Jinan). The LLM control agent is prompted to output control actions directly. We observe a notable decline in the performance of the RL-based methods after the transfer, especially for MPLight and CoLight. While Efficient- and Advanced-CoLight maintain relatively stable performance by leveraging more representative observation features, their approaches necessitate additional domain knowledge and human effort in feature engineering. On the contrary, LLMLight stands out by maintaining the most stable performance across all datasets, even with simple commonsense reasoning. These results underscore LLMLight's impressive transferability across diverse traffic contexts and its robustness for practical implementations.

Extreme High-traffic Scenarios: Next, we delve into an uncommon scenario in which considerable traffic flow continuously arrives at the intersection, a situation that rarely appears during training. Figure <ref> presents the distribution of lane queue lengths across the five real-world datasets, where traffic signals are controlled by Advanced-CoLight. Notably, they demonstrate relatively smooth traffic conditions, with the accumulation of queuing vehicles showing a long-tail distribution. To assess the efficacy of the different methods in extreme high-traffic scenarios, we generate two synthetic traffic flow datasets on the Jinan and Hangzhou road networks, featuring approximately four times more vehicles arriving within a 300-second interval than the original flow datasets. Table <ref> shows the performance of both the classic methods (RL models trained on Jinan 1 and Hangzhou 1, respectively) and LLMLight (prompted with wait time forecast guidance to directly output actions). Our experiments reveal performance degradation in the RL models, as they exhibit similar or worse performance than Maxpressure. This suggests that these pre-trained RL models struggle to handle extreme high-traffic conditions, particularly when confronted with a significantly larger volume of vehicles than during their training stage. In contrast, LLMLight consistently demonstrates superior performance, underscoring its robustness and practicality under much heavier traffic conditions.

§.§ Interpretability of LLMLight (RQ3)
To deliver a detailed analysis of the interpretability of LLMLight, we conducted a simulation case on the Jinan dataset. Figure <ref> presents the traffic condition of the intersection and GPT-4's rationale behind its decisions. This scenario shows heavy congestion in the incoming lanes of the north and south sections, with a significant number of queuing and approaching vehicles. Compared to RL-based methods, LLMLight can not only output efficient control policies but also provide detailed explanations of the corresponding decisions. We make the following observations by analyzing GPT-4's rationales under the different prompt templates.

The agent with the basic prompt (without prior knowledge) accurately identifies the congestion on the left-turn lanes of the north and south sections. However, its rationale is based solely on the total number of vehicles in the lanes, treating queuing and approaching vehicles equally. This approach overlooks the distinction between vehicles close to the intersection and those in more distant segments, potentially prolonging the wait times for queued vehicles and those expected to arrive soon. Conversely, the agent equipped with commonsense knowledge is instructed to prioritize queued and closely approaching vehicles, effectively addressing the most urgent congestion. However, this also indicates that existing LLMs, even GPT-4, lack domain-specific expertise in traffic control tasks.
The agent with the traffic flow coordination hint considers both the incoming and the outgoing lanes, thereby coordinating the traffic flow distribution over the road network. Moreover, previous studies have often overlooked the impact of vehicles' queuing wait time. The agent with wait time forecast guidance addresses this aspect by anticipating the upcoming queuing time, answering the question "If vehicles in a specific lane cannot pass the intersection in the next phase, how long will they keep waiting?" and then undertaking step-by-step reasoning. It identifies the cumulative queuing time for both early-queued vehicles and those estimated to arrive soon, and ultimately selects the optimal signal phase to relieve the vehicles with the longest potential wait time.

§ RELATED WORK
Traffic Signal Control: Traffic signal control (TSC) methods can be broadly categorized into transportation <cit.> and reinforcement learning (RL)-based approaches <cit.>. Initially, FixedTime <cit.> employed a fixed cycle length and phase split to regulate traffic signals in a predetermined pattern. Maxpressure <cit.> greedily selects signal phases to maximize road network throughput. PressLight <cit.> inherits the concept of Maxpressure and combines it with a deep RL algorithm to optimize the throughput of the intersection. Additionally, some studies propose designing more representative state features to describe traffic conditions. Advanced-MP <cit.> studies vehicle movement-level state features, designing its features based on traffic demand.

Intelligent Transportation: Applications of intelligent transportation mainly focus on forecasting and control. Within traffic flow forecasting <cit.>, <cit.> advocates using graph neural networks (GNNs) to facilitate the exchange of information among proximate regions. Regarding control tasks, NavTL <cit.> presents an RL-based framework designed to concurrently manage traffic signal activation and autonomous vehicle rerouting in hybrid traffic scenarios. <cit.> developed a double-deep Q-network (DDQN)-based model with fine-grained state features for efficient on-ramp traffic flow control. Moreover, to facilitate sustainable transportation, <cit.> develop dynamic charging pricing for electric vehicles (EVs) via a multi-agent graph convolutional RL model, and <cit.> propose a meta-learning-based framework to provide personalized vehicle energy consumption predictions for EV drivers.

Large Language Models for Decision Making: Large language models (LLMs) have recently emerged and exhibited a remarkable capacity for zero-shot reasoning and generalization. Exploiting this proficiency, numerous studies <cit.> increasingly employ LLMs as central planners in diverse control tasks. For instance, GLAM <cit.> trains the policy of LLM control agents through interactive engagement with the environment. In intelligent transportation, DriveGPT4 <cit.> presents an end-to-end autonomous driving system capable of interpreting vehicle actions and addressing diverse questions posed by human users. More studies on utilizing LLMs in intelligent transportation can be found in <cit.>.

§ CONCLUSION AND OPEN PROBLEMS
Conclusion: In this study, we introduce LLMLight, a novel framework that leverages large language models (LLMs) as traffic signal control agents. By instructing LLMs to conduct a human-like, step-by-step analysis of current traffic conditions, the intelligent control agents can judiciously choose the optimal signal phases, thereby enhancing the overall efficiency of the road network.
Through comprehensive experiments conducted on five real-world traffic datasets, we showcase the superior effectiveness of our proposed framework compared to previous studies.

Open Problems: Our findings also reveal several promising avenues for future research:
1) LLM-empowered RL: Our experiments reveal that LLMs can tackle traffic signal control tasks by efficiently leveraging useful information at the natural-language level. Further exploration of this aspect could lead to effectively integrating LLMs as supportive tools in RL-based traffic management tasks, such as feature engineering and reward function construction.
2) Multi-intersection traffic signal control: It is important to note that our study does not consider multi-agent interactions. Further advancements in this direction involve exploring cooperation in multi-intersection traffic signal control scenarios, including agent communication between adjacent intersections and behavior prediction of other agents. This integration could lead to more globally efficient traffic flow coordination.
3) LLM agent-based automatic traffic signal control: Our experiments reveal a limitation of existing LLMs, namely that they lack specialized knowledge of traffic signal control tasks. This insight underscores the potential for future research to develop intelligent-transportation-oriented LLMs with domain-specific expertise in traffic management. Furthermore, the LLM agent can be equipped with various external APIs (calculator, weather, traffic sensors, etc.). This integration enables the agent to autonomously perceive, analyze, and control the traffic dynamics, paving the way for a fully human-free and intelligent traffic management system.

§ APPENDIX
§.§ Datasets
We use five real-world and two synthetic traffic flow datasets in our experiments. Statistics for these datasets are detailed in Table <ref>, while visualizations of the road networks are shown in Figure <ref>.
* Jinan: These datasets are collected from the Dongfeng sub-district in Jinan, China. The area features 12 intersections. Each intersection connects four distinct road sections, namely East, West, North, and South. The East and West sections span 400 meters, while the South and North sections cover 800 meters. Cameras record the trajectory of each vehicle as it passes the intersection. Across different time periods, there are three traffic flow datasets recorded in this area.
* Hangzhou: These datasets are collected in the Gudang sub-district, Hangzhou, China. There are 16 intersections in total in this area. The East and West sections span 800 meters, while the South and North sections extend 600 meters. Across different time periods, this dataset comprises two traffic flow records from the area.
* Extreme High-traffic Scenarios: To simulate high-traffic scenarios, our experiments encompass two synthetic traffic flow datasets on the Jinan and Hangzhou road networks, featuring 4000 vehicles arriving within ten minutes.

§.§ Compared Models
Transportation Methods:
* Random: A baseline policy that randomly switches signal phases with a fixed duration.
* FixedTime <cit.>: A policy with a pre-defined fixed cycle length and phase split, widely implemented in most traffic scenarios.
* Maxpressure <cit.>: The state-of-the-art heuristic traffic signal control method in the transportation field, which greedily relieves vehicles on the lanes with the highest pressure (a variable derived from the difference between upstream and downstream queue lengths); a compact rendering is sketched after the Model Settings below.

RL-based Methods:
* MPLight <cit.>: A reinforcement learning (RL)-based method that utilizes pressure as both observation and reward. It employs FRAP, a network structure specifically designed to manage unbalanced traffic flow, as the foundational model.
* AttendLight <cit.>: Leveraging an attention mechanism, this model constructs observation features and predicts the phase transition probability.
* PressLight <cit.>: This model combines the concept of Maxpressure with deep RL, optimizing the pressure of the intersection.
* CoLight <cit.>: This model employs a graph attention network (GAT) to enable communication among neighboring intersections, enhancing coordination and decision-making.
* Efficient-CoLight <cit.>: Building upon the CoLight model, it integrates efficient pressure as an observation, optimizing its decision-making capabilities.
* Advanced-CoLight <cit.>: Building upon the CoLight model, it incorporates advanced traffic state features, achieving SOTA performance in traffic signal control tasks.

Large Language Models:
* Llama 2 <cit.>: This is a series of pre-trained and fine-tuned large language models (LLMs) developed by the AI group at Meta, ranging in scale from 7 billion to 70 billion parameters. They are adaptable to various natural language generation tasks. We use Llama-2-13B and Llama-2-70B in our experiments.
* ChatGPT-3.5 <cit.>: Developed by OpenAI, ChatGPT-3.5 is a chatbot explicitly designed for engaging in conversations, answering queries, and assisting with diverse tasks. We adopt the corresponding API from OpenAI.
* GPT-4 <cit.>: As the newest advancement in LLMs by OpenAI, this SOTA foundation model is more reliable and creative than GPT-3.5. It excels at handling significantly more nuanced instructions and tasks. We utilize the corresponding API in our experiments.

§.§ Model Settings
All RL methods are trained with uniform hyperparameters, encompassing a learning rate of 1 × 10^-3, a replay buffer size of 12000, a sample size of 3000, and a hidden size of 20. The sampling parameter is standardized at 1.0 for all LLMs. To stabilize the LLMs' output, a temperature of zero is used for ChatGPT-3.5 and GPT-4, while the Llama-2 models are set to 0.1.
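As referenced in the Maxpressure entry above, the rule admits a compact rendering: the pressure of a movement is the queue-length difference between its upstream (incoming) and downstream (outgoing) lanes, and the phase with the greatest total pressure is served. The movement-to-lane mapping and the lane ids here are illustrative assumptions, not the road networks' actual identifiers.

```python
MOVEMENTS = {"ETWT": [("E_T", "W_out"), ("W_T", "E_out")],
             "ELWL": [("E_L", "N_out"), ("W_L", "S_out")],
             "NTST": [("N_T", "S_out"), ("S_T", "N_out")],
             "NLSL": [("N_L", "E_out"), ("S_L", "W_out")]}

def max_pressure_phase(queue):
    """queue: dict lane_id -> number of queuing vehicles; returns a phase name."""
    def pressure(moves):
        # upstream minus downstream queue length, summed over the phase's movements
        return sum(queue.get(up, 0) - queue.get(down, 0) for up, down in moves)
    return max(MOVEMENTS, key=lambda p: pressure(MOVEMENTS[p]))
```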
Although they understand that the optimal traffic signal needs to address the most congested lanes of the intersection, Llama-2 chooses to prioritize the approaching vehicles, which ignores the instruction to pay more attention to the queuing vehicles. On the other hand, ChatGPT-3.5 manifests a substantial logical flaw by asserting that the signal phase with the lowest vehicle count is most effective in enhancing traffic conditions. This observation may suggest a pronounced hallucination issue <cit.> of ChatGPT-3.5 in the context of traffic signal control. Furthermore, the other LLMs, except GPT-4, perform better when employing the prompt template with commonsense knowledge rather than wait time forecast guidance. Conversely, GPT-4 displays the opposite trend. This is because the reasoning chain in the latter template is more intricate: it requires forecasting future outcomes rather than straightforward commonsense reasoning, and the inference capabilities of the other LLMs are limited compared to GPT-4. Notably, GPT-4 achieves optimal performance across most datasets, especially on the more complex road network, Hangzhou, when aided with wait time forecast guidance. This highlights the model's versatility in adapting to diverse and complex scenarios by leveraging its advanced inference capabilities. §.§ Prompt Templates and Reasoning of the GPT-4-based Control Agent In this subsection, we present the prompt templates and the corresponding reasoning of the GPT-4-based control agent below. Table <ref> summarizes the statistics of the traffic conditions.
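To make the control loop concrete, the following minimal sketch shows one way a traffic observation could be serialized into a prompt and sent to a GPT-4-based agent with the settings listed above (temperature zero); the lane layout, prompt wording, and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of one LLM control step; names and prompt text are
# illustrative only. Uses the OpenAI Python client (v1 interface).
from openai import OpenAI

client = OpenAI()

def llm_choose_phase(phase_states: dict) -> str:
    """phase_states: e.g., {'NS_left': {'queued': 12, 'approaching': 5}, ...}"""
    observation = "\n".join(
        f"{phase}: {s['queued']} queuing, {s['approaching']} approaching"
        for phase, s in phase_states.items())
    prompt = (
        "You are a traffic signal control agent at a four-way intersection. "
        "Pay more attention to queuing vehicles than to approaching vehicles.\n"
        + observation +
        "\nWhich signal phase should be activated next? Answer with the phase name.")
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # zero temperature to stabilize the output, per the settings
        messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content.strip()
```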
http://arxiv.org/abs/2312.16044v1
{ "authors": [ "Siqi Lai", "Zhao Xu", "Weijia Zhang", "Hao Liu", "Hui Xiong" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20231226131706", "title": "Large Language Models as Traffic Signal Control Agents: Capacity and Opportunity" }
http://arxiv.org/abs/2312.16561v1
{ "authors": [ "Yoon-Seok Choun", "Ki-Seok Kim", "Sang-Jin Sin" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20231227125736", "title": "Scaling Dimension of the Operators from the Black hole inside" }
CdTe and HgTe doped with V, Cr, and Mn – prospects for the quantum anomalous Hall effect

Giuseppe Cuono, Carmine Autieri ([email protected]), and Tomasz Dietl
International Research Centre MagTop, Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, PL-02668 Warsaw, Poland

January 14, 2024
==========================================================================================

Using first-principles calculations, we examine properties of (Cd,V)Te, (Cd,Cr)Te, (Hg,V)Te, and (Hg,Cr)Te relevant to the quantum anomalous Hall effect (QAHE), such as the position of V- and Cr-derived energy levels and the exchange interactions between magnetic ions. We consider CdTe and HgTe containing 12.5% of cation-substitutional V or Cr ions, in comparison to the well-known cases of (Cd,Mn)Te and (Hg,Mn)Te, and examine their suitability for the fabrication of ferromagnetic barriers or ferromagnetic topological quantum wells, respectively. To account for the strong correlation of transition-metal d electrons, we employ hybrid functionals with different mixing parameters a_HSE, focusing on a_HSE=0.32, which better reproduces the experimental band gaps in HgTe and Hg_0.875Mn_0.125Te. We find that Cr, like Mn, acts as an isoelectronic dopant, but V can be an in-gap donor in CdTe and a resonant donor in HgTe, similar to the case of Fe in HgSe. From the magnetic point of view, Cr doping results in a ferromagnetic phase within the generalized gradient approximation (GGA), but the interactions become antiferromagnetic within hybrid functionals. However, (Hg,V)Te is a ferromagnet within both exchange-correlation functionals, in stark contrast to (Hg,Mn)Te, for which robust antiferromagnetic coupling is found theoretically and experimentally. Furthermore, we establish that the Jahn-Teller effect is relevant only in the case of Cr doping. Considering the lower defect concentrations in HgTe-based quantum wells compared to (Bi,Sb)_2Te_3 layers, our results imply that HgTe quantum wells or (Cd,Hg)Te barriers containing either V or Cr show advantages over (Bi,Sb,Cr,V)_2Te_3-based QAHE systems, but whether (i) ferromagnetic coupling will dominate in the Cr case and (ii) V will not introduce too many electrons into the quantum well is to be checked experimentally.

PACS: 71.15.-m, 71.15.Mb, 75.50.Cc, 74.40.Kb, 74.62.Fj

§ INTRODUCTION The theoretical prediction <cit.> and the experimental discovery of the quantum anomalous Hall effect (QAHE) in the dilute ferromagnetic semiconductor (Bi,Sb,Cr)_2Te_3 <cit.> and other systems <cit.> have triggered research on the prospects of dissipationless and spin-polarized carrier channels for energy-efficient and decoherence-free electronic and spintronic classical and quantum devices <cit.>. Simultaneously, the application potential of the QAHE for resistance <cit.> and current <cit.> standards operating in the absence of an external magnetic field has been demonstrated. It has, however, become clear that relatively large native defect concentrations in (Bi,Sb)_2Te_3 and related systems, typically above 10^19 cm^-3, and the associated in-gap impurity-band charge transport, limit the standards' operation to below 100 mK and 1 μA <cit.>.
Particularly relevant for the present work is the observation of the QAHE in (Bi,Sb)_2Te_3 layers sandwiched between ferromagnetic (Zn,Cr)Te barriers <cit.>. It is, therefore, interesting to consider the metrology prospects of HgTe and related systems, in which the native defect concentration is at the 10^16 cm^-3 level <cit.>. It was found that at the quantum well (QW) thickness corresponding to the topological phase transition, the quantum Hall (QH) plateau R_xy = -h/e^2 appears in weak magnetic fields and persists over a broad range of magnetic fields <cit.>, an observation relevant to QHE metrology <cit.>. The effect is particularly spectacular in the case of (Cd,Hg)Te/(Hg,Mn)Te QWs, where the broad plateau begins at 50 mT <cit.>. Those surprising observations were explained by an energetic overlap of the acceptor impurity band with the hole portion of the QW Dirac cone <cit.>. The Coulomb gap, charge ordering, and the formation of bound magnetic polarons (in the Mn-doped samples) are the essential ingredients of the model <cit.>. In the case of the studied samples <cit.>, the QHE dominates over a possible QAHE <cit.>, as the QAHE would also require the presence of a magnetic field – ferromagnetic coupling between Mn spins in II-VI compounds appears only if the hole density exceeds 10^20 cm^-3 <cit.>, whereas intrinsic Mn-Mn interactions are antiferromagnetic <cit.>. In the present and companion paper <cit.>, we address the question of which cation-substitutional transition-metal (TM) impurities in barriers or QWs could lead to the QAHE in HgTe-based systems. The conditions to be considered experimentally and theoretically include: * Sufficiently high – in the couple-of-percent range – solubility limits of particular dopants at cation-substitutional positions and without aggregation. Importantly, the equilibrium limits can often be overcome by appropriate growth protocols <cit.> and, for instance, the incorporation of Cr into ZnTe by low-temperature molecular beam epitaxy appears successful <cit.>. * Ferromagnetic coupling between localized spins. We anticipate that Mn and Co dopants can be excluded, as antiferromagnetic superexchange dominates for these ions in II-VI compounds, also in the case of Hg_1-xMn_xTe <cit.>. Similarly, Fe impurities do not appear appropriate, as they exhibit Van Vleck's paramagnetism in HgTe <cit.>. Guided by the early theoretical prediction of ferromagnetic superexchange in Cr-doped II-VI compounds <cit.>, as well as by the observation of ferromagnetism at low temperatures in Zn_1-xCr_xTe <cit.>, we consider the Cr and V cases. * A sufficiently high Curie temperature T_C at x small enough, in the case of QW doping, to prevent a transition to the topologically trivial phase. In the case of Hg_1-xMn_xTe, the inverted band structure disappears at x_c ≃ 7% <cit.>, but QW confinement shifts x_c to lower values <cit.>. However, higher TM doping is possible in the case of barriers. * An isoelectronic character of TM impurities, as charge doping by magnetic ions will hamper shifting of the Fermi level to the QW gap. The internal reference rule <cit.>, together with the known valence band offsets <cit.> and the positions of V levels in CdTe <cit.> and Cr levels in CdTe <cit.> and ZnTe <cit.>, suggests that the relevant TM^2+/3+ donor level resides in the HgTe conduction and valence band, respectively.
This indicates that V in either (Cd,Hg)Te barriers or HgTe QWs might act as an electron dopant, while Cr should act as an isoelectronic impurity. * Formation of a single chiral edge channel in the presence of spin-polarized magnetic ions. According to the pioneering theoretical analysis <cit.>, the QAHE shows up in magnetically doped HgTe quantum wells if p = -α/β ≳ 0.25, where α>0 and β<0 are the s-d and p-d exchange integrals, respectively. As discussed in the companion paper <cit.>, this condition can be relaxed by tilting the magnetization vector away from the growth direction, as in such a magnetization orientation the spin-orbit interaction diminishes the spin splitting of heavy-hole-like subbands and reduces the effective |β| value. * Single-ion magnetic anisotropy. Except for Mn^2+, for which the orbital momentum is zero, magnetic ions such as V^2+ and Cr^2+ exhibit sizable single-ion anisotropy enlarged by the Jahn-Teller distortion. Accordingly, the properties of HgTe QWs with those dopants are expected to be more sensitive to epitaxial strain than those of (Hg,Mn)Te QWs. In this paper, we exploit a range of ab initio methods to assess three aspects of HgTe doped with 12.5% of cation-substitutional V and Cr ions. First, whether these concentrations of V and Cr open the band gap in (Hg,TM)Te. As mentioned, in the case of Hg_1-xMn_xTe, the transition between the topological and non-topological phase occurs for x_c = 0.07, but we find that x_c can be larger for V and Cr. Second, the positions of states brought about by V and Cr ions in CdTe and HgTe, in comparison to the better-known case of Mn doping. Our results indicate that V impurities, in agreement with experimental results, act as mid-gap donors in CdTe but are close to the bottom of the conduction band in HgTe, so their isoelectronic character has to be checked experimentally. In contrast, Cr impurities do not provide carriers, as the relevant donor state is close to the valence band maximum in CdTe and deeper in the valence band of HgTe. However, compared to the Mn case, the Cr donor level is much closer to the Fermi energy, resulting in a relatively large magnitude of the p-d exchange integral |β| compared to the (Hg,Mn)Te value. Third, the magnitudes and signs of the coupling between magnetic ions in those systems. We conclude that ferromagnetism dominates in the case of V ions, whereas there is a competition between ferromagnetic and antiferromagnetic coupling in the case of neighboring Cr spins. § COMPUTATIONAL DETAILS In order to properly describe the band gap and the strong correlation of electrons in the TM d bands, we have used the Heyd-Scuseria-Ernzerhof 2006 (HSE06) hybrid functional <cit.>. The mixing parameter is often set to around 0.20-0.30, and the value a_HSE=0.25 was used for Co-doped CdTe <cit.>. Other II-VI semiconductors have been studied with a_HSE up to 0.36 <cit.>. We have carried out computations for a_HSE = 0.25, 0.32, and 0.5 with spin-orbit coupling (SOC) taken into account. Guided by our results presented in Sec. <ref> and Appendix A, we focus on data for a_HSE=0.32, which reproduces the experimental band gaps of CdTe, HgTe, Cd_0.875Mn_0.125Te, and Hg_0.875Mn_0.125Te with an accuracy of 0.1 eV. We have performed the band structure calculations using a plane-wave energy cutoff of 400 eV and an 8×8×8 k-point grid centered at Γ, with 512 k-points in the Brillouin zone, adopting the experimental lattice parameters, a_0 = 6.46152 Å for HgTe and 6.4815 Å for CdTe <cit.>, and considering a cation-substitutional transition-metal content of 12.5% for all the compounds investigated.
We put the spin polarization along the [001] direction. The band structure of undoped HgTe obtained with a_HSE=0.32 is shown in Fig. <ref>. For comparison, we have also performed computations within the GGA+U approach. The obtained results are most similar to those obtained for a_HSE=0.25. Since V and Cr atoms are distributed periodically in supercells, the V- and Cr-derived states form bands. In reality, for randomly distributed magnetic ions, Anderson-Mott localization will result, at least at low magnetic ion concentrations, in strongly localized band gap levels or resonant states, which have a donor or acceptor character and correspond to d^n/d^n-1 or d^n/d^n+1 states, respectively, where n = 3, 4, 5 for V, Cr, and Mn, respectively. Furthermore, for the band structure determination, we arrange the TM spins ferromagnetically along the [001] direction. Such a configuration of TM spins leads to sp-d exchange splitting of the bands, which is k-dependent due to the interplay of the sp-d exchange with k·p and spin-orbit interactions <cit.>. § BAND STRUCTURES WITHOUT DISTORTIONS AND PROSPECTS FOR QAHE We start by showing the electronic properties of CdTe and HgTe doped with V and Cr, without taking into account the distortions produced by structural relaxation and assuming ferromagnetic ordering of the TM spins. The band structures obtained within the HSE approach with a_HSE=0.32 for doped CdTe and HgTe, with the SOC interaction included and for a ferromagnetic and periodic spin arrangement, are shown in Figs. <ref> and <ref>, respectively, in comparison to Mn-doped CdTe and HgTe. In contrast to the Mn case, where the d states are high in the conduction band and deep in the valence band, the states derived from V and Cr d levels reside close to the bottom of the conduction band and the top of the valence band. In accord with experimental observations for CdTe:V <cit.>, the uppermost occupied V-derived donor band in Cd_0.875V_0.125Te lies in the middle of the band gap and the lowest unoccupied V-derived acceptor band near the bottom of the conduction band. In the case of Cd_0.875Cr_0.125Te, the occupied Cr-derived d donor bands are strongly hybridized with the host valence band, whereas the Cr unoccupied acceptor band is near the bottom of the conduction band, again in accord with experimental data for CdTe:Cr <cit.>. The close proximity of the d states to the top of the valence band results in a large magnitude of the p-d exchange integral β and, therefore, in a larger splitting of the topmost valence subbands near the Γ point of the Brillouin zone in the Cr case compared to Mn doping, an effect clearly visible in Fig. <ref>. In accord with experimental results for Hg_1-xMn_xTe, the band gap at the Γ point in the paramagnetic phase, i.e., the distance between the centers of the two split conduction subbands and the four valence subbands, implies normal band ordering with a gap of 0.2 eV. A similar band gap is implied by our results for Hg_0.875Cr_0.125Te, meaning that QWs with Cr concentrations below 10% can show the QAHE. Furthermore, the k-dependent exchange splitting of bands leads to an overlap of the valence and conduction band states for certain k directions in TM-doped HgTe, which points to outstanding carrier transport properties in and near the topological regime in those dilute magnetic systems. Previous extensive studies of dilute magnetic materials <cit.>, together with the results for Hg_0.875V_0.125Te presented in Fig.
<ref> for a_HSE=0.32, and also in Appendix A for a_HSE=0.25 and 0.5, indicate that four different scenarios relevant to the QAHE are possible: * for certain V concentrations: (i) the band structure remains inverted and (ii) the V donor states are in the valence band and, therefore, V acts as an isoelectronic impurity, similarly to the case of Mn and, presumably, Cr in HgTe; * V forms a resonant state in the conduction band, similarly to the case of Fe in HgSe <cit.> and Sc in CdSe <cit.>; * V acts as an electron dopant but does not give rise to the presence of resonant states; * substantial hybridization between V d orbitals and band states leads to unusual band ordering and minigaps in the vicinity of the Fermi energy. In case (1), (Hg,V)Te quantum wells can show the QAHE, as–according to our results presented in Sec. <ref>–there is a ferromagnetic interaction between V spins in HgTe. The QAHE could be observed as long as the QW is topological, i.e., neither a reduction in the QW thickness nor V doping makes the band ordering topologically trivial. By contrast, within scenarios (2) and (3), it will be difficult to shift the Fermi level from the conduction band to the topological gap for V concentrations sufficiently high to result in a ferromagnetic ground state. Finally, in the fourth case, presumably approximately described by the ab initio results for Hg_0.875V_0.125Te with a ferromagnetic and periodic spin arrangement (Fig. <ref>), hybridization between host and dopant states leads to band reconstruction and hybridization gaps. It is unclear whether the QAHE is possible under these conditions. Finally, we note that V in (Cd,Hg,V)Te/HgTe quantum wells may act as a modulation electron dopant. Another situation occurs in (Hg,Cr)Te QWs or (Cd,Hg,Cr)Te barriers, where the Cr donor-like d states reside in the valence band but closer to the Γ_8 point compared to the Mn case. Accordingly, we expect Cr to be an isoelectronic dopant and to generate a larger, compared to Mn doping, exchange splitting of the Γ_8 subbands. These large splittings are seen in Fig. <ref>, which presents computational results for the ferromagnetic arrangement of TM spins along the [001] direction. These results also substantiate the tight-binding model of the (Hg,Cr)Te band structure employed to evaluate the sign and magnitude of exchange interactions between pairs of Cr impurities in HgTe <cit.>. As the tight-binding and ab initio results presented in Sec. <ref> point to a competition between antiferromagnetic and ferromagnetic interactions, the observation of the QAHE may require the application of a magnetic field in order to polarize the Cr spins. § LOCAL DISTORTIONS PRODUCED BY THE JAHN-TELLER EFFECT Lattice distortions produced by the Jahn-Teller (JT) effect in zinc-blende II-VI semiconductors are not straightforward. The impurities in ZnSe_xS_1-x were shown to produce octahedral rotations and changes of bond angles <cit.>. Both Cr and V doping induce moderate tetrahedral distortions in CdTe and smaller tetrahedral distortions in HgTe. The main effect is a change in the Te-M-Te bond angle. In regular tetrahedra, we have four angles equal to 109.5 degrees. Among these four angles, two remain almost unchanged upon doping, one becomes smaller, and another becomes larger. In Fig. <ref> we show the crystal structure and the angles that change with the JT distortion. When we dope the systems with Cr, the angle α becomes larger and β becomes smaller; the opposite happens when we dope with V.
This situation is different from what happens in other systems <cit.>, where the two angles α and β become larger or smaller together. This allows a breaking of the degeneracy of the energy levels compatible with the cubic symmetry of the system. Two larger angles or two smaller angles will be more efficient in producing the JT splitting, and they will create a locally tetragonal atom arrangement. In Table <ref>, we describe the local distortions in the different systems and the gain in the total energy resulting from the JT distortion. These JT distortions help in stabilizing the localized character of the states derived from TM d levels. § EXCHANGE COUPLINGS BETWEEN TRANSITION METAL SPINS In this section, we present computational results on the exchange couplings between TM spins employing a_HSE=0.32. In Table <ref>, we report the energy differences between antiferromagnetic (AFM) and ferromagnetic (FM) configurations of TM spins in CdTe and HgTe doped with V, Cr, and Mn, without taking the Jahn-Teller distortion into account. As seen, FM couplings prevail in the V case, whereas the interaction is AFM for the Cr- and Mn-doped compounds. Once the Jahn-Teller distortion is taken into account, FM couplings show up in the Cr compounds, too, as shown in Table <ref>. A comparison of the data for a_HSE=0.32 presented here to the results obtained for a_HSE=0.25 and 0.50 summarized in the Appendix demonstrates a strong sensitivity of the coupling strength to the employed theoretical framework. This sensitivity confirms the presence of a delicate balance between FM and AFM contributions to the TM coupling in the case of early transition metals in II-VI compounds <cit.>. § CONCLUSIONS According to the theoretical studies presented in the present and the companion paper <cit.>, the predicted magnitude and often the sign of the magnetic coupling between Cr ions in II-VI compounds depend on the approach employed. This fact reflects a competition between ferromagnetic superexchange and mostly antiferromagnetic Bloembergen-Rowland and two-electron terms <cit.>. Furthermore, according to the results presented in the Appendix in Fig. <ref>, pinning of the Fermi level below the top of the valence band by Cr ions in (Hg,Cr)Te cannot be entirely excluded. Ferromagnetic interactions appear more robust in V-doped compounds, though the donor character of V impurities may show up in II-VI tellurides. The ensemble of our results indicates that the most promising strategy for obtaining a QAHE device from HgTe-based systems is to employ (Cd,Hg,Cr)Te barriers, which should be ferromagnetic already above 4.2 K, and in which the sp-d exchange splitting of bands should lead to a splitting of QW states. That splitting can be further enhanced by a few percent Cr doping of the HgTe QW. However, the observation of the QAHE may require tilting the magnetization away from the in-plane or perpendicular orientation <cit.>. Doping by V is expected to lead to a more robust ferromagnetic ground state, but it could be that the resulting electron doping precludes shifting the Fermi level to the topological gap. In any case, the close energetic proximity of d states and the Fermi level opens the door to new physics in both (Hg,V)Te and (Hg,Cr)Te, not encountered in the case of (Hg,Mn)Te. § ACKNOWLEDGMENTS The work is supported by the Foundation for Polish Science through the International Research Agendas program co-financed by the European Union within the Smart Growth Operational Programme (Grant No. MAB/2017/1).
We acknowledge access to the computing facilities of the Interdisciplinary Center of Modeling at the University of Warsaw, Grants G84-0, GB84-1, and GB84-7. We acknowledge the CINECA award under the ISCRA initiative (IsC85 "TOPMOST" and IsC93 "RATIO" grants) for the availability of high-performance computing resources and support. We acknowledge access to the computing facilities of the Poznan Supercomputing and Networking Center, Grant No. 609. § TRANSITION METAL DOPED COMPOUNDS WITH 25% AND 50% OF THE EXACT EXCHANGE In this Appendix, we present the band structures and the magnetic properties of TM-doped CdTe and HgTe obtained employing the hybrid functional with a_HSE=0.25 and a_HSE=0.50 of the exact exchange. The band structure results displayed in Figs. <ref>, <ref>, and <ref> substantiate the use of a_HSE=0.32 in the main body of the text. In particular, in the case of Hg_0.875Mn_0.125Te, the experimental value of the band gap E_g = E_Γ_6 - E_Γ_8 = 0.2 eV <cit.> is close to the computational data for a_HSE=0.32 (Fig. <ref>), but E_g is too small and too large for a_HSE=0.25 and 0.5, respectively, as shown in Figs. <ref> and <ref>. As we can see in Fig. <ref>, for a_HSE = 0.25 we find a metallic phase in the case of HgTe doped with Cr, because the acceptor d states of the dopants undergo a shift below the top of the valence band. The band structures obtained within the GGA+U approach resemble the results obtained at a_HSE = 0.25 for all compounds and, therefore, we do not display them. In Tables <ref> and <ref> we report the values of the spin-spin couplings for the Cr-doped compounds obtained by using a_HSE = 0.25 and 0.5, respectively. In the latter case, we find that HgTe doped with Cr shows ferromagnetic couplings.
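As an illustration of the structural setup underlying these calculations, the short sketch below builds the 12.5%-doped supercell; it assumes the ASE package and is not the input actually used in this work. A 2×2×2 repetition of the primitive zinc-blende cell contains eight cations, so substituting one Cd by V yields Cd_0.875V_0.125Te.

```python
# Illustrative sketch (assuming ASE; not the actual inputs of this work):
# construct the Cd_0.875 V_0.125 Te supercell with the experimental
# lattice parameter quoted in the computational details.
from ase.build import bulk

a0 = 6.4815  # experimental CdTe lattice parameter (Angstrom)
primitive = bulk("CdTe", crystalstructure="zincblende", a=a0)
supercell = primitive * (2, 2, 2)  # 8 Cd + 8 Te atoms

# Replace one of the eight cations by V to reach the 12.5% doping level.
cd_sites = [atom.index for atom in supercell if atom.symbol == "Cd"]
supercell[cd_sites[0]].symbol = "V"

# The exchange coupling is then estimated from total-energy differences
# of collinear spin arrangements, Delta E = E_AFM - E_FM, obtained from
# separate hybrid-functional (HSE06, a_HSE = 0.32) runs; Delta E > 0
# signals a ferromagnetic ground state, as in the tables above.
```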
http://arxiv.org/abs/2312.16732v1
{ "authors": [ "Giuseppe Cuono", "Carmine Autieri", "Tomasz Dietl" ], "categories": [ "cond-mat.mtrl-sci", "cond-mat.str-el" ], "primary_category": "cond-mat.mtrl-sci", "published": "20231227221505", "title": "CdTe and HgTe doped with V, Cr, and Mn -- prospects for the quantum anomalous Hall effect" }
EasyView: Bringing Performance Profiles into Integrated Development Environments

Qidong Zhao, North Carolina State University, Raleigh, USA ([email protected])
Milind Chabbi, Scalable Machines Research, San Francisco, USA ([email protected])
Xu Liu, North Carolina State University, Raleigh, USA ([email protected])

January 14, 2024
==========================================================================================================================================================================================================================================================

Dynamic program performance analysis (also known as profiling) is well known for its powerful capabilities of identifying performance inefficiencies in software packages. Although a large number of profiling techniques have been developed in academia and industry, very few of them are widely used by software developers in their regular software development activities. There are three major reasons. First, profiling tools (also known as profilers) are disjoint from coding environments such as IDEs and editors; frequently switching focus between them significantly complicates the entire cycle of software development. Second, mastering various tools to interpret their analysis results requires substantial effort; even worse, many tools have their own designs of graphical user interfaces (GUIs) for data presentation, which steepens the learning curves. Third, most existing profilers expose few interfaces to support user-defined analysis, which makes the tools less customizable to fulfill diverse user demands. We develop EasyView, a general solution to integrate the interpretation and visualization of various profiling results into coding environments, which brings software developers closer to profilers during the code development cycle. The novelty of EasyView lies in its significant improvement of the usability of profilers. EasyView not only provides deep insights to support intuitive analysis and optimization in a simple interface, but also enhances user experiences in using profilers effectively and efficiently in IDEs. Our evaluation shows that EasyView is able to support various profilers for different languages and provide unique insights into performance inefficiencies in different domains. Our user studies show that EasyView can largely improve the usability of profilers in software development cycles by facilitating performance debugging efforts. Profiling, Software optimization, Performance measurement, Visualization, Tools. § INTRODUCTION Production software packages have become increasingly complex. They are comprised of large amounts of source code in different languages, sophisticated control and data flows, hierarchies of component libraries, and growing levels of abstraction. This complexity often introduces inefficiencies across the software stack, leading to resource wastage, performance degradation, and energy dissipation <cit.>. The sources of these inefficiencies are many: rigid abstraction boundaries, missed opportunities to optimize common cases, suboptimal algorithm design, inappropriate data structure selection, poor compiler code generation, and problematic software-hardware interactions. Program analysis plays an important role in understanding performance inefficiencies and guiding code optimization. There are two kinds of program performance analysis: static and dynamic.
Static analysis typically leverages compiler infrastructures to study program source code, bytecode, or binary code. Static analysis is adept at exploring performance inefficiencies via techniques such as common subexpression elimination <cit.>, value numbering <cit.>, and constant propagation <cit.>, among others. Orthogonal to static analysis is dynamic program analysis (aka profiling), which identifies program inefficiencies at runtime. Performance analysis tools (aka profilers) such as HPCToolkit <cit.>, VTune <cit.>, gprof <cit.>, pprof <cit.>, OProfile <cit.>, perf <cit.>, and many others monitor code execution to identify hot code regions, idle CPU cycles, arithmetic intensity, and cache misses, among others. Static analysis and dynamic analysis complement each other and are often used together for deep performance insights. While program performance analysis has been proven useful, having it widely adopted by software developers and continuously used in regular software development cycles is difficult. The main reasons include (1) mastering various program analysis tools requires steep learning curves, especially for tools with different features; (2) customizing the analysis to fulfill diverse needs is difficult; and (3) frequently switching focus between analysis tools and the coding environment can significantly distract developers and complicate development cycles <cit.>, especially when developers are immersed in mission-critical tasks <cit.>. To address these challenges, many research efforts, such as MagpieBridge <cit.>, IBM AppScan <cit.>, and Xanitizer <cit.>, aim to integrate program analysis into interactive development environments (IDEs) or editors. Most existing approaches focus only on static analysis, because static analysis is usually based on compilers, which hide most of the language details behind compiler front ends and intermediate representations. Moreover, the Language Server Protocol <cit.> has been proposed to easily integrate static analysis into IDEs and editors. In contrast, there is no systematic solution for enhancing the usability of profilers in development cycles. Today, profilers <cit.> are usually implemented as standalone tools with their own data collection, analysis, and visualization. These profilers target different programming languages (e.g., C/C++, Java, Go), different application domains (e.g., cloud, high performance computing, mobile devices), different performance inefficiencies (e.g., hotspots, insufficient parallelism, memory bottlenecks), and different insights (e.g., various metrics, data/control flows, profiling, and tracing). However, users face steep learning curves to master these profilers. For improvement, some profilers are integrated into popular IDEs, such as Visual Studio <cit.>, JetBrains products <cit.> (e.g., IntelliJ, Goland, CLion), and Eclipse <cit.>, for in-situ analysis and visualization <cit.>. However, existing IDE-based solutions fall short in three aspects, which prevents them from achieving high usability. (1) Existing solutions limit IDEs to supporting only a few profilers, so users may not be able to use their desired profilers. (2) Existing solutions do not take full advantage of profiler-IDE integration; one does not have the flexibility to extend and customize built-in profilers to improve the user experience.
(3) Existing approaches do not handle large profiles efficiently, so users may suffer from high latency in opening and exploring profiles. To fill this gap, we develop EasyView, which aims to improve the user experience of profiling tools by bringing profilers closer to software engineers for regular use in development cycles. EasyView makes the following contributions. * EasyView employs a generic solution. It unifies common features of mainstream profilers into a generic representation. As a result, EasyView supports a wide range of analyses and visualizations of profiles to fulfill the different needs of software engineers. * EasyView obtains deep insights. EasyView supports customizable interfaces to produce unique views that advance state-of-the-art analysis and visualization of performance data, which gives users the most flexibility to enjoy deep performance insights. * EasyView enhances user experiences. EasyView tightly integrates the analysis with Visual Studio Code <cit.>, supporting efficient profile analysis and visualization. Users can enjoy low response times and smooth interactions in exploring the profiling data together with the source code. We evaluate EasyView with both case and user studies. The case studies show that EasyView requires minimal coding effort to support different profilers and enables new performance analyses that provide unique insights for application optimization in various domains. Moreover, we show EasyView outperforms prior approaches in efficient data processing and visualization. The user studies with experienced software engineers show that EasyView, with tight integration into the coding environment, can largely facilitate code analysis in their daily development activities. We also show the effectiveness of EasyView by creating control groups to evaluate the user experiences with EasyView and with existing standalone or IDE-based solutions. Experimental results show that EasyView can significantly improve the usability of profilers for software engineers. § BACKGROUND AND RELATED WORK Data analysis and visualization are important components of performance engineering tools. There are typically three mechanisms to present the data to users: text format, dedicated graphical user interfaces (GUIs), and plugins in IDEs/editors. The text format is widely used in research tools <cit.> developed in academia, as it is easy to implement and evaluate. Some tools, such as perf <cit.>, pprof <cit.>, and Scalene <cit.>, improve on the text format via the Markdown format <cit.>, which can be visualized in a shell terminal or, after being translated to HTML, in a web browser. However, the text, Markdown, and HTML formats lack flexibility in analyzing the performance data, such as deriving new metrics, customizing views, and presenting large codebases in a scalable way. Instead, many research and commercial tools utilize dedicated GUIs to visualize their analysis results for more flexibility. For example, tools such as HPCToolkit <cit.>, VTune <cit.>, hotspot <cit.>, TAU <cit.>, and Oprofile <cit.> have their own GUIs written in Java or Qt. Google Cloud Profiler <cit.>, Perfetto <cit.>, SpeedScope <cit.>, Pyroscope <cit.>, gProfiler <cit.>, pprof <cit.>, and FlameScope <cit.> visualize the performance data interactively in web browsers. However, these dedicated data visualizations suffer from four weaknesses, which impede their wide adoption. First, installing these visualizers and learning their usage incurs extra overhead for programmers.
Second, according to prior studies <cit.>, the stand-alone GUIs require users to switch between the tool and code editors/IDEs, which significantly delays the development process. Third, these dedicated GUIs lack interoperability across profiles produced by different tools and are not designed to support customizable analysis. Fourth, some web-based approaches such as Pyroscope (i.e., flamegraph.com) require uploading the profiles to their servers, which raises security and privacy concerns. To address these limitations, EasyView adopts a plug-in approach in IDEs and code editors. Compared to other approaches, it provides in-situ visualization of performance in the same IDE where developers write, test, and debug code. In the rest of this section, we first introduce VSCode, on which EasyView is built as a plug-in, and then compare EasyView to existing approaches that integrate profiling into code editors/IDEs. §.§ VSCode Microsoft's Visual Studio Code (VSCode) <cit.> is a text editor with powerful IDE-like features. VSCode supports a wide range of programming languages and is highly customizable with various extensions, which are designed for both beginners and advanced programmers. VSCode has been a popular editor in the community. VSCode enjoys the following features: (1) VSCode is free to download and simple to install on most operating systems (e.g., Windows, macOS, Linux); (2) VSCode is easy to customize and supports many useful plug-in extensions. EasyView leverages these features of VSCode to enjoy general, extensible, and applicable analysis. Similar to VSCode, IDEs/editors such as JetBrains products, Atom, and Eclipse provide similar plug-in capabilities. §.§ Related Work on Program Analysis in IDEs While many performance analysis and visualization techniques have been developed, we only review the most related approaches, which tightly integrate program analysis into IDEs or code editors. Some existing approaches integrate static analysis into IDEs and code editors to improve usability. For example, the widely used Language Server Protocol (LSP) <cit.> defines the protocol between an IDE and a language server that provides language features like auto-completion, go-to-definition, and finding all references. With the support of LSP, tools such as MagpieBridge <cit.> integrate static analyses into IDEs. However, LSP is not designed for profilers; it does not handle the various formats and internals of program profiles. For profiling, almost all mainstream editors/IDEs have integrated profilers. JetBrains integrates Async-profiler for its IntelliJ IDEA <cit.>, Perf and DTrace for CLion <cit.>, and PProf for Goland <cit.>. For VSCode, a variety of profilers, such as VTune <cit.>, PProf <cit.>, and Austin <cit.>, implement their visualization interfaces as extensions for tight integration. Moreover, existing approaches <cit.> augment the source code views in IDEs by integrating profiling information. These existing approaches are limited to a few individual profilers, with no general and interoperable solutions. Moreover, these approaches simply show the traditional stand-alone views in IDEs, without fully taking advantage of the integration, such as various annotations on code and actions on profiles. In contrast, EasyView provides a systematic solution that goes beyond a simple improvement over some individual profilers. EasyView supports general, extensible, insightful, and efficient analysis across multiple profilers, which no existing approach easily obtains.
§ OVERVIEW AND SCOPE Figure <ref> overviews EasyView, which consists of a data abstraction interface, a data analysis engine, and a data visualization GUI. EasyView follows four design principles. * General. EasyView is not designed to support specific profilers. Instead, it aims to provide a general solution that widely supports a broad range of profilers. EasyView designs a general data representation, supports general analyses, and presents profiles in general views. * Customizable. EasyView is designed to be highly flexible. It supports customized or personalized analysis and visualization to better fulfill the diverse demands in different domains. This enables EasyView to integrate users' knowledge or various data mining and machine learning techniques to maximize insights. * Applicable. EasyView is implemented as an IDE/editor plug-in with web front-end techniques (e.g., TypeScript, JavaScript, WebAssembly), which is portable across different platforms such as Linux, Windows, and macOS. Furthermore, EasyView analyzes and visualizes data locally without uploading data to a remote server, which minimizes security and privacy concerns. * Efficient. EasyView minimizes the overhead of processing profiling data, so users have smooth experiences (i.e., low response times) when handling large profiles. Scope: EasyView focuses on analyzing and visualizing profile data, not on profile data collection. The goal of EasyView is not to replace existing profilers; instead, EasyView aims to improve the usability of most (if not all) existing profilers by bringing them closer to the development environment familiar to software developers, i.e., IDEs. IDEs can also easily support various performance tools via EasyView, not limited to their built-in profilers only. We elaborate on the different components of EasyView in the following sections. § EASYVIEW'S DATA ABSTRACTION INTERFACE With this component, EasyView abstracts the performance data collected from different tools into a unified representation. We first define a generic data representation that unifies common features of mainstream profilers and then develop a set of APIs to bind the generic data representation with existing and emerging profilers. §.§ Generic Data Representation To design the generic data representation, we study more than 50 profilers, which are both mainstream and state-of-the-art. These profilers cover different domains, such as high performance computing (e.g., HPCToolkit <cit.>, TAU <cit.>, ScoreP <cit.>, Caliper <cit.>), system and microarchitecture (e.g., Intel VTune <cit.>, ARM MAP <cit.>, AMD uProf <cit.>, NVIDIA Nsight Compute <cit.>, Linux Perf <cit.>, and Oprofile <cit.>), high-level languages, e.g., Python, Java, Go (Async-Profiler <cit.>, JXPerf <cit.>, PProf <cit.>, Scalene <cit.>), and fine-grained analysis (e.g., Valgrind <cit.>, CCTLib <cit.>, DrCCTProf <cit.>, Pin <cit.>, NVBit <cit.>). We identify the following common features owned by most profilers. * Profiling contexts: Profilers typically analyze and report the performance insights of code regions at the granularity of the entire program, functions, loops, basic blocks, or individual instructions, which are known as profiling contexts. * Metrics: Profilers always provide one or more metrics, such as time, cycles, memory consumption, cache misses, lock contention, and many others. These metrics are associated with profiling contexts and used to rank performance issues. * Call paths: Many profilers provide call paths (also known as calling contexts or backtraces) to give additional analysis insights. A calling context consists of a series of frames on the call stack at any given profiling context.
* Code mapping: In order to provide actionable optimization guidance, profilers map the analysis results to programs at the binary or source code level. The mapping requires instruction pointers, load modules and offsets, source code files and their paths, and line numbers. EasyView encodes these features into a generic data representation, as shown in Figure <ref>. All the monitoring points are organized into a compact calling context tree (CCT) by merging the common prefixes of their call paths, which minimizes the memory and disk storage for the profiles. For each monitoring point, EasyView maintains two pieces of information: a context and a metric list; the context points to the corresponding CCT node with the source code attribution (line location, function, and file), and the metric list points to the list of metrics associated with this monitoring point. EasyView's data representation, expressed in a Protocol Buffer schema <cit.>, supports all the aforementioned common features. Besides expressing these common features, EasyView's representation also supports advanced features owned by different tools. First, the profiling contexts represent not only the traditional code regions but also data objects, such as heap objects in their allocation call paths and static objects with their names in the symbol tables, so it can handle many memory profilers with data-centric analysis, such as Perf, ScaAnalyzer <cit.>, DrCCTProf <cit.>, Cheetah <cit.>, and MemProf <cit.>. Second, EasyView's representation is able to associate multiple metrics with a monitoring point. Moreover, EasyView can associate multiple contexts and monitoring points with a single metric. This capability is particularly useful for profilers that identify performance inefficiencies involving multiple contexts, for example, data reuse <cit.> with use and reuse contexts, computation redundancy <cit.> with redundant and killing contexts, data races with two memory access contexts, and false sharing <cit.> with memory accesses ping-ponging between two contexts. We show how these features support powerful analysis in Section <ref>. §.§ Data Binding EasyView binds its representation with the performance data produced by various tools. However, developing a general solution that fits most existing profilers is challenging. On the one hand, tools usually have their own data formats based on binary (e.g., PProf, Perf, HPCToolkit) or JSON (e.g., Chrome Profiler) encodings. On the other hand, different languages are used to develop profilers; for example, Perf uses C and PProf uses Go. These diverse formats and languages complicate the applicability of EasyView's representation. To address these challenges, EasyView employs a data builder, which (1) derives simple high-level APIs from the functions generated by Protocol Buffers and (2) binds the high-level APIs to different languages, such as C, C++, Python, and Go, to name a few. Existing profilers use the data builder in two ways. On the one hand, profilers can directly leverage the data builder to output data in EasyView's representation. Examples include some open-source tools, such as DrCCTProf <cit.> and JXPerf <cit.>. On the other hand, EasyView provides a format converter atop the data builder, which translates the outputs of existing profilers to EasyView's representation. This mechanism avoids major changes to existing profilers, which broadens the support for a wide range of existing profilers. Currently, EasyView's format converter supports PProf <cit.>, Perf <cit.>, Cloud Profiler <cit.>, Scalene <cit.>, Chrome profiler <cit.>, HPCToolkit <cit.>, TAU <cit.>, and pyinstrument <cit.>; the list is expanding rapidly.
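To make the binding concrete, the sketch below distills the core step that both the data builder and the format converter perform: folding raw call paths into a CCT whose common prefixes are merged, with metrics attached to the terminal contexts. The input format (semicolon-separated collapsed stacks) and all names are illustrative assumptions, not the actual APIs.

```python
# Minimal, self-contained sketch of folding call-path samples into a
# calling context tree (CCT); names and input format are hypothetical.
class CCTNode:
    def __init__(self, frame):
        self.frame = frame      # function name or code location
        self.children = {}      # frame -> CCTNode
        self.metrics = {}       # metric name -> accumulated value

    def child(self, frame):
        return self.children.setdefault(frame, CCTNode(frame))

def build_cct(samples):
    """samples: iterable of ('main;foo;bar', {'cycles': 120}) pairs."""
    root = CCTNode("<root>")
    for call_path, metrics in samples:
        node = root
        for frame in call_path.split(";"):  # merging common prefixes
            node = node.child(frame)
        for name, value in metrics.items():
            node.metrics[name] = node.metrics.get(name, 0) + value
    return root

root = build_cct([("main;foo;bar", {"cycles": 120}),
                  ("main;foo;baz", {"cycles": 80})])
print(sorted(root.children["main"].children["foo"].children))  # ['bar', 'baz']
```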
We show in Section <ref> that using EasyView's data builder, either directly or for conversion, requires minimal engineering effort. §.§ Data Translation Framework We develop a compiler-like framework in EasyView to translate any profile data into EasyView's format. Figure <ref> overviews this framework. Like a three-pass compiler, EasyView's data translation framework consists of three components: the front, middle, and back ends. Moreover, an intermediate representation (IR) is developed to communicate across the different components. In the rest of this section, we elaborate on each component of EasyView's data translation framework. Front end The front end accepts the analysis data produced by various profilers and converts them into an intermediate representation (IR) according to our generic profiling format for further processing. EasyView adopts a structural IR, which is based on the calling context tree (CCT) <cit.>. The CCT is widely used by profilers. A CCT is rooted at the process or thread main function; all the leaf nodes represent the profiling contexts and the internal nodes are frames in call paths; the common prefixes of call paths are merged to give a compact structure. Metrics are associated with each CCT node. Unlike a traditional CCT, EasyView's IR allows the manipulation of the CCT during its building process, such as correlating multiple call paths. The front end provides two mechanisms to produce the IR. On the one hand, EasyView provides a set of APIs that can be integrated into existing profiling frameworks directly. Table <ref> shows these APIs, which have different language bindings, such as C, C++, and Python, to name a few. Currently, some open-source tools, such as DrCCTProf and JXPerf, employ these APIs. On the other hand, EasyView provides a format converter, which translates the outputs of existing profilers to our IR. This avoids changes to existing profilers, which broadens the support for a wide range of existing profilers. Currently, EasyView's format converter supports PProf <cit.>, Perf <cit.>, HPCToolkit <cit.>, TAU <cit.>, and pyinstrument <cit.>, to name a few. Middle end The middle end accepts the profiling IR and performs various analyses with customizable functions. EasyView supports multi-scale analyses on three levels, which are all based on the tree-structured IR. * Analyzing a single profile: EasyView supports some basic operations on a single CCT. These operations include (1) deriving new metrics, (2) collapsing deep/recursive call paths, (3) pruning insignificant tree nodes, and (4) ranking metrics and highlighting call paths. * Analyzing multiple profiles produced by a single tool: EasyView provides customizable APIs to analyze multiple profiles produced by the same tool. This is particularly useful for analyzing a program running with multiple threads/processes or in different execution configurations. EasyView supports two typical operations, aggregation and differentiation, which are illustrated in Figure <ref>. The aggregation operation merges the profiles by constructing an aggregate tree; derived metrics are associated with the nodes in the aggregate tree. For example, EasyView aggregates the profiles produced by different threads or processes and computes statistics, such as the summation, min/max, and standard deviation of metrics in any call path, which are useful to identify, model, and predict performance inefficiencies. The differentiation operation differentiates two profiles, which is known as differential analysis <cit.>, widely used in analyzing architecture-related inefficiencies such as poor scalability and data locality. * Analyzing multiple profiles produced by different tools: EasyView supports the correlation operation to link the profiling contexts and call paths from different profiles. This analysis gives unique insight into a program by combining different analyses from multiple tools. EasyView provides customized APIs to specify how to correlate different profiles, such as by the same profiling contexts or the same call paths, as shown in Figure <ref>. Back end The back end outputs the IR into a single binary file, which self-contains all the necessary information for visualization. EasyView keeps the file compact to minimize the disk overhead. § EASYVIEW'S DATA ANALYSIS ENGINE In this component, EasyView analyzes the performance data in its representation. To enable broad applicability, the analysis engine is implemented with pure web front-end techniques (i.e., JavaScript, TypeScript), so we can easily deploy the analysis together with the GUI in VSCode, with no need for additional software installation. In the rest of this section, we describe EasyView's engine for both general and customized analyses. §.§ General Data Analysis As shown in Figure <ref>, EasyView represents the performance data as a tree data structure, so EasyView operates on the tree for several common analyses, which are backward compatible with existing profilers. Tree traversal EasyView supports basic tree traversal operations, which iterate over all tree nodes in different orders. Associated with the tree traversal, EasyView performs the corresponding analyses, such as computing inclusive/exclusive metrics, collapsing deep and recursive call paths, and pruning insignificant tree nodes. Tree transformation EasyView transforms the tree into top-down, bottom-up, and flat shapes for more insights. The top-down tree, rooted at the entry function (e.g., main or thread_main) with callees as children, shows how the metrics distribute along the call paths. The bottom-up tree, which reverses the top-down tree, shows the hot functions called in various call paths.
The flat tree, regardless of call paths, shows the hot load modules (e.g., shared libraries), files, and functions. EasyView visualizes all three shapes of the tree to enable powerful analysis. Operations across multiple profiles EasyView is able to analyze multiple profiles by managing multiple tree structures, which is particularly useful for tools that produce separate profiles for different threads, processes, or executions. EasyView supports two basic operations: aggregation and differentiation. The aggregation operation merges the profiles by constructing a unified tree and deriving statistical metrics associated with each node in the unified tree. EasyView maintains the metrics from all the profiles and predefines operators to derive metrics from them, such as the sum, min, max, and mean across different profiles. The aggregation enables EasyView to correlate multiple profiles and show a compact view. The differentiation operation quantifies the difference between two profiles collected in two different executions, which provides unique insights <cit.>, such as into scaling losses and resource contention. The differentiation operation is similar to the aggregation operation. By default, two nodes are differentiable if all their parents (ancestors) are differentiable. Unlike existing approaches <cit.>, EasyView shows detailed differences in both tree nodes and metrics in all of the top-down, bottom-up, and flat trees. §.§ Customized Data Analysis To allow extensions to the data analysis, EasyView exposes a programmable interface, as a programming pane in the GUI, for users to customize the analysis. Based on Python-WASM <cit.>, users write Python code in the pane to customize the profile data, which is translated to WebAssembly for direct execution in EasyView. It is worth noting that EasyView does not require any additional software installation or server setup to enable the customizable analysis. To minimize the manual effort of devising customized analyses, EasyView triggers user-defined callback functions in the tree operations defined in Section <ref>. There are mainly two types of callback functions. * Callbacks at node visits. Upon visiting each node during tree traversal or transformation, EasyView triggers a callback for users to define how to process the current node. For example, users can decide to merge two nodes if they are mapped to the same source code line. Moreover, users can elide any nodes in the tree that are not of interest. * Callbacks at metric computation. EasyView triggers a callback to allow users to define any formula to derive new metrics. For example, users can compute cycles per instruction, cache misses per thousand instructions, and many others by specifying the corresponding formulae. Moreover, users can use division instead of subtraction to derive differential metrics, which is used to measure memory scaling <cit.>. EasyView exposes traversals over the internal tree, so users can manipulate the profile by accessing any nodes and metrics. One can easily integrate further data mining or machine learning techniques to analyze the profiles.
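For illustration, a metric-computation callback of the kind described above might look like the sketch below; the callback signature and the registration call are hypothetical, while the derived metric, cycles per instruction, is one of the examples mentioned in the text.

```python
# Hypothetical user-defined callback; the engine would invoke it for each
# tree node during metric computation. Only the formula is user-specific.
def derive_cpi(node_metrics: dict) -> None:
    """Attach a derived CPI metric when its input metrics are present."""
    cycles = node_metrics.get("cycles", 0)
    insts = node_metrics.get("instructions", 0)
    if insts > 0:
        node_metrics["CPI"] = cycles / insts

# Hypothetical registration in the programming pane:
# engine.on_metric_computation(derive_cpi)
```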
§.§ Optimization for Efficiency For efficient data analytics, EasyView adopts WebAssembly and WebGL extensively with thorough optimization. EasyView manages the memory manually to avoid frequent invocations of the garbage collector. Moreover, EasyView avoids unnecessary data movement and computation in transforming or traversing the trees. In Section <ref>, we show that EasyView significantly outperforms existing approaches in response time on large profiles. § EASYVIEW'S VISUALIZATION INTERFACES EasyView visualizes the profiles produced by its analysis engine. EasyView supports generic views integrated with IDE-specific features to present the profiles. Like LSP <cit.>, EasyView defines a set of activities that correlate views with source code in any IDE. §.§ Customizable Views Generic views The default views of EasyView are flame graphs <cit.>, which are the de facto view for performance data widely used by the community. Figure <ref> shows a typical flame graph produced by EasyView based on the top-down tree produced by the analysis engine, which is called the top-down view. The flame graph succinctly represents the tree structure maintained by EasyView: the root represents the program entrance (i.e., the process or thread start); the nodes underneath represent the call paths; the length of each node denotes the inclusive metric value. Besides the top-down flame graph, EasyView also presents two other variants. * Bottom-up flame graph. Based on the bottom-up tree, EasyView constructs the bottom-up flame graph, which reverses the call paths to have callees as parents and callers as children. EasyView computes both the inclusive and exclusive metrics and shows them in two flame graphs. The bottom-up view is particularly useful to identify hot functions and understand where they are called from. * Flat flame graph. Based on the flat tree, EasyView elides all the call path information to provide a flat view.
The flame graph is organized in a hierarchy as follows: the entire program, load modules (the executable binary and shared libraries), files, and functions. Similar to the bottom-up flame graph, EasyView also shows inclusive and exclusive metrics in two flat flame graphs. This flat view can highlight the hot shared libraries, files, and functions for optimization. Moreover, all the flame graphs are searchable. One can highlight any function in the flame graph by searching for the function name. While the design of these flame graphs is not new, EasyView supports them for backward compatibility with existing approaches. Thus, EasyView can attract users who are used to existing profilers. Advanced views with customizable flame graphs Users can easily customize the flame graphs shown in EasyView to better visualize the profiles. Currently, EasyView supports three advanced flame graphs that (1) correlate multiple profiles, (2) aggregate multiple profiles, and (3) differentiate two profiles. Compared to existing approaches, these advanced views are novel in visualizing data in a more intuitive way. First, EasyView, with the support of the profile representation, can correlate contexts across multiple profiles. EasyView presents this correlation in a flame graph variant. Figure <ref> shows an example, which correlates the context of a memory allocation in a program with all call paths in which the program accesses this memory. This visualization advances the solution used by state-of-the-art memory profilers of similar functionality <cit.>. Section <ref> shows a use case of this advanced view. Second, EasyView visualizes the metric distribution from multiple profiles in an aggregate view. For any context in the aggregate profile, EasyView attaches a histogram to show all the metrics of the same context from different profiles. This view is particularly useful to investigate the behavior across different threads/processes or across different runs. We show a case study with this advanced view in Section <ref>. Third, the differential view is a special case of the aggregate view. EasyView compares two profiles in a differential flame graph, which is particularly useful to quantify the performance impact of code or execution parameter changes. Compared to existing approaches <cit.>, which only differentiate top-down flame graphs and use colors to provide a qualitative view, EasyView's differential flame graphs provide more insights into all three types (i.e., top-down, bottom-up, and flat) of flame graphs and quantify the differences. As shown in Figure <ref>, EasyView's differential flame graph shows four tags to compare profiles 𝒫_1 and 𝒫_2. [A] marks contexts newly added in 𝒫_2 that do not exist in 𝒫_1; [D] marks contexts deleted in 𝒫_2 that exist in 𝒫_1. The prefixes [+] and [-] mean that the context exists in both 𝒫_1 and 𝒫_2; [+] means the metric associated with the context is larger in 𝒫_2 compared to 𝒫_1, while [-] means the metric associated with the context is smaller in 𝒫_2 than in 𝒫_1. Figure <ref> shows an example of differential profiles collected by Async-Profiler <cit.> for Spark <cit.> running with Spark-Bench <cit.>. 𝒫_1 uses the RDD APIs <cit.> and 𝒫_2 uses the SQL Dataset APIs <cit.>. From the figure, we can clearly see that the SQL Dataset APIs outperform the RDD APIs. The flame graph shows that the performance gains come from using an efficient SQL engine and bypassing the costly data shuffle in the RDD APIs. Other views Besides flame graphs, EasyView also supports the tree table view, which is another mainstream view in many profilers, such as VTune, HPCToolkit, and TAU.
§.§ IDE-enhanced Views

Inspired by LSP, EasyView makes a first effort at defining a set of actions to annotate source code with profiling data shown in IDEs, which significantly improves user experiences.

Mandatory actions: Code link is the only mandatory action required by EasyView, which links the profile with the source code. By clicking a block in the flame graph or a frame in the tree table, the IDE can open the corresponding source code file, jump to the line, and highlight it with a background color if the line mapping information is available in the profile.

Optional actions: Several actions are not necessary for EasyView to integrate the profiles within the IDEs, but can help users interpret the profiles.

* Color semantics. EasyView can define different colors based on different properties of the source code. EasyView's flame graphs can use different colors to represent profiles from different files or libraries and use different darkness to represent the availability of source line mapping.

* Code lens. EasyView can provide additional insights with code lenses, which are annotations above (or below) source code statements. EasyView uses the code lens to show the assembly instructions associated with the source code statement (if the profile provides this information) and the metric values, as some profilers are designed for compiler developers to collect and maintain such assembly-level information.

* Hovers. EasyView can generate optimization tips and show them in hovers. Hovers can be associated with individual source lines and pop up when the mouse cursor is placed on the code. While EasyView currently only shows all metric values associated with the selected line in the hovers, it opens an interface to record any advanced analysis results and show optimization guidance from user-defined analyses.

* Floating windows. EasyView can open a floating window in the source code pane to summarize the entire profile. Unlike hovers, floating windows provide a global summary of the entire program.

These actions are generally supported by mainstream IDEs. We implement EasyView as an extension of Microsoft Visual Studio Code. EasyView can also be easily integrated into JetBrains products with its platform SDK <cit.>.

§ IMPLEMENTATION OF THE PROFILING PROTOCOL

Figure <ref> shows all the components of EasyView. The data format is expressed with Google's Protocol Buffers <cit.>, which can automatically generate a set of low-level APIs with different language bindings to produce data in the format. The format-building APIs leverage these low-level APIs to build the IR. A VSCode plug-in is implemented to visualize the profiling data. The implementation challenge mainly resides in efficiently processing voluminous data. EasyView needs to handle computation efficiently to minimize the user's waiting and interaction time in the GUI. To reduce the interaction latency, we precompute all the necessary data before showing them in the GUI. EasyView divides the computation tasks between the data translation framework and the visualization interface.

Computation in the data translation framework: The data translation framework provides more flexibility and power in computation.
Thus, the data translation framework performs most of the unbounded computation, whose amount cannot be estimated offline. The following computations are performed in this component.

* Merging profiles from different threads and processes, different CPUs and GPUs, and different tools.

* Mining the profiles to analyze or predict performance bugs.

* Supporting customized functions to manipulate the performance data, such as training a model to automatically identify performance bugs.

Since EasyView's data translation framework provides APIs with different language bindings, one can choose different strategies depending on the needs. For example, one can use the Python APIs to easily process the data with the Pandas framework, or use the C/C++ APIs to enjoy high performance and parallel computing support (multithreading, multiprocessing, GPUs).

Computation in the visualization interface: The visualization interface reads the processed data from the translation framework for visualization. Specifically, the visualization interface constructs the tree structure for the basic top-down view in both the flame graph and the tree view. Moreover, the visualization interface computes all the necessary data to support derived views in flame graphs, such as the bottom-up view, flat view, and differential view. It is worth noting that these computations are bounded, with a constant time complexity over the tree traversal. However, one challenge is that the tree produced by the data translation framework can consist of a large number of nodes. For example, profiling a complex Go program can result in 1.8 million nodes in the tree. We find that using default TypeScript (a typed variant of JavaScript) can incur a significant delay or even crash when visualizing the data, mainly because of frequent GC (garbage collection) invocations. Thus, we use WebAssembly to accelerate the processing without breaking the portability of the visualization interface. We will compare the latency (i.e., user experience) with other state-of-the-art tools in Section <ref>.

§.§ Visualizing voluminous data

Visualizing a large amount of data can incur two issues. First, rendering a large amount of data can incur significant delays. Second, it is challenging to render a large amount of data with a limited number of pixels on a non-dedicated monitor (e.g., laptop and desktop).

§ EVALUATION

We evaluate EasyView on the following fronts: (1) the effort of using EasyView's profile representation (i.e., programmability), (2) the response time of EasyView (i.e., efficiency), (3) the insights EasyView can provide to optimize programs with mainstream profilers in multiple domains (i.e., effectiveness), and (4) the user experience of using EasyView (i.e., user studies).

§.§ Programmability of EasyView

Since EasyView is designed to support generic visualization for existing profilers, we evaluate the programming effort of generating EasyView's data representation. We quantify the lines of code needed for existing profilers to produce EasyView's data representation. There are three methods to adapt a profiler to support EasyView: (1) using EasyView's APIs to directly output the format used by EasyView (e.g., DrCCTProf and JXPerf), (2) converting to EasyView's data representation (e.g., HPCToolkit, TAU, PyInstrument), or (3) using the PProf data format, which is a subset of EasyView's representation in Protocol Buffers (e.g., perf, PProf, Cloud Profiler).
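Method (2) is essentially a tree-building exercise over the original tool's flat records. The sketch below folds flat (call path, metric) samples into a nested top-down tree, which is the shape of the IR; the Node class and function names are hypothetical stand-ins for the APIs generated from the Protocol Buffers schema.

[language=python]
class Node:
    """Hypothetical tree node; in EasyView the IR is built through
    protobuf-generated messages rather than plain Python objects."""
    def __init__(self, name):
        self.name = name
        self.inclusive = 0.0   # metric including all children
        self.children = {}

def build_top_down(samples):
    """samples: iterable of (call_path_tuple, metric) parsed from
    the original profiler's output format."""
    root = Node("<program root>")
    for path, metric in samples:
        root.inclusive += metric
        node = root
        for frame in path:
            node = node.children.setdefault(frame, Node(frame))
            node.inclusive += metric
    return root

root = build_top_down([(("main", "f", "g"), 3.0), (("main", "f"), 1.0)])
print(root.children["main"].children["f"].inclusive)  # 4.0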
We find that changing tools to directly output EasyView's format requires less than 20 lines of code (C++ for DrCCTProf and Python for JXPerf); converting to EasyView's format requires less than 200 lines of code (e.g., C for Perf and Python for TAU/HPCToolkit), most of which are used to parse the original profile formats.

§.§ Efficiency of EasyView

We measure the response time, which is defined as the end-to-end time for EasyView to open a profile, including data processing (e.g., creating trees and computing metrics) and data visualization (e.g., rendering flame graphs). We glean real profiles collected by the PProf data collector for industrial production software packages. The profile size ranges from ∼1MB to ∼1GB. To be fair, we have all the tools generate a top-down flame graph. Figure <ref> shows the comparison among EasyView, the default PProf, and GoLand's PProf plugin. We can see that EasyView is much more efficient than the others, especially for handling large profiles.

§.§ Effectiveness of EasyView

We evaluate the effectiveness of EasyView by demonstrating how it can help obtain deep insights with profilers in various domains. Several case studies are performed by two graduate students. These students know various profiling techniques but are not familiar with EasyView or the applications under study. We show the case studies in both cloud and HPC domains, with mainstream profilers and representative workloads. We show how EasyView, with its unique capabilities, gives better optimization guidance. We also compare the usability of EasyView and the profilers' default GUIs to show the advantages of EasyView.

§.§.§ EasyView in the Cloud Domain

We study Go, one of the most popular programming languages in the cloud. The profiler used in this study is PProf <cit.>, which is the de facto profiler for measuring Go programs. We show that EasyView can be easily integrated into the profiling workflow to facilitate the analysis efforts. We study rpcx-benchmark <cit.>, a popular test suite for gRPC <cit.> written in Go. This application uses the server-client model to simulate high concurrency in practice. In this study, we show the profiling results in the clients. We follow the common usage of memory profiling in PProf to pinpoint potential memory leaks <cit.>. We use PProf to periodically (i.e., every 0.1 second) capture a memory snapshot as a profile during the execution. Each snapshot shows the active memory usage in the allocation call paths. By analyzing the active memory consumption in each snapshot along the timeline, we can identify patterns that are due to potential memory leaks. PProf only identifies suspicious leaks, which require further investigation to confirm. EasyView can analyze all the snapshots by aggregating them (with the technique described in Section <ref>), as shown in Figure <ref>. In this figure, the top two panes show the source code as well as the metrics in code lenses and hovers supported by EasyView; the bottom pane shows the profiles in flame graphs. We show the top-down flame graph, but one can also select the bottom-up and flat flame graphs. One can right-click any part of the flame graph to associate the clicked frame with the source code. The top right of the bottom pane shows the metric, which is the total allocated bytes in the profile. EasyView can clearly highlight the hot memory allocations at bufio.NewReaderSize and transport.newBufWriter, which are invoked when creating new HTTP clients (observed from the call path). When clicking the frame of transport.newBufWriter in the flame graph, a hover pops up to show the histogram of active memory usage across different snapshots along the timeline.
This histogram shows a pattern: the active memory in this call path is continuously high with no clear sign of reclamation. According to prior studies, this pattern raises a warning of potential memory leaks. Similarly, bufio.NewReaderSize also suffers from potential memory leaks. These memory leaks can potentially be caused by the clients not closing the connections in time. We have reported this finding to the application developers for further investigation. In contrast, the histogram in the right hover indicates no memory leaks in function passthrough, as the active memory usage is diminishing at the end of the program execution.

Comparing to off-the-shelf PProf: We obtain a similar analysis with the built-in PProf analysis. While PProf can collect and visualize profiles, it does not support automatic analysis across multiple profiles. Thus, one needs to write scripts to extract the data and manually plot the histogram of active memory usage for a given allocation context. In contrast, EasyView significantly facilitates these efforts with its powerful analysis and visualization interfaces.

§.§.§ EasyView in the HPC Domain

In the HPC domain, we show how EasyView combines the outputs of two mainstream profilers to analyze LULESH <cit.>, a proxy application written in C++ developed by Lawrence Livermore National Laboratory to solve the Sedov blast wave problem for one material in three dimensions. The two profilers are HPCToolkit <cit.> and DrCCTProf <cit.>, which are state-of-the-art profilers for HPC applications.

* HPCToolkit: Supported by the U.S. Department of Energy Exascale Computing Project, HPCToolkit is a state-of-the-art profiler that has been deployed on the nation's largest supercomputers. By using statistical sampling of timers and hardware performance counters, HPCToolkit identifies program hotspots, resource consumption, and inefficiencies, incurring low overhead (<5%).

* DrCCTProf is a state-of-the-art fine-grained profiler that monitors binary instruction execution to obtain microscopic insights, such as instruction operators, operands, and values. A variety of clients <cit.> atop DrCCTProf have been developed to identify computation redundancies and data locality issues in HPC applications.

We use them for complementary insights: HPCToolkit pinpoints hotspots that are worthy of further investigation, while DrCCTProf identifies some root causes of inefficiencies. We show that EasyView is able to combine the two independent profiles in a unified view for more intuitive analyses. EasyView first shows the flame graphs for the hotspot analysis measured by HPCToolkit. Figure <ref> is a bottom-up flame graph, which shows the hot leaf functions in reversed call paths. It is straightforward to see that function brk from the libc.so library is the hotspot, which is called in multiple call paths. With further investigation of these reversed call paths and their attribution to the source code, we can see that the hotspot is rooted in memory management (i.e., memory allocation and release). We replace the default memory management in libc with TCMalloc <cit.>, a more efficient solution, which yields a 30% speedup for the entire program. The top-down flame graph (not shown in the paper) highlights the function CalcVolumeForceForElems and its callee CalcHourglassForceForElems as hotspots. To understand the root causes, we investigate the profile produced by DrCCTProf. DrCCTProf measures memory reuses <cit.> to quantify the data locality in LULESH.
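Memory reuse can be extracted from an address trace by pairing each access with the previous access to the same location; the sketch below is a minimal illustration of that pairing (and of the reuse distance defined later in this section), not DrCCTProf's actual instrumentation.

[language=python]
def reuse_pairs(trace):
    """trace: list of (address, context) tuples in program order.
    Yields (use_context, reuse_context, reuse_distance), where the
    distance counts distinct addresses touched between the two accesses."""
    last_seen = {}  # address -> (index, context)
    for i, (addr, ctx) in enumerate(trace):
        if addr in last_seen:
            j, use_ctx = last_seen[addr]
            between = {a for a, _ in trace[j + 1:i]}  # distinct addrs in between
            yield use_ctx, ctx, len(between)
        last_seen[addr] = (i, ctx)

trace = [(0x10, "A"), (0x20, "B"), (0x30, "C"), (0x10, "D")]
print(list(reuse_pairs(trace)))  # [('A', 'D', 2)]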
As shown in Figure <ref>, three flame graphs are correlated in EasyView: the left one shows all the array allocations; when selecting one array allocation (1), the middle one appears to show all the uses of this array. When selecting a use (2), the right one appears to show all the reuses following this use. The metric is the memory bytes allocated for the left flame graph and the occurrences of memory accesses for the other two flame graphs. Figure <ref> shows a reuse tuple related to the hot function CalcHourglassForceForElems. EasyView correlates the three flame graphs and clearly shows the call paths, which can easily guide locality optimization: hoisting the use and reuse code to the least common ancestor of the call paths and performing loop fusion. The optimization yields an additional 28% speedup for the entire LULESH program.

Comparing to off-the-shelf HPCToolkit and DrCCTProf: We find that their original GUIs are not as straightforward as EasyView and require substantial effort to learn and use to obtain the same insights. Moreover, these two tools have their own GUIs that are separated from the IDEs and cannot easily combine their profiles in a unified view for easy analysis. In most cases, performance issues with HPC programs cannot be discovered by analyzing general metrics. We need to collect specific metrics, and these metrics are often associated with multiple program execution contexts, so they cannot easily be shown through a general view. Reuse distance is a metric representing the number of distinct memory references between two memory references to the same location. The metric is associated with three runtime contexts: the memory allocation context, the context of the first memory access, and the context of the second access to the same location. A traditional profiler merges the call paths of these three contexts and uses one tree table to show the profile. The result is a big tree with many redundant nodes, which makes it difficult to find performance issues. In this case, we use DrCCTProf's client ArmReuse to profile LULESH and obtain a profile in EasyView's format. The samples in the profile use two metrics to store the allocation context and the first memory access context. EasyView detects the special metrics and draws three flame graphs for the profile. Figure <ref> shows the LULESH reuse-distance profile.

§.§ User Studies

To give a fair evaluation of how EasyView helps software engineers analyze their software, we perform a user study in a broader group beyond the graduate students. We released EasyView in the VSCode marketplace and recorded a few short tutorial videos to help users install and use EasyView. We sent emails to the mailing lists and posted on the technical forums to recruit potential users. We invite programmers with at least one year of programming experience to use EasyView, regardless of their experience in code profiling and optimization. After trying out EasyView, users are invited to fill out an anonymous survey form. As of the paper submission, we found 216 installations of EasyView recorded in the VSCode marketplace and received 26 completed forms. We show the statistics of the survey as follows. We first let the survey participants identify their own expertise in code profiling and optimization to ensure this survey covers programmers of different levels. Among them, 53.8% actively tune code for high performance.
There are 34.6%, 19.2%, and 46.2% of participants using profilers weekly, monthly, and very rarely, respectively. Among all the participants, 92.3% agree that EasyView, by integrating profile analysis and visualization within VSCode, is effective in facilitating performance analysis. It is worth noting that even participants with limited knowledge of performance analysis agree EasyView is useful, which flattens their learning curves for performance analysis. 7.7% of participants question the effectiveness of EasyView; they are not VSCode users, as they suggest EasyView support other IDEs besides VSCode. We further ask the participants to quantify the performance improvement of their code guided by EasyView. 80.8% of participants improved their code performance with EasyView's help. The speedups range from 1.03× to 3× from optimizing hotspots, memory, and mutexes. The participants who did not obtain performance improvements also admit that EasyView helps pinpoint performance inefficiencies worthy of further investigation. Our user studies also show the comparison among different views in Figure <ref>. Overall, flame graphs are more effective than tree tables (92.3% vs. 84.6%). Among all of the views, the top-down views are the most helpful, compared to the bottom-up and flat views in both flame graphs and tree tables. These insights guide the evolution of EasyView in two directions: (1) deriving various analyses based on the top-down view and (2) demonstrating more use cases for bottom-up and flat views to enable their wide adoption.

Control group evaluation. To further compare EasyView with existing approaches, we create experimental and control groups to analyze the performance data collected by PProf. The experimental group (7 people) uses EasyView, while of the two control groups, 7 people use the default PProf visualizer and 7 people use JetBrains' GoLand PProf plugin. Each group mixes newcomers and experienced engineers in performance engineering, all of whom are trained with basic knowledge about flame graphs <cit.>. All the groups are given the same set of profiles and are asked to perform three tasks. Task I: pinpointing hotspot functions in their calling contexts for high CPU and memory consumption; this is a use case for top-down flame graphs. Task II: identifying hot memory allocations, garbage collection invocations, and lock waits, and figuring out where they are called from; this is a use case for bottom-up flame graphs. Task III: identifying the memory leak as described in Section <ref>; this evaluates handling multiple profiles. The observations are as follows.

* For Task I, the experimental group uses ∼10 min on average to investigate all the profiles. The control group using GoLand spends a longer time (∼15 min). Although the top-down flame graphs produced by EasyView and GoLand are similar, GoLand requires much more time to open and navigate large profiles. The control group using default PProf requires the most time (∼30 min) to complete the task, mainly because, unlike EasyView and GoLand, PProf requires manually correlating profiles with source code.

* For Task II, the experimental group uses ∼10 min on average to complete. The control group using GoLand needs ∼1 hour, as GoLand does not provide bottom-up flame graphs for analysis; instead, it provides a bottom-up tree table, which requires more learning time. The control group using PProf needs more than 3 hours to complete the task because PProf does not provide any bottom-up view but requires tedious manual analysis.

* For Task III, the experimental group uses ∼10 min to complete.
Both control groups cannot complete the task within 3 hours. This is because PProf and GoLand do not provide an easy way to analyze multiple profiles. One needs to either analyze multiple profiles manually or devise a script for automatic analysis, both of which add significant burdens to users. In summary, EasyView significantly flattens the learning curves for various performance analyses. Because EasyView integrates these analyses in a consistent flame graph view, one needs minimal effort to master it.

§ THREATS TO VALIDITY

Like all existing approaches based on empirical and user studies, our research suffers from internal and external threats.

Internal threats: We face internal threats in selecting representative survey participants, which may result in biased results. The participants may have different levels of knowledge in performance tuning with profilers, different levels of expertise in the use of Microsoft VSCode, and different levels of familiarity with flame graphs or tree tables. To obtain representative results, we take three measures to minimize the internal threats. First, we disseminate our surveys via multiple channels (e.g., mailing lists, tech forums, and community Slack/Discord) to invite a relatively large number of participants. Second, we pre-record several short videos to demonstrate the usage of EasyView. Users can use these videos as tutorials to gain basic knowledge of EasyView. Third, we actively answer questions raised by the users to ensure they can use EasyView correctly and efficiently. In our survey, we explicitly ask about the users' programming knowledge and find that our survey is representative, covering programmers of different levels. Another internal threat is related to the claim that EasyView provides general support for profilers. We are not able to study all profilers in practice, but we guarantee our claim applies to the popular and mainstream profilers as described in Section <ref>.

External threats: We identify one potential external threat in that we use a Google form to maintain all the survey questions. Users of EasyView may not actively fill out the form, as participating in the survey can distract them from their daily work. To increase the response ratio, we highlight the survey link on EasyView's webpage in the VSCode marketplace. Moreover, EasyView is currently released as an extension of VSCode only. It excludes some users who actively use other IDEs, such as IntelliJ or Eclipse. Supporting other popular IDEs is under development.

§ CONCLUSIONS

In this paper, we describe EasyView, which brings performance profiles into IDEs to facilitate performance analysis for both deeper insights and better user experiences. EasyView employs a generic profiling data representation that covers all basic features and many advanced features of mainstream profilers. Moreover, EasyView unifies the data analysis with predefined schemes as well as customized schemes for one or multiple profiles. EasyView visualizes the performance data in variants of flame graphs and tree tables, which are tightly integrated into IDEs with low response time. With our empirical studies, we show EasyView provides simple APIs to support various tools and deep insights to analyze workloads in different domains. We further leverage survey forms for user studies. A majority of the participants agree that EasyView can significantly help their efforts in understanding program performance. Many users have obtained nontrivial speedups with the optimizations guided by EasyView. Our controlled experiments show that EasyView significantly outperforms the state of the art in interpreting profiling data.
§ ACKNOWLEDGEMENTS

We thank the anonymous reviewers for their valuable comments. This research is partially supported by NSF CNS 2050007 and a Google gift.

§ ARTIFACT APPENDIX

§.§ Abstract

The provided artifact encompasses an extension of Visual Studio Code, specifically engineered for the purpose of visualization evaluation. Additionally, it includes a binary executable file used to evaluate response time and a Python script for the profile converter. The artifact is published on Zenodo <cit.>.

§.§ Artifact check-list (meta-information)

* Program: The artifact includes a profile format converter, which has been implemented using the Python programming language.
* Binary: The artifact comprises a Visual Studio Code extension and an executable binary file.
* Hardware: The physical machine must have at least 8GB of memory, and the CPU architecture should be either Intel x86-64 or ARM64.
* Data set: The artifact includes three overhead evaluation profiles.
* How much disk space required (approximately)?: 10 GB
* How much time is needed to prepare workflow (approximately)?: 10 minutes
* How much time is needed to complete experiments (approximately)?: 1 hour
* Publicly available?: The profile format converter in the artifact is open-source.

§.§ Installation

The installation involves three key steps: first, installing the background Docker container; second, connecting Visual Studio Code to the Docker container; and third, installing the Visual Studio Code extension.

§.§.§ Docker Container Installation

* To install the Docker container, open a terminal and enter the commands below.

[language=bash]
# On Intel machine
docker pull qzhao24/easyview-cgo23
docker run --hostname=easyview-cgo23 -d -p 2222:22 qzhao24/easyview-cgo23
# On Arm machine
docker pull qzhao24/easyview-cgo23-arm64
docker run --hostname=easyview-cgo23 -d -p 2222:22 qzhao24/easyview-cgo23-arm64

§.§.§ Connecting to Docker Container in Visual Studio Code

* Install the Remote Development extension <cit.>.
* Use the extension to connect to the Docker container. The SSH command is as follows:

[language=bash]
# The password is '1234'
ssh -p 2222 qzhao24@localhost

§.§.§ Visual Studio Code Extension Installation

* After establishing a connection to the Docker container, proceed to install the Visual Studio Code extension using the .vsix file located within the Docker container:

[language=bash]
/home/qzhao24/easyview-artifact/Extension/easyviewcgo23-0.2.0.vsix

* Access the extension window, locate the item "EasyViewCGO23", and click to trust it in its detail window.

§.§ Experiment workflow

This experiment is structured into three parts:

§.§.§ Profile Format Converter Evaluation

* To conduct the experiment provided by this artifact, follow these steps: First, open a terminal within Visual Studio Code, ensuring it is connected to the Docker container. Then, execute the commands provided below in the terminal.

[language=bash]
cd /home/qzhao24/easyview-artifact/Source/drcctprof-databuilder
./hpctoolkit-converter.py /home/qzhao24/easyview-artifact/Source/hpctoolkit-lulesh-par-original-database lulesh.hpctoolkit.drcctprof

§.§.§ Response Time Assessment

* To conduct the experiment provided by this artifact, follow these steps: First, open a terminal within Visual Studio Code, ensuring it is connected to the Docker container.
Then, execute the commands provided below in the terminal.

[language=bash]
cd /home/qzhao24/easyview-artifact/Overhead
# On Intel machine
./easyview_overhead_test 1M.profile
./easyview_overhead_test 100M.profile
./easyview_overhead_test 1GB.profile
# On Arm machine
./easyview_overhead_test_arm64 1M.profile
./easyview_overhead_test_arm64 100M.profile
./easyview_overhead_test_arm64 1GB.profile

§.§.§ Visualization Evaluation

* To conduct the experiment associated with this artifact, employ Visual Studio Code to access the specified files:

[language=bash]
/home/qzhao24/easyview-artifact/Profiles/fig4_memory_flow.ezview
/home/qzhao24/easyview-artifact/Profiles/fig6_lulesh_bottomup.ezview
/home/qzhao24/easyview-artifact/Profiles/fig7_lulesh_reuse_distance.ezview

§.§ Evaluation and expected result

Regarding the three experiments described:

§.§.§ Profile Format Converter Evaluation

* The anticipated outcome of this evaluation is the successful execution of the command, resulting in the generation of a .drcctprof file. Subsequently, this profile should be accessible via Visual Studio Code, allowing for the display of a flame graph.

§.§.§ Response Time Assessment

* The expected outcomes involve the successful execution of the last three commands, resulting in the generation of execution time outputs.

§.§.§ Visualization Evaluation

* The expected results for this evaluation involve the three profiles showing a figure that closely resembles the one presented in the paper.
http://arxiv.org/abs/2312.16598v1
{ "authors": [ "Qidong Zhao", "Milind Chabbi", "Xu Liu" ], "categories": [ "cs.SE", "cs.PF" ], "primary_category": "cs.SE", "published": "20231227144928", "title": "EasyView: Bringing Performance Profiles into Integrated Development Environments" }
Full-Stack End-To-End Sub-THz Simulations at 140 GHz using NYUSIM Channel Model in ns-3
This work is supported by the NYU WIRELESS industrial affiliates program.
Hitesh Poddar^†, Akhileswar Chowdary^†, Theodore S. Rappaport^†, Marwa Chafii^*†
^†NYU WIRELESS, NYU Tandon School of Engineering, Brooklyn, NY, USA, {hiteshp, akhileswar.chowdary, tsr}@nyu.edu
^*Engineering Division, New York University Abu Dhabi, UAE, [email protected]
=======================================================================================================================================================================================================

The next generation of wireless communication is expected to harness the potential of the sub-THz bands to achieve exceptional performance and ubiquitous connectivity. However, network simulators such as ns-3 currently lack support for channel models above 100 GHz. This limits the ability of researchers to study, design, and evaluate systems operating above 100 GHz. Here, we use the drop-based NYUSIM channel model to simulate channels above 100 GHz in all 3GPP scenarios, including urban microcell (UMi), urban macrocell (UMa), rural macrocell (RMa), indoor hotspot (InH), and indoor factory (InF). We evaluate the full-stack downlink end-to-end performance (throughput, latency, and packet drop) experienced by a single user equipment (UE) connected to a Next Generation Node B (gNB) operating in the sub-THz bands for three gNB–UE antenna configurations: 8x8–4x4, 16x16–4x4, and 64x64–8x8, by using the NYUSIM channel model at 140 GHz in the ns-3 mmWave module. Our simulations demonstrate that sub-THz bands can enable high-fidelity applications that require data rates exceeding 1 Gbps and latency below 15 milliseconds (ms) using the current mmWave protocol stack and large antenna arrays. In addition, we show the variation in throughput versus the number of realizations and find the optimal number of realizations required to obtain statistically significant results. We strongly encourage researchers worldwide to adopt a similar approach, as it enables readers to assess the accuracy and reliability of the reported results and enhances the findings' overall interpretability.

6G, latency, NYUSIM, packet drop, sub-THz, throughput, ns-3, system level simulation.

§ INTRODUCTION

6G communication holds tremendous potential to enable a wide array of applications, including autonomous navigation, smart cities <cit.>, augmented and virtual reality (AR/VR) <cit.>, haptics, integrated sensing and communication <cit.>, and Industry 4.0 <cit.>. To fully realize the capabilities of these cutting-edge applications, exceptional data rates, imperceptible latency, and ubiquitous connectivity are essential <cit.>. However, the current mmWave bands, operating in the frequency range of 24–72 GHz, cannot meet the necessary performance requirements for these demanding applications due to limited bandwidth. Typically, the maximum continuous bandwidth allocated in mmWave bands is limited to 400 MHz <cit.>. To overcome the bandwidth limitation, it is imperative to leverage the sub-THz bands, which span frequencies from 100–300 GHz and provide larger continuous bandwidths on the order of tens of GHz <cit.>. The Federal Communications Commission's (FCC) Office of Engineering and Technology, as outlined in ET Docket No.
18–21 issued in 2019 <cit.>, has taken proactive measures to foster the advancement of novel wireless technologies in the sub-THz bands. Furthermore, standardization bodies, industry, and academia have embarked upon an exploration of the D-band, centered around 140 GHz, as a promising candidate for 6G communication <cit.>. However, the official 6G standard, encompassing the defining elements of the physical layer, MAC layer, and higher-layer procedures, is scheduled for release in 2026 <cit.>. As a result, this work adopts a pragmatic approach by leveraging the existing ns-3 mmWave module <cit.>, which implements the 5G NR protocol stack, to evaluate the end-to-end performance of the 140 GHz (sub-THz) channel for all 3GPP-listed scenarios, namely UMi, UMa, RMa, InH, and InF <cit.>. Additionally, by systematically scaling up the antenna elements at both the gNB and UE, we aim to quantitatively determine the extent of performance enhancements achievable for the end-to-end performance metrics (throughput, latency, and packet drops). We conduct an extensive series of simulations using the NYUSIM channel model <cit.> at 140 GHz for all the 3GPP-specified scenarios, implemented in the widely used ns-3 mmWave module <cit.>. NYUSIM is an actively maintained open-source mmWave and sub-THz channel simulator implemented in ns-3 <cit.> and MATLAB <cit.>. NYUSIM channel models are developed based on an exhaustive collection of field measurement data for the frequency range of 28 to 140 GHz[Although the measurements were at 142 GHz, we use 140 GHz interchangeably for simplicity. However, the simulations in this paper are at 140 GHz.], conducted between 2011–2022. NYUSIM can generate channels for the frequency range of 0.5–150 GHz across diverse 3GPP-defined scenarios <cit.>. This paper is structured as follows. In Section <ref>, we present the simulation setup. Section <ref> presents results and insights. Furthermore, Subsection <ref> illustrates the trade-off between latency, throughput, and packet drops. Additionally, Subsection <ref> proposes an effective methodology for presenting statistical results derived from extensive wireless network simulations. Finally, in Section <ref>, we draw our conclusions.

§ SIMULATION SETUP

To thoroughly explore the full-stack downlink end-to-end performance for a single UE connected to a gNB operating in the sub-THz band, we employ the NYUSIM channel model <cit.> in the widely utilized ns-3 mmWave module <cit.>. Our examination encompasses an evaluation of the end-to-end throughput, latency, and packet drop experienced by a wireless modem operating at 140 GHz with 1 GHz bandwidth (the maximum bandwidth supported by NYUSIM <cit.>) across the UMi, UMa, RMa, InH, and InF scenarios, accounting for both line-of-sight (LOS) and non-line-of-sight (NLOS) channel conditions. The simulations consider three distinct gNB–UE antenna configurations, namely Ant1: 8×8 (gNB)–4×4 (UE), Ant2: 16×16 (gNB)–4×4 (UE), and Ant3: 64×64 (gNB)–8×8 (UE). The simulations focus on a single gNB and UE in fixed LOS/NLOS channel conditions, in a realistic channel generated by the NYUSIM channel model. A single UE and gNB are chosen because they facilitate a fundamental understanding of the system's peak achievable performance, devoid of the intra- or inter-cell interference that may arise in multi-user scenarios.
The UE is fixed at a distance of 100 meters from the gNB, which is well within a typical small cell size of approximately 200 meters, and enables easy comparison of performance across different scenarios at a fixed distance. The gNB transmits at a power level of 30 dBm. Video is transmitted from a remote server using the user datagram protocol (UDP) at different source application rates ranging from 250 Mbps to 3000 Mbps <cit.>. The choice of UDP for transmission stems from its lower overhead and more accurate representation of link performance compared to the transmission control protocol (TCP). TCP entails mechanisms for reliable data delivery, flow control, and congestion control, which introduce additional latency and processing overhead. In contrast, UDP operates as a connectionless protocol devoid of such mechanisms, thereby reducing overhead and minimizing latency. Furthermore, TCP's congestion control mechanisms can significantly impact the observed throughput, making it challenging to measure the actual link performance accurately. We execute 2500 realizations (the reason for choosing this number is explained in Subsection <ref>) for each channel condition (LOS/NLOS), scenario, antenna configuration, and source application rate. Within each realization, which spans a duration of 9 seconds, analog beamforming with a single spatial stream is employed, the number of hybrid automatic repeat request (HARQ) processes is set to 8, and blockage effects are disabled (NYUSIM in ns-3 currently does not support blockage models) <cit.>. The size of the radio link control (RLC) buffer is set to 10 Megabytes. The remaining configuration parameters align with the default values specified within the mmWave module <cit.>.

§ RESULTS AND DISCUSSIONS

We present the experimental results in two distinct categories: Outdoor and Indoor. The Outdoor category encompasses the UMi and RMa scenarios, while the Indoor category includes the InH and InF scenarios. We omit a detailed presentation of results for the UMa scenario, as they closely align with those of UMi due to similar small-scale and large-scale channel parameters at 140 GHz <cit.>. In this paper, we use the term `SNR' to refer to the signal-to-noise ratio at the UE. Analyzing Figure <ref>, we observe that the SNR in the RMa scenario is consistently lower than in the UMi scenario, regardless of the channel condition and antenna configuration. The lower SNR in the RMa scenario is due to its higher path loss exponent compared to UMi under both channel conditions <cit.>. Additionally, increasing the number of antenna elements at the gNB and UE (Ant1, Ant2, Ant3) results in a rightward shift of the SNR curves for both the UMi and RMa scenarios, indicating that a higher number of antenna elements yields higher antenna gain, resulting in a stronger received signal and consequently an increase in the SNR of the signal received at the UE. Similar trends are also evident for the Indoor (InH and InF) scenarios in Figure <ref> when increasing the number of antenna elements at the gNB and UE. However, the InH and InF scenarios exhibit approximately the same path loss exponent in LOS conditions and thus similar received power and SNR, which is reflected in the SNR curves of Figure <ref>. In contrast, the InF scenario experiences a higher path loss exponent in NLOS channel conditions <cit.> compared to the InH scenario, causing a lower SNR for the InF scenario compared to the InH scenario.
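The rightward SNR shift with larger arrays can be sanity-checked with a back-of-the-envelope beamforming-gain calculation: under ideal analog beamforming, each array contributes roughly 10·log10(number of elements) dB of gain. The sketch below compares the three configurations; it is a first-order approximation that ignores element patterns and beam misalignment.

[language=python]
import math

def array_gain_db(rows, cols):
    """Nominal beamforming gain of a uniform planar array (ideal case)."""
    return 10.0 * math.log10(rows * cols)

configs = {"Ant1": ((8, 8), (4, 4)),
           "Ant2": ((16, 16), (4, 4)),
           "Ant3": ((64, 64), (8, 8))}
for name, (gnb, ue) in configs.items():
    total = array_gain_db(*gnb) + array_gain_db(*ue)
    print(f"{name}: gNB {array_gain_db(*gnb):.1f} dB + UE "
          f"{array_gain_db(*ue):.1f} dB = {total:.1f} dB")
# Ant1: 18.1 + 12.0 = 30.1 dB; Ant2: 24.1 + 12.0 = 36.1 dB; Ant3: 36.1 + 18.1 = 54.2 dB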
Figures <ref>, <ref>, <ref>, and <ref> illustrate the variation of average end-to-end throughput and average packet drops with respect to the application rate for all three gNB–UE antenna configurations in LOS and NLOS channel conditions for the Outdoor scenarios (UMi and RMa). It is worth noting from Figures <ref> and <ref> that the throughput increases with the application rate until reaching a certain threshold for a given antenna configuration. The threshold for each antenna configuration is determined by the maximum physical layer throughput that can be sustained by the link <cit.>. The maximum physical layer throughput can be calculated using the transport block (TB) size and slot duration, where the TB size is in bits. For instance, consider a sample run from the UMi LOS scenario (Figure <ref>) using antenna configuration "Ant3" having the maximum possible Modulation and Coding Scheme (MCS) of 28 and a TB size of 56200 bytes. For this case, the maximum possible physical layer throughput calculated from <cit.> will be ∼1800 Mbps. Thus, when the source application rate (values on the x-axis of Figure <ref>) exceeds the maximum physical layer throughput, a drastic decline in throughput is observed, which can be attributed to buffer overflows, as explained in <cit.>. Additionally, increasing source application rates beyond the maximum sustainable physical layer throughput leads to excessive packet drops, as seen in Figures <ref> and <ref>. Moreover, when conducting 2500 realizations using a particular antenna configuration, source application rate, scenario, and channel condition, it is important to note that not all realizations will have strong SNRs. This is because each realization generates an independent channel and thus a different SNR. In realizations where the SNR is low, the physical layer throughput is low, leading to an increase in packet loss caused by buffer overflows <cit.>. Note that in practical end-to-end communication scenarios, packet drops can stem from various sources, not just buffer overflows; however, in this particular simulation scenario, the root cause of packet drops is buffer overflow. By taking the average over all realizations, we calculate the average end-to-end throughput and packet drops. Interestingly, even before reaching the threshold value, we observe a rise in packet loss and a decline in throughput. The reason behind this is that some of the realizations cannot sustain such high source application rates, resulting in degradation. For instance, from Figure <ref>, in the Ant3 configuration we observe that RMa LOS outperforms UMi LOS. This is because some of the realizations for UMi LOS in Ant3 have extremely bad SNR and incur higher packet drops, so averaging over all realizations leads to an overall decrease in throughput. Moreover, if the application source rate exceeds the maximum sustained physical layer throughput, the reduction in throughput becomes consistent. For instance, as demonstrated in Figure <ref>, it becomes evident that the "Ant3" configuration cannot sustain a source application rate beyond 1800 Mbps. Consequently, any attempt to increase the source application rate beyond 1800 Mbps consistently results in a significant drop in throughput. These findings highlight a notable observation: in certain scenarios characterized by low SNR in specific realizations, the physical layer throughput exhibits a decline, even when the source application rates are set at relatively low levels.
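The transport-block arithmetic above can be checked directly. A minimal worked example follows; the 250 μs slot duration is an illustrative assumption chosen to match the quoted numbers and depends in practice on the configured numerology.

[language=python]
# Maximum physical layer throughput = TB size (bits) / slot duration (s).
tb_size_bytes = 56200          # TB size for MCS 28 in the sample run
slot_duration_s = 250e-6       # assumed slot duration (numerology-dependent)

tb_size_bits = tb_size_bytes * 8
max_phy_throughput_bps = tb_size_bits / slot_duration_s
print(f"{max_phy_throughput_bps / 1e6:.0f} Mbps")  # ~1798 Mbps, i.e., ~1800 Mbps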
This observation underscores the fact that in such scenarios throughput can decrease irrespective of the source application rate being set at modest levels. Similar conclusions can be drawn from Figures <ref>, <ref>, <ref>, and <ref>, which illustrate average end-to-end throughput and packet drops in the Indoor (InH and InF) scenarios under LOS and NLOS channel conditions, respectively. Figure <ref> depicts the latency for the Outdoor (UMi and RMa) and Indoor (InH and InF) scenarios under LOS and NLOS channel conditions. We can observe from Figure <ref> that latency is related to packet drops. As the packet drops increase, the latency also increases. This phenomenon can be attributed to the activation of HARQ triggered by packet drops. HARQ aims to reduce end-to-end packet drops and enhance end-to-end throughput at the expense of increased latency <cit.>.

§.§ Trade-off between latency, throughput, and packet drops

This subsection aims to provide an understanding of how a chosen KPI influences other performance metrics. In Tables <ref>, <ref>, <ref>, and <ref>, by specifying a desired latency target of less than 5 and 10 milliseconds (ms) for outdoor and indoor scenarios in LOS and NLOS channel conditions, we showcase the corresponding levels of throughput and packet drops that are observed. A report <cit.> published by Qualcomm highlighted that certain applications in 6G, which include extended reality (XR), autonomous vehicles, crowded event sharing, remote control, and immersive content, require data rates exceeding 1 Gbps and latency below 15 ms. Our analysis from Tables <ref> and <ref> demonstrates that, in outdoor LOS and NLOS scenarios, the sub-THz band can enable applications demanding over 1 Gbps and latency under 15 ms. This is achievable using the existing mmWave protocol stack but with a large antenna array (in this paper denoted as "Ant3"). Conversely, for the indoor LOS scenario, the same benchmarks can be met using "Ant2", as shown in Table <ref>. However, in the indoor NLOS scenario, the stipulated throughput and latency criteria are achieved using "Ant3", as seen in Table <ref>. Tables <ref> and <ref> do not provide exact numerical values for some of the performance metrics, as the corresponding Figures <ref>, <ref>, and <ref> do not fully reveal the granular values due to the wide range of application source rates (250 Mbps to 3000 Mbps) used in the current simulation. To capture exact values, it is recommended that readers simulate with narrower application source rate ranges, such as 10–250 Mbps. This analysis serves to highlight the trade-offs and dependencies that exist among various performance metrics. Note that even though we use latency as the KPI in this case, one could have chosen other metrics such as throughput or packet drops. For instance, for a desired throughput of 500 Mbps, one could estimate the observed latency and packet drops. By gaining insights into how selecting a KPI can provide information about other metrics, engineers can make well-informed decisions and establish realistic expectations regarding system performance in real-world scenarios.

§.§ Impact of the Number of Realizations on End-to-End Performance Metrics

Performing extensive Monte Carlo wireless simulations for system-level or network-level analyses, for any frequency, scenario, etc., can pose significant computational and time constraints. Typically, statistical significance in Monte Carlo simulations is achieved with 10^4–10^6 realizations <cit.>.
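A realizations-versus-confidence-interval curve of the kind discussed next is straightforward to produce from per-realization throughput samples; the sketch below computes a running mean and a normal-approximation 95% interval, with synthetic data standing in for simulator output.

[language=python]
import numpy as np

rng = np.random.default_rng(0)
throughput = rng.normal(1500.0, 300.0, size=10_000)  # synthetic per-realization Mbps

def running_ci(samples, z=1.96):
    """Running mean and 95% CI half-width after each additional realization."""
    n = np.arange(1, len(samples) + 1)
    mean = np.cumsum(samples) / n
    # running variance via cumulative second moments
    var = np.cumsum(samples**2) / n - mean**2
    half = z * np.sqrt(np.maximum(var, 0.0) / n)
    return mean, half

mean, half = running_ci(throughput)
for k in (100, 500, 2000, 2500, 10_000):
    print(f"n={k:6d}: mean={mean[k-1]:7.1f} Mbps, 95% CI half-width={half[k-1]:5.1f}")
# The half-width shrinks like 1/sqrt(n) and is already fairly stable near n = 2000.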
The end-to-end performance metrics are heavily dependent on the number of realizations needed to obtain statistically accurate results. By harnessing the capabilities of high-performance clusters (HPCs) and employing parallelization tools such as "sem" <cit.> in ns-3, it becomes possible to complete 10^4–10^6 realizations in much less time than on standard workstations. However, determining the exact number of realizations required to obtain statistically significant results remains challenging. As a result, researchers commonly employ a range of realizations, spanning from a few hundred for expedited results to a vast number on the order of 10^4 or more. Researchers must balance computational resources, time constraints, and the desired level of statistical confidence when designing their simulation experiments. Thus, in this work, we present the statistics of the number of realizations versus confidence intervals, and strongly encourage researchers worldwide, utilizing ns-3 or any other simulation tools involving statistical models, to provide similar metrics in their work. Such metrics enable readers to assess the reported results' accuracy and reliability and enhance the findings' overall interpretability. Figure <ref> illustrates the relationship between the number of realizations and the corresponding confidence intervals. Observing the graph, it is evident that with a lower number of realizations, the confidence interval is wider, indicating a higher degree of uncertainty. Moreover, the mean values exhibit significant variation within the confidence intervals, indicating instability in the results. Additionally, for a given number of realizations, the confidence interval shifts noticeably, which is undesirable for reliable analysis. As the number of realizations increases, the confidence interval becomes narrower, indicating increased precision and reduced uncertainty. The variation in mean values within the confidence intervals also decreases. Notably, in this particular experiment with 10,000 realizations, it can be observed that after approximately 2,000 realizations the confidence interval stabilizes. The narrower confidence interval and reduced mean variation provide greater confidence in the obtained results. Based on these observations and time constraints, we have chosen to perform 2,500 realizations for each scenario, channel condition, source application rate, and antenna configuration, ensuring statistically significant results with a reliable level of precision.

§ CONCLUSION

By conducting extensive single-user simulations using the ns-3 mmWave module, employing the NYUSIM channel model at 140 GHz, we obtain insights into end-to-end downlink SNR, average throughput, packet drops, and latency across various gNB and UE antenna configurations in different 3GPP-listed LOS and NLOS scenarios. Our simulations indicate that sub-THz bands can support applications demanding data rates above 1 Gbps and latency below 15 ms <cit.>, leveraging the current mmWave protocol stack but with large antenna arrays. Moreover, our findings reveal that increasing antenna elements at the gNB and UE enhances the 140 GHz system's performance by improving SNR. This SNR impacts the MCS, influencing the transport block (TB) size and setting the receiver's maximum physical layer throughput for a given antenna configuration. Overshooting this throughput can lead to RLC buffer overflow, causing significant packet loss, latency spikes, and frequent retransmissions.
Our findings highlight that the 8x8–4x4 antenna setup (Ant1) at 140 GHz is less efficient than its 28 GHz counterpart <cit.>, emphasizing the necessity of adjusting antenna elements to counteract the heightened path loss after the first meter of propagation at 140 GHz. While our current study emphasizes single-user scenarios to obtain maximal performance insights, future research should delve into multi-user environments to capture real-world performance nuances, taking into account factors such as co-/adjacent-channel interference and blockage effects.
http://arxiv.org/abs/2312.15987v3
{ "authors": [ "Hitesh Poddar", "Akhileswar Chowdary", "Theodore S. Rappaport", "Marwa Chafii" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20231226103402", "title": "Full-Stack End-to-End Sub-THz Simulations at 140 GHz using NYUSIM Channel Model in ns-3" }
Properties of Test Statistics for Nonparametric Cointegrating Regression Functions Based on Subsamples
Sepideh Mosaferi (corresponding author), University of Massachusetts Amherst
Mark S. Kaiser, Iowa State University
Daniel J. Nordman, Iowa State University
============================================================================================================================================================================================================

Nonparametric cointegrating regression models have been extensively used in financial markets, stock prices, heavy traffic, climate data sets, and energy markets. Models with parametric regression functions can be more appealing in practice compared to nonparametric forms, but do result in potential functional misspecification. Thus, there exists a vast literature on developing model specification tests for parametric forms of regression functions. In this paper, we develop two test statistics which are applicable for endogenous regressors driven by long memory and semi-long memory input shocks in the regression model. The limit distributions of the test statistics under these two scenarios are complicated and cannot be effectively used in practice. To overcome this difficulty, we use the subsampling method and compute the test statistics on smaller blocks of the data to construct their empirical distributions. Throughout, Monte Carlo simulation studies are used to illustrate the properties of the test statistics. We also provide an empirical example relating gross domestic product to the total output of carbon dioxide in two European countries.

Keywords: between-block mixing coefficient; endogeneity; long memory; size of test.

§ INTRODUCTION

In this article we consider the following nonlinear regression model

y_k = f(x_k) + u_k, k=1,...,N,

where f(.) is an unknown real function, and x_k and u_k are regressors and regression errors, respectively. In econometrics, when x_k is a nonstationary time series, (<ref>) is called a nonlinear cointegrating regression. In the literature, x_k has often been assumed to be a short memory process uncorrelated with u_k, entailing so-called exogeneity. Extending this to a case where x_k is driven by long memory (LM) or semi-long memory (SLM) innovations depending on u_k, so-called endogeneity, has received less attention in the literature, though it may be anticipated in many applications. A number of tests have been proposed to check the adequacy of the form of the regression in (<ref>). Under the assumptions of E(u_k|x_k)=0 and independent observations {(x_k,y_k)}, <cit.> (H-M) derived a test statistic which involves the L_2-distance between the nonparametric and parametric fits. They approximated the asymptotic behavior of the test statistic by a Gaussian distribution with a mean that converges to infinity.
For the same situation, <cit.> proposed a rate-optimal test statistic, which is uniformly consistent, and whose asymptotic distribution has a mean of zero and variance of one. <cit.> constructed a so-called self-normalized U (SNU) test statistic based on martingale differences; although this statistic does not explicitly incorporate a nonparametric estimate of f(x), it does apply a kernel weight function to residuals from the fitted hypothesized model, namely y_k - g(x_k, θ̂). These authors proposed a self-normalized version of the test statistic to remove the effect of nuisance parameters and demonstrated that its limit distribution follows a standard normal variate. The SNU test offers an attractive procedure in applications, but it is hard to extend its theory to a case involving endogeneity in the regressor x_k. Subsequently, <cit.> extended the setting of <cit.> to allow the equation error u_k to be serially dependent and the regressors to be endogenous and driven by LM innovations. These authors proposed a modified H-M (MHM) test statistic in the form of <cit.> (H-M) and showed that the limit law of the statistic involves the local time of fractional Brownian motion, and thus depends on a fractional differencing parameter d. Consequently, this statistic does not easily lend itself to use in applications. In the so-called semi-long memory (SLM) case, <cit.> have used the idea of tempering the LM innovations for the regressors, and considered a test statistic with the form of <cit.>. With this modification, the limit distribution for the test involves the local time of standard Brownian motion and is free of the unknown fractional differencing parameter. However, the limit distribution of the test statistic still does not have a simple form, which hampers its use in practice. <cit.> proposed a so-called Portmanteau (P) test statistic that is appropriate under LM and endogeneity when the equation errors u_k are assumed to follow an autoregressive process. When the order of the process is known, the statistic has a simple chi-squared limiting distribution. To the best of our knowledge, there has not been a successful test to effectively examine the form of the regression function in (<ref>) when the regressors are endogenous with LM or SLM structure and the error terms u_k are general. A central purpose of this article is to modify the subsampling method of <cit.> so that it can be used to determine reference distributions for the SNU and MHM test statistics, making their use more practical for the class of endogenous regressors with LM or SLM input shocks. Both of the aforementioned test statistics do not assume any particular structure for the error process u_k, and their limiting distributions have complex forms that are impractical for use in applications. The crux of the subsampling method is to recompute the test statistics on smaller blocks or "subsamples" of the observed data to construct empirical distributions of the test estimates across data subsamples. A complication, however, is that standard subsampling uses a common form of sequential data blocks, which are motivated by stationary time series (cf. <cit.>), and such subsamples require modification to handle the nonstationary processes considered here. Under appropriate conditions, these empirical distributions can then approximate the sampling distributions of the test statistics to make them applicable for actual problems.
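The subsampling idea can be stated in a few lines of code: recompute a statistic on overlapping blocks of the series and use the empirical distribution of the block values as a reference. The sketch below is a generic illustration with a placeholder statistic; the block-selection rule for nonstationary regressors developed later in the paper is not reproduced here.

[language=python]
import numpy as np

def subsample_distribution(x, y, statistic, block_len):
    """Recompute `statistic` on every overlapping block of length block_len
    from the paired series (x, y); returns the block-level values, whose
    empirical distribution serves as the reference distribution."""
    n = len(x)
    values = [statistic(x[i:i + block_len], y[i:i + block_len])
              for i in range(n - block_len + 1)]
    return np.asarray(values)

# Placeholder statistic for illustration only (not the SNU or MHM statistic).
def toy_stat(xb, yb):
    return np.mean(yb - xb)

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500))          # nonstationary regressor
y = x + rng.normal(size=500)                 # null model f(x) = x
vals = subsample_distribution(x, y, toy_stat, block_len=50)
crit = np.quantile(vals, 0.95)               # empirical 95% reference point
print(round(crit, 3))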
The remainder of the article is organized as follows. Our models and main assumptions are presented in Section <ref>, and Section <ref> explains the SNU, MHM, and P test statistics. In Section <ref>, we describe and establish the subsampling methodology for the SNU and MHM test statistics. Through the use of Monte Carlo simulation studies, we examine the behaviors of the test statistics in terms of size and power in Section <ref>. In Section <ref>, we apply the procedures in testing hypothesized forms of cointegrated regressions to relate carbon dioxide emissions (CO_2) to gross domestic product (GDP) in two developed countries. Concluding remarks are contained in Section <ref>. Technical proofs and additional simulation results are given in the supplementary material.

Throughout the paper, we use →_d and →_p to denote convergence in distribution and in probability, respectively. Also, i.i.d. means independent and identically distributed, ⌊·⌋ is the floor function, and f(x) ∼ g(x) denotes asymptotic equivalence of two generic non-zero functions, i.e., the ratio f(x)/g(x) → 1 as x → ∞. We denote the fractional differencing parameter for LM or SLM processes as d, and the tempering parameter for SLM processes as λ.

§ COVARIATE PROCESS MODELS AND ASSUMPTIONS

In this section we present two models to be used for the covariate processes x_k in (<ref>), along with several technical assumptions that will be needed to verify the efficacy of subsampling for approximating the sampling distributions of statistics that incorporate these processes. In model (<ref>), we let the regressors x_k = ∑_{j=1}^k X(j) be a partial sum of input shocks X(j), with possible LM input shocks denoted as X(j) ≡ X_d(j), or possible SLM input shocks denoted as X(j) ≡ X_{d,λ}(j). To define the shocks, let ϕ(d,k), k ≥ 0, denote coefficients satisfying ϕ(d,k) ∼ k^{d-1} ρ(k) with ϕ(d,0) ≠ 0, where ρ(k) is a function slowly varying at ∞. Here, d ∈ (0,1/2) is called the fractional differencing parameter. Then define

* LM: X_d(j) = ∑_{k=0}^∞ ϕ(d,k) ξ(j-k), and
* SLM: X_{d,λ}(j) = ∑_{k=0}^∞ e^{-λk} ϕ(d,k) ξ(j-k),

where {ξ(j)} is an i.i.d. noise sequence with 𝔼ξ(0)=0 and 𝔼ξ^2(0)=1. In the above, λ > 0 represents a tempering parameter in the SLM case. To incorporate endogeneity, we take ξ(k) from the above expressions and let η_k = (ξ(k), ϵ(k))' be a sequence of random vectors with 𝔼(η_0)=0 and 𝔼(η_0 η_0') = Σ such that

Σ ≡ [ 1  𝔼(ξ(0)ϵ(0)); 𝔼(ϵ(0)ξ(0))  𝔼(ϵ(0)^2) ],

where 𝔼(ξ(0)ϵ(0)) ≠ 0. We assume the characteristic function φ(t) of ξ(0) satisfies the integrability condition ∫_ℝ (1+|t|)|φ(t)| dt < ∞, which ensures smoothness in the corresponding density. Now, we set some assumptions as follows.

The tempering parameter λ ≡ λ_N > 0 in SLM depends on N and satisfies λ → 0 and Nλ → ∞ as N → ∞. This is the strongly tempered case (see ).

For equation errors in the model, let u_k = ∑_{j=0}^∞ ψ_j η_{k-j} for ψ_j = (ψ_{j1}, ψ_{j2}), and assume (a) ∑_{j=0}^∞ ψ_j ≠ 0, where ∑_{j=0}^∞ j^{1/4}(|ψ_{j1}|+|ψ_{j2}|) < ∞ holds; and (b) 𝔼‖η_0‖^α < ∞ for some α > 2.

Assumption <ref> implies that 𝔼(u_0^2) = ∑_{j=0}^∞ ψ_j Σ ψ_j' and cov(u_k, x_k) ≠ 0.

If x_k = ∑_{j=1}^k X_d(j) from a LM process, {x_k}_{k ≥ 1} is a stochastic process such that the following weak convergence applies on D[0,1]: as N → ∞,

x_{⌊Nt⌋}/d_N →_d B_{d+1/2}(t),

where the scaling d_N := [𝔼(x^2_N)]^{1/2} asymptotically has the form d_N ∼ ρ(N) N^{d+1/2} √(c_d) as N → ∞, with c_d = (1/(d(1+2d))) ∫_0^∞ {x(x+1)}^{-(1-d)} dx. Above, B_{d+1/2}(t) denotes fractional Brownian motion with parameter d+1/2; see <cit.>.
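To make the two shock constructions concrete, the following minimal sketch (in Python with NumPy/SciPy; the function names are our own illustrative choices, and the truncation of the MA(∞) sums at a finite lag K is a practical approximation rather than part of the formal model) simulates LM and SLM regressors. As one concrete family of coefficients obeying ϕ(d,k) ∼ k^{d-1}ρ(k), it uses the ARFIMA(0,d,0) weights ϕ(d,k) = Γ(k+d)/(Γ(d)Γ(k+1)).

import numpy as np
from scipy.special import gammaln

def ma_coeffs(d, K):
    # ARFIMA(0,d,0) coefficients phi(d,k) = Gamma(k+d)/(Gamma(d)*Gamma(k+1)),
    # computed on the log scale for stability; phi(d,k) ~ k^{d-1}/Gamma(d).
    k = np.arange(K)
    return np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))

def simulate_regressors(N, d, lam=0.0, K=2000, rng=None):
    # Partial sums x_k of LM shocks (lam = 0) or tempered SLM shocks (lam > 0),
    # with the MA(infinity) sum truncated at lag K.
    rng = np.random.default_rng(rng)
    phi = ma_coeffs(d, K) * np.exp(-lam * np.arange(K))  # e^{-lam*k} tempering
    xi = rng.standard_normal(N + K)                      # i.i.d. innovations
    X = np.convolve(xi, phi, mode="full")[K:K + N]       # X(j), j = 1,...,N
    return np.cumsum(X)                                  # x_k = X(1)+...+X(k)

# Example: LM with d = 0.3 versus strongly tempered SLM with lam = N^{-1/6}
N = 500
x_lm = simulate_regressors(N, d=0.3, rng=1)
x_slm = simulate_regressors(N, d=0.3, lam=N ** (-1.0 / 6), rng=1)

Correlated pairs (ξ(k), ϵ(k)), as in the matrix Σ above, can be drawn jointly to induce endogeneity; a sketch of that step appears with the simulation studies below.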
If x_k = ∑_{j=1}^k X_{d,λ}(j) from a SLM process, {x_k}_{k ≥ 1} is a stochastic process such that the following weak convergence applies on D[0,1]: as N → ∞,

x_{⌊Nt⌋}/d_N →_d B(t),

where the scaling d_N := [𝔼(x^2_N)]^{1/2} asymptotically has the form d_N ∼ √(N)/λ^d as N → ∞. Above, B(t) denotes standard Brownian motion; see <cit.>.

Assumptions <ref>-<ref> are basic, while Assumptions <ref>-<ref> are mild conditions for prescribing limit behaviors of standardized partial sums of regressors. Additionally, note that in Assumption <ref> under SLM, we have standard Brownian motion B(t) in (<ref>), which does not depend on the fractional differencing parameter d and is applicable for any d > 0. This is unlike Assumption <ref>, in which fractional Brownian motion B_{d+1/2}(t) appears in (<ref>). In addition, by tempering the coefficients ϕ(d,k) under the SLM setting, we can extend the range of the fractional differencing parameter d from (0, 1/2) to (0, ∞).

A tempered linear process that we consider is the autoregressive tempered fractionally integrated moving average (ARTFIMA) process. ARTFIMA models extend earlier work on tempered fractionally integrated (TFI) models given by <cit.> and can capture aspects of low-frequency activity of time series better than ARFIMA models. In practice under the SLM setting, when the tempering parameter λ is fixed and does not depend on the sample size N, one can use Whittle estimation or maximum likelihood estimation to estimate both of the unknown parameters d and λ. These estimators are strongly consistent under quite general conditions (see ). On the other hand, when the parameter λ depends on the sample size N, estimating λ is a complex problem and can be pursued by developing confidence intervals (see Remark 3.7 in <cit.> for a related argument as well as Section 5 in <cit.>).

§ BACKGROUND ON TEST STATISTICS

We consider in detail several versions of three test statistics to be applied to the hypothesis that f(x) = g(x,θ) in (<ref>) for some known parametric function g.

§.§ Modified Härdle and Mammen (MHM) test statistic

The MHM test is intended for situations that include LM shocks to the covariate process and endogeneity, and has a kernel-smoothed form given by

T_N := ∫_ℝ {∑_{k=1}^N K[(x_k - x)/h] û_k}^2 π(x) dx,

where û_k = y_k - g(x_k, θ̂_N) denote residuals based on a parametric estimator θ̂_N of θ obtained through some optimization procedure, such as minimizing Q_N(θ) = ∑_{k=1}^N (y_k - g(x_k,θ))^2. Note that the estimated regression is from a parametric procedure, but the test statistic (<ref>) depends on choosing a kernel smoothing function K not involved in that estimation. In (<ref>), π(x) denotes a positive weight function, which is integrable and such that K(x)π(x) has a compact support. We will use a Gaussian kernel for K in what follows. Under LM input shocks, <cit.> established a limit distribution for a normalized version of T_N given by τ_N^{-1} T_N, which uses the scaling τ_N := Nh/d_N with d_N as defined in Assumption <ref>. The result involves certain bandwidth conditions that we also suppose.
Assume that, for the bandwidth h, it holds that τ_N → ∞ and τ_N h^{2γ} → 0 as N → ∞, where γ ∈ (0,1]. Also assume that, for a small enough δ_0, τ_N N^{-δ_0} → ∞.

Then, under the hypothesis that f(x) = g(x,θ), the normalized test statistic converges as

τ_N^{-1} T_N →_d d^2_{(0)} L_{B_{d+1/2}}(1,0), as N → ∞,

where L_{B_{d+1/2}}(1,0) denotes a local time random variable with respect to fractional Brownian motion B_{d+1/2}, and d^2_{(0)} = 𝔼(u^2_0) ∫_ℝ K^2(s) ds ∫_ℝ π(x) dx is a process constant; see Theorem 3.1 of <cit.>. The limit distribution in (<ref>) depends on the fractional differencing parameter d under LM and is not simple to use directly.

If we consider SLM shocks rather than LM ones, <cit.> have shown that the same test statistic form T_N in (<ref>), with a modified normalization factor τ_N, has a somewhat simpler limit distribution. Under SLM, the scaling becomes τ_N := Nh/d_N = √(N) λ^d h with d_N as in Assumption <ref>. Under the same bandwidth conditions from the LM case (e.g., τ_N h^{2γ} → 0, τ_N N^{-δ_0} → ∞) and the hypothesis that f(x) = g(x,θ), the normalized test statistic converges as

τ^{-1}_N T_N →_d d^2_{(0)} L_B(1,0), as Nλ → ∞,

where d^2_{(0)} is as in (<ref>) and L_B(1,0) denotes a local time random variable with respect to standard Brownian motion B; see Theorem 5.1 of <cit.>. In contrast to (<ref>), the limit distribution with SLM processes does not depend on the fractional differencing parameter d and thus is simpler. This limit does, however, still involve d^2_{(0)} as well as a local time process, and so remains not easy to use in practice. Thus, we use the subsampling technique to approximate the distribution of the MHM test statistic under both the LM and SLM versions.

§.§ Self-Normalized U (SNU) test statistic

<cit.> proposed a test statistic for assessing the hypothesis that f(x) = g(x,θ) in (<ref>) for some known specified g with unknown parameter θ. Although the limit theory for the test statistic was originally developed for nonstationary covariate processes such as near unit root autoregressions driven by short memory errors with exogeneity, the test statistic itself may have applicability to more complex situations. Similar to the MHM test statistic from (<ref>), the SNU test statistic also uses residuals û_k ≡ y_k - g(x_k, θ̂_N) along with a kernel smoothing function K, which are combined to define the statistic

S_N ≡ ∑_{k,j=1, k ≠ j}^N û_k û_j K[(x_k - x_j)/h].

As for the MHM test statistic, we will use a Gaussian kernel K in all that follows. Dependence of S_N on nuisance parameters can be removed by a self-normalization, resulting in

Z_N ≡ S_N/√(2 V_N^2), V_N^2 ≡ ∑_{k,j=1, k ≠ j}^N û^2_k û^2_j K^2[(x_k - x_j)/h].

Under the conditions of a short memory covariate process with unit root or near unit root behavior and exogeneity, <cit.> showed that Z_N →_d N(0,1) as N → ∞. For LM and SLM processes with endogeneity, limit distributions can follow under appropriate assumptions (e.g., Assumptions <ref>-<ref> under LM and Assumptions <ref>-<ref>, and <ref> under SLM), which we denote as

Z_N →_d Z_{0,LM}, or Z_N →_d Z_{0,SLM}, as N → ∞.

The distributions of these limit variables can be complex because of the involvement of the kernel weights K((x_k - x_j)/h) in the test statistic, and would be difficult to determine in the case of an endogenous regressor. We assume there exists a continuous limit distribution for Z_N as in (<ref>), which may differ across LM and SLM processes and may not be N(0,1).
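As an informal companion to these definitions (illustrative names only, not the authors' released code), the sketch below evaluates S_N, V_N^2, and Z_N from residuals with a Gaussian kernel; the quadratic forms run over all pairs k ≠ j.

import numpy as np

def snu_statistic(x, resid, h):
    # Self-normalized U statistic Z_N = S_N / sqrt(2 V_N^2), Gaussian kernel.
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(K, 0.0)                     # exclude k == j terms
    S = resid @ K @ resid                        # sum_{k != j} u_k u_j K(.)
    V2 = (resid ** 2) @ (K ** 2) @ (resid ** 2)  # sum_{k != j} u_k^2 u_j^2 K^2(.)
    return S / np.sqrt(2.0 * V2)

For instance, with residuals from a least squares fit of the null model, snu_statistic(x, resid, h=N ** (-1.0 / 3)) uses the bandwidth employed in the simulations of Section <ref>.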
Thus, we use the subsampling technique to approximate the finite-sample reference distribution of Z_N and find its related critical values.

§.§ Portmanteau (P) test statistic

<cit.> proposed a test suitable for the framework of endogeneity with LM and SLM regressors that is based on an idea originally suggested by <cit.> and <cit.>. This statistic was developed under an assumption that u_k in (<ref>) follows an AR(p) structure,

u_k = ψ_1 u_{k-1} + ψ_2 u_{k-2} + ... + ψ_p u_{k-p} + ϵ(k).

Let û_k = y_k - g(x_k, θ̂_N) and take ψ̂_j to be the least squares estimate of ψ_j, for j = 1,...,p, based on the assumed model taking û_k as the observed version of u_k, namely û_k = ψ_1 û_{k-1} + ψ_2 û_{k-2} + ... + ψ_p û_{k-p} + ϵ(k). Set ϵ̂(k) = û_k - ψ̂_1 û_{k-1} - ψ̂_2 û_{k-2} - ... - ψ̂_p û_{k-p}. The P test statistic is then

Ũ_N(ℒ) := N(N+2) ∑_{k=1}^{ℒ} â_k^2/(N-k),

for some integer ℒ ≥ 1, where

â_k = ∑_{t=k+1}^N ϵ̂(t) ϵ̂(t-k) / ∑_{t=1}^N ϵ̂^2(t).

The limiting distribution of Ũ_N(ℒ) can be approximated by χ^2(ℒ-p) for large ℒ. Although the settings under which (<ref>) is appropriate are restricted to autoregressive structure in the regression error terms u_k of (<ref>), it has a limit distribution that is easy to use in practice. In applications, the order of the assumed autoregressive process must be determined before the test statistic is constructed.
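The P test is straightforward to code; a hedged sketch follows (illustrative names; the AR order p must be fixed in advance, as emphasized above, and we require ℒ > p; the autocorrelations below use the post-filtering sample size where the display above uses N, an immaterial difference for a sketch).

import numpy as np
from scipy.stats import chi2

def portmanteau_test(uhat, p, L):
    # Fit an AR(p) to the residuals uhat by least squares
    Y = uhat[p:]
    X = np.column_stack([uhat[p - j: len(uhat) - j] for j in range(1, p + 1)])
    psi = np.linalg.lstsq(X, Y, rcond=None)[0]
    eps = Y - X @ psi                       # AR-filtered residuals eps_hat
    n = len(eps)
    denom = np.sum(eps ** 2)
    # Sample autocorrelations a_k of eps_hat, k = 1,...,L
    a = np.array([np.sum(eps[k:] * eps[:-k]) for k in range(1, L + 1)]) / denom
    U = n * (n + 2) * np.sum(a ** 2 / (n - np.arange(1, L + 1)))
    return U, chi2.sf(U, df=L - p)          # statistic and chi^2(L-p) p-value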
§ SUBSAMPLING METHODOLOGY

Subsampling is an approach for approximating unknown sampling distributions of statistics by dividing data into blocks that preserve the dependence structure of the entire process within each block. <cit.> and <cit.> have shown that subsampling works asymptotically under certain verifiable conditions, whereas the block bootstrap fails for some LM processes. Establishing the validity of subsampling for time series under strong dependence is not easy because of slowly decaying correlations, but there have been successful efforts in a number of different situations. <cit.> used the subsampling technique to make inference about the mean of heavy-tailed LM time series. <cit.> used a fast subsampling method for estimating the distribution of signal-to-noise ratio statistics in nonparametric time series regression models that encompass both short and long range dependence. Under a LM linear process which is not necessarily Gaussian, <cit.> verified the consistency of subsampling for the sample mean with a deterministic normalization. <cit.> used the canonical correlation between two blocks, instead of the usual α-mixing coefficient, to establish the validity of subsampling for LM Gaussian subordinated models. <cit.> applied subsampling to a self-normalized change-point test statistic to make inference about structural breaks in long range dependent time series when the limit distribution depends on unknown parameters.

We note that all of the previously mentioned work on subsampling with LM assumes stationary series, whereas our inference involves non-stationary processes. This distinction creates complications for subsampling because standard data blocks will not provide "mini-copies" of the original series, which is a fundamental property in many subsampling developments, even under forms of non-stationarity (cf. , p. 101). Hence, we need to define subsamples in a modified manner to accommodate non-stationarity, which we do presently.

§.§ Subsample construction

The null limit distributions in (<ref>)-(<ref>) are theoretically determined by replacing the parametric residuals û_k in the test statistic T_N from (<ref>) with the original errors u_k. Essentially then, subsamples should ideally provide small-scale copies of the original data-level series 𝒮_N ≡ {(x_1,u_1),…,(x_N,u_N)} for replicating the distribution of test statistics under the null hypothesis f(x) = g(x,θ). However, note that length-b < N data blocks based on time windows {(x_i,u_i),…,(x_{i+b-1},u_{i+b-1})}, with starting indices i=1,…,N-b+1, as the standard form of blocks for subsampling (cf. ), would not provide small-scale copies of 𝒮_N. The reason owes to the regressors x_k being non-stationary partial sums (e.g., x_k = ∑_{j=1}^k X_d(j) or x_k = ∑_{j=1}^k X_{d,λ}(j)). This issue can be repaired by uniformly "re-setting" the regressors in the i-th data block as {(x_i - x_{i-1}, u_i),…,(x_{i+b-1} - x_{i-1}, u_{i+b-1})}, by shifting these by the regressor x_{i-1} preceding the block, i=1,…,N-b+1; we define x_0 = 0 here. Such blocks do represent distributional copies of 𝒮_N of length b. As the pure errors u_k are not observed in practice, though, we replace each u_k with a residual, say ũ_k ≡ y_k - f̂(x_k), based on a full-data estimator f̂(x) of the trend function f(x). We then define subsamples through data blocks as

𝒮_{i,b} ≡ {(x_i - x_{i-1}, ũ_i), ..., (x_{i+b-1} - x_{i-1}, ũ_{i+b-1})}, i=1,...,N-b+1.

For clarity, in defining residuals ũ_k ≡ y_k - f̂(x_k) for use in subsamples, the same parametric residuals ũ_k = û_k = y_k - g(x_k, θ̂_N) may be applied as appear in the test statistic T_N in (<ref>) (i.e., setting f̂(x_k) = g(x_k, θ̂_N)). Another possibility is the use of nonparametric residuals defined by ũ_k = y_k - f̂(x_k), where f̂(x) = ∑_{t=1}^N y_t K((x - x_t)/h) / ∑_{t=1}^N K((x - x_t)/h) denotes a standard (full-data-based) kernel-smoothing estimator of f(x); see <cit.> for properties under LM. If the underlying trend f(x_t) diverges dramatically as a function of the regressors x_t with increasing time t, then the residuals ũ_k in subsampling, particularly in subsamples 𝒮_{i,b} with starting point i close to the maximal index N-b+1 in (<ref>), can naturally become less precise approximations of the pure errors u_k. Hence, for greater flexibility and generality, in the following we consider an implementation of subsampling whereby distributional estimators can potentially be based on the first M subsamples of length b, rather than simply all N-b+1 subsamples, for some selected number M ≤ N-b+1 satisfying M → ∞ with b/M → 0 as N → ∞. Sections <ref> and <ref> provide subsampling results for the MHM and SNU test statistics, respectively.
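In code, the re-set blocks of (<ref>) take only a few lines; a sketch (same illustrative conventions as the earlier snippets):

import numpy as np

def build_subsamples(x, resid, b):
    # Blocks S_{i,b}: regressors are re-set by subtracting x_{i-1} (with x_0 = 0),
    # so each block mimics a length-b partial-sum series started from zero.
    N = len(x)
    x0 = np.concatenate(([0.0], x))           # prepend x_0 = 0
    blocks = []
    for i in range(1, N - b + 2):             # i = 1, ..., N-b+1
        xs = x[i - 1: i + b - 1] - x0[i - 1]  # x_{i+j-1} - x_{i-1}, j = 1,...,b
        us = resid[i - 1: i + b - 1]
        blocks.append((xs, us))
    return blocks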
To establish subsampling under both LM and SLM structures, we also use a mild mixing assumption, described next. Under short range dependence, a standard α-mixing condition is often used to validate subsampling (e.g., ). On the other hand, under long range dependence, α-mixing may not hold. For LM Gaussian subordinated processes, <cit.> instead verified subsampling under a maximal correlation condition between blocks. Therefore, in order to unify conditions for verifying subsampling under either the LM or SLM case, we employ a between-block mixing coefficient denoted by α_{ℓ,b}, where ℓ is the distance between two blocks and b is the block size (cf. , Sec. 3.1), which is defined as

α_{ℓ,b} = sup_{j ∈ ℤ} {|P(A ∩ B) - P(A)P(B)| : A ∈ ℱ_{j+1}^{j+b}, B ∈ ℱ_{j+ℓ+1}^{j+ℓ+b}}, ℓ, b ≥ 1,

where ℱ_m^n denotes the sigma field generated by {z_m,...,z_n} for underlying shocks/errors z_n ≡ (X(n), u_n) and integers n, m ∈ ℤ, n ≥ m. If a process is α-mixing, as often associated with weak dependence, then the strong mixing coefficient α(ℓ) ≡ sup_{b ≥ 1} α_{ℓ,b} goes to zero as ℓ → ∞ by definition, as does α_{ℓ,b} for any block sequence b. However, for certain LM processes, <cit.> established subsampling by imposing an implicit condition of ∑_{ℓ=1}^N ρ_{ℓ,b} = o(N), along a block size satisfying b → ∞ with b/N → 0 as N → ∞, where ρ_{ℓ,b} corresponds to a type of ρ-mixing (or correlation-based) coefficient. Such ρ-mixing-based coefficients are generally stronger than those in (<ref>), by α_{ℓ,b} ≤ ρ_{ℓ,b} ≤ 1 for all ℓ, b ≥ 1; see <cit.> for details on mixing. In this sense, we may accommodate both weak and strong forms of dependence in the following mild condition, based on α_{ℓ,b}, for establishing subsampling:

* (Subsampling Condition) For a block size satisfying b^{-1} + b/N → 0 as N → ∞ and for any arbitrary ϵ ∈ (0,1), it holds that max_{[Nϵ] ≤ ℓ ≤ N} α_{ℓ,b} = o(1).

This simply states that length-b stretches of underlying errors which are "hugely distant," namely separated by at least [Nϵ] lags as a large distance relative to a block size b, should act approximately independently. This condition is natural for many error processes and entails that data blocks in subsampling can provide a type of replication (i.e., well-separated blocks are approximately independent). The condition is equivalent to ∑_{ℓ=1}^N α_{ℓ,b} = o(N) as N → ∞, which is perhaps less transparent but also common in subsampling.

§.§ Subsampling approximation of modified H-M statistic

To compute subsampling distribution estimators for the MHM test statistic, we use the first M ≤ N-b+1 subsamples 𝒮_{i,b}, i=1,…,M, of length b from (<ref>). For each subsample, we compute a version of the normalized test statistic τ_N^{-1} T_N as

τ_b^{-1} T_{i,b} ≡ (d_b/(b h_b)) ∫_ℝ {∑_{j=1}^b K[((x_{i+j-1} - x_{i-1}) - x)/h_b] ũ_{i+j-1}}^2 π(x) dx,

where the scaling τ_b ≡ b h_b/d_b represents the length-b analog of τ_N ≡ Nh/d_N. The empirical distribution of the subsample test statistics is given by

F̂_{M,b}(x) ≡ (1/M) ∑_{i=1}^M 𝕀{τ^{-1}_b T_{i,b} ≤ x}, x ∈ ℝ,

where 𝕀 denotes an indicator function, and this provides a subsampling approximation of the distribution of the test statistic τ_N^{-1} T_N under the null hypothesis f(x) = g(x,θ). We denote this null target distribution as F_{N,H}(x) ≡ Prob_H{τ_N^{-1} T_N ≤ x}, x ∈ ℝ, which converges as prescribed in (<ref>) or (<ref>) under LM or SLM, respectively. Theorem <ref> next justifies the use of subsampling estimation for the MHM test statistic.
Suppose Assumptions <ref>-<ref> with (<ref>) for LM series, or Assumptions <ref>-<ref> and <ref> with (<ref>) for SLM series. Further assume the subsampling condition along with M^{-1} + b/M → 0 and max_{1 ≤ t ≤ M} τ_b |f(x_t) - f̂(x_t)|^2 = o_p(1) as N → ∞, where M is the number of subsamples and f̂ is a full-data estimator of the trend f in subsampling. Then,

sup_{x>0} |F̂_{M,b}(x) - F_{N,H}(x)| →_p 0;

further, under a nonparametric estimator f̂, the above convergence holds even if the hypothesis f(x) = g(x,θ) fails to be true.

From Theorem <ref>, subsampling captures the null distribution of the test statistic under either LM or SLM structures, and so provides a tractable reference for assessing evidence. While residuals in subsampling may involve either parametric or nonparametric estimators of f, nonparametric-based residuals yield valid subsampling estimators even when the hypothesis f(x) = g(x,θ) fails, which can facilitate control of size; see the numerical studies of Section <ref>. The condition max_{1 ≤ t ≤ M} τ_b |f(x_t) - f̂(x_t)|^2 = o_p(1) in Theorem <ref> is generally mild (recall τ_b ≡ b h_b/d_b), and this allows added flexibility in coordinating the block length b and the number M of subsamples. For example, under parametric estimation with f̂(x) = g(x, θ̂_N), we might expect max_{1 ≤ t ≤ M} τ_b |f(x_t) - f̂(x_t)|^2 = O_p([b d_N]/[N d_b] h_b d_M^δ) to hold for some δ > 0, based on typical parametric rates ‖θ̂_N - θ‖^2 = O_p(d_N/N) (cf. ) combined with the partial sum growth max_{1 ≤ t ≤ M} |x_t| = O_p(d_M) (Assumptions <ref>-<ref>). Note that [b d_N]/[N d_b] → 0 and h_b → 0, while δ > 0 here is a constant related to the growth of g(x,θ), with δ = 0 for bounded functions. Under nonparametric estimation, the rate O_p(d_N/N) is replaced by O_p(d_N/[Nh]) (cf. , Sec. 2). In any theoretical case, max_{1 ≤ t ≤ M} τ_b |f(x_t) - f̂(x_t)|^2 = o_p(1) can hold readily under appropriate M and b. In practice, a block length b proportional to N^{1/2} can provide a starting point, as this has been suggested in other subsampling developments under strong dependence (cf. ).

§.§ Subsampling approximation of self-normalized U statistic

To describe the subsampling estimator for the SNU test statistic, we apply the same subsamples 𝒮_{i,b}, i=1,…,N-b+1, from (<ref>) based on (modified) length-b data blocks. For purposes of generality, we again consider subsampling estimators defined by the first M subsamples for some number M (M ≤ N-b+1) satisfying M → ∞ with b/M → 0 as N → ∞. On each subsample 𝒮_{i,b}, a statistic Z_{i,b} is computed as the length-b analog of the test statistic Z_N in (<ref>). These subsample statistics lead to a subsampling estimator,

F̂_{M,b}(x) ≡ (1/M) ∑_{i=1}^M 𝕀{Z_{i,b} ≤ x}, x ∈ ℝ,

for approximating the distribution of the test statistic Z_N under the hypothesis f(x) = g(x,θ); we denote this target distribution as F_{N,H}(x) ≡ Prob_H{Z_N ≤ x}, x ∈ ℝ. Theorem <ref> establishes subsampling for approximating the SNU statistic in testing.

Suppose Assumptions <ref>-<ref> under LM series, or Assumptions <ref>-<ref> and <ref> under SLM series with (<ref>). Further assume the subsampling condition along with M^{-1} + b/M → 0 and max_{1 ≤ t ≤ M} |f(x_t) - f̂(x_t)|^2 b^2 h_b/d_b = o_p(1) as N → ∞, where M is the number of subsamples and f̂ is a full-data estimator of the trend f in subsampling. Then,

sup_{x>0} |F̂_{M,b}(x) - F_{N,H}(x)| →_p 0;

further, under a nonparametric estimator f̂, the above convergence holds even if the hypothesis f(x) = g(x,θ) fails to be true.

Under Theorem <ref>, subsampling is valid for estimating the null sampling distribution of the SNU test statistic under both LM and SLM process forms.
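Combining the block construction with the SNU statistic gives this subsampling approximation in a few more lines; in the sketch below, the block-level bandwidth h_b is an assumed analog of the full-sample choice, and the two-sided p-value convention anticipates the procedure spelled out in the simulation section.

def snu_subsampling(x, resid, h, b, M=None):
    # Subsample statistics Z_{i,b} over the first M shifted blocks; their
    # empirical distribution approximates the null law of Z_N (Theorem 2).
    blocks = build_subsamples(x, resid, b)
    blocks = blocks[:M] if M is not None else blocks
    h_b = b ** (-1.0 / 3)        # assumed block analog of h = N^{-1/3}
    Z_N = snu_statistic(x, resid, h)
    Z_sub = np.array([snu_statistic(xs, us, h_b) for xs, us in blocks])
    pval = np.mean(np.abs(Z_sub) > np.abs(Z_N))   # two-sided, as in Section 5
    return Z_N, Z_sub, pval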
Theorem <ref> may be viewed as the subsampling counterpart of Theorem <ref>, and both results apply under similar assumptions.The rate conditionin Theorem <ref> regarding estimation f̂ of the trend function is the analogof the same condition in Theorem <ref>, where the scaling τ_bin the MHM test statistic is replaced bysimilar underlying scaling for the SNU test statistic.§ SIMULATION STUDIESIn this section, we investigatethe behaviorsof the three test statistics through the use ofMonte Carlo studies. To keep the computational burden feasible, we assume the sample size is N=500 and use M=2000 Monte Carlo replications in each situation examined. Further simulation results for a range of sample sizes N={50,100,200,500} are given in the Section B of the supplementary material. We generate data sets under the model with the form of (<ref>) as y_k = f (x_k) + σ u_k and assume σ=0.2. The regressor process x_k is defined for both LM and SLM settings followingSection <ref>. Let u_k follow an AR(1) structure such that u_k=ψ u_k-1+ϵ(k) with ψ=0.25.In Section <ref>, we extend the structure of u_k to other processes such as MA(1). Also, let𝔼(ϵ(0)^2)=1 and 𝔼(ξ(0) ϵ(0)) ≡ r={0.5,1} in the Σ matrix. Therefore, we have (ξ(k), ϵ(k)) are i.i.d. N(0,[ 1 r; r 1 ]). Additional results for r={-1,-0.5} are included in the supplementary material. A similar set-up has been previously used by <cit.>, <cit.>, and <cit.>.In construction of the test statistics (<ref>) and (<ref>), we use a Gaussian kernel, specifically,K(x) = 1/√(2 π)exp( - 1/2 x^2 ).Also in (<ref>) we take π(x) = 𝕀(-100 ≤ x ≤ 100) where 𝕀(·) is the indicator function, and when K is applied to (x_k-x)/h, we take the bandwidth h as h=N^-1/3. For generation of covariates, we examine values of the fractional differencing parameter as d = {0.1,0.2,0.3,0.4} and of the tempering parameter as λ={N^-1/3, N^-1/4, N^-1/5, N^-1/6}. §.§ Properties of self-normalized U and modified H-M statistics We first examine several properties of the SNU and MHM teststatistics that do not require the use of subsampling.The asymptotic distribution of the SNU test statistics is normal under short memory processes and exogeneity.We conducted a Monte Carlo simulation study as described previously using LM, SLM, and endogeneity, and determined p-values for the SNU test statistic using a standard normal reference distribution. For this exercise we used the hypothesized model, f(x_k) = θ_0 + θ_1 x_k, and for data generation used (θ_0,θ_1)=(0.0,1.0). Empirical distributions of the p-values are shown in Section B of the supplementary material for LM and SLM cases for several values of d and r that control endogeneity and memory properties.The observed distributionsdepart from what would be expected for samples from a uniform distribution on the unit interval, particularly as d and r increase together, indicating that a standard normal calibration is not appropriate for p-values with the SNU test statistic under LM.Distributions for the SLM cases appear a bit more uniform, but even these p-values are clearly far from uniformity for larger values of both d and r.These numerical results suggest that p-values for the SNU test statistic cannot effectively be approximated by a standard normal limit for LM or SLM cases, unlike the short-memory case, which motivates alternative subsampling approximations. 
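For readers who wish to reproduce the flavor of this exercise, a sketch of the null data generating process and the (inadequate) standard normal calibration follows; it reuses ma_coeffs and snu_statistic from the earlier sketches, and the settings (σ = 0.2, ψ = 0.25, h = N^{-1/3}) mirror the design described above. The crude AR(1) start-up is a simplification of our own.

import numpy as np
from scipy.stats import norm

def simulate_null_data(N, d, r, lam=0.0, psi=0.25, sigma=0.2, K=2000, rng=None):
    # Null DGP of this section: y_k = x_k + sigma*u_k with AR(1) errors u_k and
    # endogeneity E[xi(0) eps(0)] = r built in via a common Gaussian component.
    rng = np.random.default_rng(rng)
    xi = rng.standard_normal(N + K)
    eps = r * xi + np.sqrt(max(1.0 - r ** 2, 0.0)) * rng.standard_normal(N + K)
    phi = ma_coeffs(d, K) * np.exp(-lam * np.arange(K))
    x = np.cumsum(np.convolve(xi, phi, mode="full")[K:K + N])
    u = np.zeros(N)
    u[0] = eps[K]                                 # crude start for the AR(1)
    for k in range(1, N):
        u[k] = psi * u[k - 1] + eps[K + k]        # AR(1) equation errors
    return x, x + sigma * u                       # theta0 = 0, theta1 = 1

pvals = []
for rep in range(2000):
    x, y = simulate_null_data(N=500, d=0.3, r=1.0, rng=rep)
    theta = np.polyfit(x, y, 1)                   # least squares null fit
    resid = y - np.polyval(theta, x)
    Z = snu_statistic(x, resid, h=500 ** (-1.0 / 3))
    pvals.append(2 * norm.sf(abs(Z)))             # two-sided N(0,1) p-value
# A histogram of pvals is visibly non-uniform, increasingly so as d and r grow.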
Using the same simulation design just described, the SNU and MHM test statistics were computed for a variety of values of r and d.Of particular interest was the influence of the differencing parameter d and the level of endogeneity r on the realized distributions of these statistics. Smoothed empirical densities of the SNU statistic are shown in Figure <ref> and we see that when there is a substantial level of endogeneity (r=1.0) thedifferencing parameter d begins to influence the distributions, pulling them quite far from a center near zero. Smoothed densities for the MHM statistic are shown in Figure <ref> in whichthe differencing parameter dinfluences all of the distributions, though that effect is considerably more pronounced for the LM version of the statistic than it is for the SLM version. The value of d determines the location of these distributions, with larger values of d corresponding to larger locations.In total, these results provide further motivation for the use of subsampling to determine reference distributions.§.§ Empirical size and power of tests In this section we examine the behavior of the test statistics of interest, using subsampling methodology to approximate reference distributions for the SNU and MHM test statistics. There are two versions of the SNU test statistic represented in what follows, differingbased on whether the residuals used in the construction of subsamples were computed from a parametric or a nonparametric estimate of the regression function (see Section <ref>).The approximate chi-squared limit is used as a reference distribution for the P test statistic.To use subsampling we need to choose block lengths.<cit.> suggested a block length b proportional to N^1/2 in simulations under strong dependence, while <cit.> showeda block order N^1/2 to be optimal for subsampling variance estimation with many LM processes defined by Gaussian subordination.This block choice also agreeswith Remark 4.2 of <cit.> as well assubsampling theory of <cit.>under LM for d∈(0,1/2).Based on these guideposts, we examine block lengths b = [cN^1/2] for c = 0.5, 1, 2, and 4.With N=500, these turn out to be 11, 22, 44, and 89 observations long, resulting in 489, 478, 456 and 411 blocks, respectively. Test statistics were computed for each of these blocks for each simulated data set, and used to construct the subsampling distributions F̂_M,b as discussed in Section <ref>. We present results using the maximal number N-b+1 of subsamples, although findings with smaller numbers of blocks were qualitatively similar and appear in supplementary material. These blockswere then taken to approximate the finite sampling distributions of the test statistics for the purpose of determining p-values. In all cases, the hypothesis tested was f(x_k) = θ_0 + θ_1 x_k.To examine the size associated with these tests, data were generated as previously with θ_0=0.0 and θ_1=1.0, as well as r=0.5 and r=1.0. 
The SNU and MHM test statistics differ in the range of values possible, which influences determination of p-values as being from two-sided or one-sided test procedures.To determine the p-value for a given simulated data set, we used the following procedure for a nominal test level of α.Here, Z_N denotes the actual value of the SNU test statistic computed from the entire data set, and similarly for the MHM test τ^-1_N T_N under LM or under SLM.(i) For the SNU test statistic, we calculated a p-value by the proportion of subsample statistics Z_i,b exceeding the observedstatistic Z_N in absolute value:P_m≡1/N-b+1∑_i=1^N-b+1𝕀{|Z_N|< |Z_i,b|}. (ii) For the MHM test statistic (which is non-negative), we computed a p-value P_m ≡ 1 - F̂_N-b+1,b( τ_N^-1T_N) as the proportion of subsample statistics τ_b^-1T_i,b exceeding the observedstatistic τ_N^-1T_N.Therefore, for a given nominal size level of α, the observed size across simulation runs is given as∑_m=1^2000𝕀{P_m ≤α}/2000. In conducting the simulations reported in this section we identified a difficulty with the MHM statistic that appeared to be in the form of a finite-sample upward bias.An implication was that using the MHM statistic resulted in a nearly 100% rejection of the hypothesis when the hypothesis was, in fact, the true data generating mechanism and the test was conducted at a nominal 0.05 level.In order to mitigate the problem we suggest a de-biased version of the MHM test statistic based on subsampling, where the related steps are given in the Section C of the supplementary material. All further numerical results related to the MHM statistic are based on that de-biased version. In part due to this correction, we present results separately for a combination of the SNU and P statistics, and the MHM statistic alone.Empirical sizes of the SNU and P test statistics at the nominal level 5% are given in Table <ref>.Tests conducted with the SNU statistic and subsampling range froma bit conservative to a bit liberal, though there does not appear to be a clearly identifiable pattern across values of d and r that is consistent for all memory processes.It does seemthat, in general, size for the SNU statistic decreases as block size increases within any level of d, r, and memory process; as may be expected under LM/SLM, nominal size is often better maintained with longer block sizes for subsampling in Table <ref>, where non-parametric residuals may also help toward improving size control over parametric residuals in the subsampling approximation.Empirical size estimates for the P statistic hover around the nominal level of 0.05.To examine power under a local departure from the hypothesis, data were simulated using f(x)=θ_0+θ_1x+ρ_N |x|^ν with ρ_N=1/(N^1/4+ν/3h^1/4), with θ_0=0.0, θ_1=1.0, and ν=3. This same generating mechanism has been used in other studies (e.g., ) to study power. The procedure described previously was used to compute p-values.Results for the SNU and P test statistics are given in Table <ref>. The values of power for the SNU test statistic are excellent and are usually around 100% using either parametric or non-parametric residuals. The values of power for the P test statistic are smaller, particularly for d=0.1, but do increase somewhat for d=0.4 and are perhaps a bit greater when r=1.0 than when r=0.5.This differs from the results on size for the P statistic, in which there was no discernible effect of either d or r.The results on size and power for the de-biased MHM test statistic are given in Table <ref>. 
We observe that the values of size are small and, as the block size increases, the values become smaller. For a given level of endogeneity, size appears larger for the larger value d=0.4 than for the smaller d=0.1. Power of the de-biased MHM statistic is quite low compared to either of the other two statistics, never reaching 0.40 and usually staying below 0.30; such low power is an artifact of de-biasing, which necessarily offsets the magnitude of the statistics to control size but then reduces power. Despite the rather lackluster values of Table <ref>, the size performance of the MHM statistic has been greatly improved by the de-biasing modification made possible by subsampling. In the next section we expand the situations under which we can assess the performance of these statistics.

§.§ Extended simulations

We wish to examine the behavior of the statistics under a larger, flexible set of both integrable and nonintegrable regression functions beyond the local alternative in Section <ref>. To this end, we simulated data from a set of 7 nonintegrable functions and a set of 3 integrable functions. We used the same sets of functions previously used by <cit.> and will not list them here. Results are provided in the supplementary material. For the nonintegrable functions, the hypothesis was y_k = θ_0 + θ_1 x_k. For the integrable functions, the hypothesis was y_k = exp(-θ_1 |x_k|) + σ u_k. In all cases, u_k was taken to follow an AR(1) structure as was used previously in Section <ref>.

In general, the results of these simulations (deferred to the supplementary material) reinforce the patterns seen in Tables <ref>, <ref> and <ref>. The P statistic maintains size across particular situations, while size for the SNU statistic varied from case to case, often being a bit above the nominal level for the parametric-residual version and often below the nominal level for the nonparametric-residual version, with nominal size better maintained over larger block sizes under LM/SLM. Both the SNU statistic and the P statistic exhibit good power, with one being a bit higher in some cases and the other higher in other cases. The values for the de-biased MHM test statistic show that it fairly uniformly has the greatest size and lowest power among all the statistics considered.

Next, in order to investigate the performance of the test statistics for other generating processes for the regression errors u_k, we specified data generating mechanisms in which such errors follow an MA(1) process, u_k = μ + ϵ(k) + θϵ(k-1), where (μ,θ)' = (0,0.8), as opposed to the previously used AR processes for the errors. The rest of the simulation set-up is the same as given in Section <ref>. In particular, we are interested in the relative sizes of the SNU and P statistics, in part to examine the robustness of the P statistic and the flexibility of the SNU statistic. The results for size under the basic linear model used previously are given in Table <ref>, with the update of now applying MA(1) errors. While the values of empirical size for the SNU statistic are somewhat elevated over the nominal 0.05 value, they appear to stay under control far better than those for the P statistic, which explode to a value of 1.0 no matter what the values of d and r or the type of memory process. It appears that the P statistic is highly sensitive to departures of the equation error process for the u_k in (<ref>) from an AR(p) structure, which may severely deteriorate its performance.
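Tying the earlier sketches together, the observed sizes reported in the tables amount to the fraction of Monte Carlo p-values at or below α; in outline (with b = [2N^{1/2}], one of the four block choices studied above, and parametric least squares residuals):

def empirical_size(alpha=0.05, n_rep=2000, N=500, d=0.1, r=0.5, lam=0.0):
    # Observed size: sum_m I{P_m <= alpha} / n_rep, with subsampling p-values
    # for the SNU statistic computed on each simulated null data set.
    b = int(2 * N ** 0.5)
    hits = 0
    for rep in range(n_rep):
        x, y = simulate_null_data(N, d, r, lam=lam, rng=rep)
        theta = np.polyfit(x, y, 1)
        resid = y - np.polyval(theta, x)
        _, _, pval = snu_subsampling(x, resid, h=N ** (-1.0 / 3), b=b)
        hits += (pval <= alpha)
    return hits / n_rep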
§ APPLICATION TO CARBON KUZNETS CURVE

The Carbon Kuznets Curve (CKC) hypothesis was proposed by <cit.>, and attempts to explain an inverted-U-shaped relationship between CO_2 emissions and gross domestic product (GDP) of a country as due to changes in air pollution that occur as a country develops technologically. <cit.> studied 31 countries over the period 1950–2006 and showed that the relationship between CO_2 emissions and GDP varies considerably among countries. They suggest that the relationship can follow straight-line, quadratic, or cubic functional forms, and that the use of cointegration techniques can help to prevent identification of a spurious relationship between CO_2 emissions and economic activity (see <cit.> as an example). <cit.> suggested that the relation between log(CO_2) emissions and log(GDP) may be a straight line for Spain and a quadratic curve for France. Thus, we fit and test a model with a straight-line regression function f(x_k) = θ_0 + θ_1 x_k for Spain and a model with a quadratic function f(x_k) = θ_0 + θ_1 x_k + θ_2 x_k^2 for France using the procedure developed in this article.

The data set that we use runs from 1950 to 2008 and contains 59 observations. The CO_2 emission data come from the Carbon Dioxide Information Analysis Center (), and the GDP data come from <cit.>. It is generally accepted that both CO_2 emissions and GDP exhibit nonstationary behaviors over time, and it has been suggested that an assumption of exogeneity may not hold because of measurement error and other sources of errors; see <cit.> for a related discussion. In preparation for the application of the cointegrated regression model (<ref>), we used the package from the statistical software developed by <cit.> to verify that the log(GDP) processes for both countries exhibit SLM behavior (Whittle estimators of the ARTFIMA parameters were d=1.079 and λ=0.138 for Spain, and d=1.093 and λ=0.138 for France).

We also apply a nonparametric regression smoother to these data, which provides a visual comparison with the parametric fits. We use a Nadaraya-Watson (N-W) kernel regression estimator of f(x),

f̂(x) = ∑_{k=1}^N y_k K_h(x_k - x) / ∑_{k=1}^N K_h(x_k - x),

where K_h(s) = h^{-1} K(s/h) for bandwidth h. The kernel function K is chosen to be the Gaussian kernel (<ref>). We select a suitable bandwidth by a leave-one-out cross-validation procedure that minimizes a least squares deviation criterion, described more fully in Section D of the supplementary material. Values of the least squares cross-validation criterion for a range of bandwidths are plotted on the left-hand side of Figure <ref>. The optimal bandwidths are depicted as vertical lines and were chosen to be 0.151 for Spain and 0.073 for France.

We focus on the SNU and P test statistics, excluding the MHM statistic here due to its bias issues. The results of our tests are contained in Table <ref>. Subsampling block sizes for the SNU test statistic were chosen based on the display of p-values versus a continuous range of block sizes given in Figure <ref>, which provides a useful tool in practice for selecting a block size. By then applying the block selection rule of minimal volatility (cf. ), an appropriate block size can be chosen as a point or region where the p-values stabilize visually, for instance b=44 for Spain.
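For completeness, a sketch of the N-W smoother and the leave-one-out bandwidth search (the criterion itself is stated in Section D of the supplement; the grid of candidate bandwidths is the user's choice):

import numpy as np

def nw_fit(x, y, h, x0):
    # Nadaraya-Watson estimate f_hat at points x0 with a Gaussian kernel K_h
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    w = np.exp(-0.5 * ((x[None, :] - x0[:, None]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loocv_bandwidth(x, y, h_grid):
    # Least squares leave-one-out criterion LCV(h) from Section D
    scores = []
    for h in h_grid:
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(K, 0.0)                 # omit the k-th observation
        fhat = (K @ y) / K.sum(axis=1)           # (very small h can zero out
        scores.append(np.mean((y - fhat) ** 2))  #  rows; restrict the grid)
    return h_grid[int(np.argmin(scores))]

# Example usage: h_opt = loocv_bandwidth(x, y, np.linspace(0.02, 0.5, 100))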
For Spain, the hypothesis of a straight-line regression function is not rejected using either the SNU statistic or the P statistic, at least with the larger values of ℒ. For France, however, the hypothesized quadratic model is solidly rejected using the SNU statistic, but is not even questionable according to any version of the P test procedure. Section B of the supplementary material provides some further numerical study of block selection via minimal volatility, suggesting that such selections are also in reasonable concert with the block sizes appearing in Table <ref>.

The right-hand side of Figure <ref> shows scatterplots of the data, the nonparametric kernel estimates, and the fitted parametric models. For Spain, in the upper right panel, the fitted straight-line regression appears visually reasonable. But for France, in the lower right panel, the fitted quadratic model is clearly an inadequate description of the relation between CO_2 and GDP. This raises the question of why the P statistic fares so poorly in this application, while in the simulation studies of Section <ref> it consistently maintained its nominal size, even if it was not always the most powerful against the data generating models examined. The difficulty is not with the P test statistic per se, but seemingly rather with use of the data both to determine the order of the AR(p) process and then to compute the statistic. Residuals from a poorly fitting parametric regression function can exhibit the structure of an AR process even if the true data generating mechanism contains no such structure. To demonstrate this, we simulated data from a regression model with fixed covariates, independent errors, and a parametric nonlinear response function not entirely different in shape from the nonparametric carbon curve for France (see Section E of the supplementary material). Applying the P test procedure to these data with a quadratic hypothesis produces results analogous to those seen in Table <ref>, in that the test is unable to detect the departure from the hypothesized model. This same phenomenon appears to be occurring with the data from France. In the simulations, the AR(1) structure of the equation error terms was assumed known and was not subject to this deleterious double use of the data.

§ CONCLUDING REMARKS

There have been several attempts in the literature to study the adequacy of the form of regression functions within the context of cointegration. In particular, it is of great interest to consider the class of endogenous regressors with LM or SLM input shocks, as such structure is expected in real applications. In this article, we have developed and justified a subsampling procedure to determine finite-sample reference distributions for the SNU and MHM test statistics. This approximation allows us to study these two statistics effectively under complex dependence structures in the data. In addition, we make comparisons with the P test statistic, developed explicitly for regression errors that follow an autoregressive process. Simulation studies demonstrate that the MHM test statistic has a bias issue, and we have suggested a potential bias-correction procedure that greatly improves its behavior with respect to size. The SNU test statistic, by contrast, performs well and is flexible with respect to the generating process for the regression errors. Use of the P test statistic does not require subsampling, as its limiting distribution can be approximated by a chi-squared distribution.
The major drawbacks to this statistic are a lack of robustness to non-autoregressive error processes and the need to determine the order of the assumed AR structure used in its construction. In the application, we have been able to successfully examine the form of regression functions with the SNU test statistic combined with subsampling.

§ SUPPLEMENTARY MATERIALS

Supplementary material available online includes proofs of the theorems and additional numerical results. A stand-alone package for implementing the methods described in this paper can be downloaded from <https://github.com/SepidehMosaferi/TestStatistics_Subsampling>.

§ ACKNOWLEDGEMENTS

We would like to thank an associate editor and anonymous reviewers who made excellent comments and suggestions that helped us to improve the paper. Nordman's research was partially supported by NSF DMS-2015390.

§ DISCLOSURE STATEMENT

No potential conflict of interest was reported by the author(s).

Supplementary Material for "Properties of Test Statistics for Nonparametric Cointegrating Regression Functions Based on Subsamples"

This supplementary material is structured as follows. We provide detailed proofs of the theorems in Section A. Further simulation results are in Section B. Section C covers the steps for the de-biased MHM test statistic. Section D provides practical guidance on the selection of the bandwidth. Section E covers a simulated example in which the regression errors do not follow an AR structure, used to examine the form of the regression function through the SNU and P test statistics.

§ A TECHNICAL DETAILS AND PROOFS

In this section, we provide detailed proofs of the theorems.

Proof of Theorem <ref>. Recall the estimator F̂_{M,b}(x) ≡ (1/M) ∑_{i=1}^M 𝕀{τ^{-1}_b T_{i,b} ≤ x}, x ∈ ℝ, involves subsamples 𝒮_{i,b} ≡ {(x_i - x_{i-1}, ũ_i),…,(x_{i+b-1} - x_{i-1}, ũ_{i+b-1})}, i=1,…,M ≤ N-b+1, of length b and corresponding subsample test statistics

τ_b^{-1} T_{i,b} ≡ (d_b/(b h_b)) ∫_ℝ {∑_{j=1}^b K[((x_{i+j-1} - x_{i-1}) - x)/h_b] ũ_{i+j-1}}^2 π(x) dx

with the scaling τ_b ≡ b h_b/d_b. Above, ũ_i ≡ y_i - f̂(x_i), i ≥ 1, denote residuals from a (full data) estimator f̂ of the trend f, which estimate the pure errors u_i = y_i - f(x_i). The target distribution F_{N,H_0}(x) ≡ Prob_{H_0}{τ_N^{-1} T_N ≤ x}, x ∈ ℝ, of the original-level test statistic τ_N^{-1} T_N under the null hypothesis H_0: f(x) = g(x,θ_0) is determined by substituting the true errors u_k in (<ref>) and converges as prescribed in (<ref>) or (<ref>) under LM or SLM, respectively; see <cit.> and <cit.>. That is, if we let Z_0 (say) denote the continuous limiting variable in (<ref>) or (<ref>), with distribution function denoted as F_0(x) ≡ Prob(Z_0 ≤ x) for x ∈ ℝ, it holds that sup_{x∈ℝ} |F_{N,H_0}(x) - F_0(x)| → 0 as N → ∞. Hence, it suffices to establish Theorem <ref> with F_0(x) in place of F_{N,H_0}(x). We first consider showing that

sup_{x∈ℝ} |F̅_{M,b}(x) - F_0(x)| →_p 0

holds as N → ∞ for a modified subsampling estimator F̅_{M,b}(x) ≡ (1/M) ∑_{i=1}^M 𝕀{τ^{-1}_b T̅_{i,b} ≤ x}, x ∈ ℝ, where the subsample statistics τ_b^{-1} T̅_{i,b} are defined by replacing the residuals ũ_i with the true errors u_i in (<ref>). By strict stationarity of the modified subsample statistics (i.e.
τ_b^-1T̅_1,bD=τ_b^-1T̅_i,b for i ≥ 1), it holds fora given real x that𝔼(F̅_M,b(x)) =Prob(τ_b^-1T̅_1,b≤ x) →F_0(x)as b→∞.Likewise, in considering the variance of F̅_M,b(x) for a fixed x, we haveVar(F̅_M,b(x)) =1/M^2∑_ℓ=-M^M (M-|ℓ|) Cov[𝕀{τ_b^-1T̅_1,b≤ x }, 𝕀{τ_b^-1T̅_1+ℓ,b≤ x }] ≤ 2 ϵ +max_[M ϵ] ≤ℓ≤ Mα_ℓ,bfor any given ϵ∈ (0,1), which followsby splitting the sum over |ℓ| ≤⌊ M ϵ⌋ and |ℓ|≥⌈ M ϵ⌉ applying the covariance bound |Cov[𝕀{τ_b^-1T̅_1,b≤ x }, 𝕀{τ_b^-1T̅_1+ℓ,b≤ x }]|≤min{1,α_ℓ,b}. Underthe subsampling condition with b/M → 0 and M→∞ (as N→∞),it follows thatlim sup_N→∞Var(F̅_M,b(x)) ≤ 2 ϵ and, as ϵ>0 is arbitrary, we then have lim_N→∞Var(F̅_M,b(x))=0. Consequently, we have shown|F̅_M,b(x) -F_0(x)|0 holds as N→∞ for any x∈ℝ.From this,we may establish (<ref>) using the correspondence between convergence in probability and convergence almost surely along subsequences.Namely,let {r_i}_i=1^∞ denotea dense but countable collectionof points in ℝ (i.e., continuity points of Z_0) and let N_k denote an arbitrary sequence of N.As |F̅_M,b(r_1) -F_0(r_1)|0 holds, there exists a subsequence N_k,1 of N_k such that |F̅_M_k,1,b_k,1(r_1) -F_0(r_1)| → 0 holds almost surely as N_k,1→∞. Continuing in this fashion, by |F̅_M,b(r_i) -F_0(r_i)|0,there exists a subsequence of N_k,i of N_k,i-1 such that |F̅_M_k,i,b_k,i(r_i) -F_0(r_i)| → 0 almost surely as N_k,i→∞ for each i >1.As {r_i} is countable, there then exists a set of probability 1 and a subsequence N_k,k (the diagonal of a matrix with entries N_k,i, k,i≥ 1) where|F̅_M_k,k,b_k,k(r_i) -F_0(r_i)| → 0holds as N_k,k→∞ for each i ≥ 1, which entails that sup_x∈ℝ|F̅_M_k,k,b_k,k(x) -F_0(x)| → 0 almost surely (i.e., continuous Z_0).As the originating subsequence N_k was arbitrary, the latter implies(<ref>).To ease notation in completing the proof, we assume the number of subsamples to be M-b+1 rather than M; this change is inconsequential as both sup_x∈ℝ|F̂_M,b(x) - F̂_M-b+1,b(x)| ≤ 2b/M → 0and sup_x∈ℝ|F̅_M,b(x) - F̅_M-b+1,b(x)|≤ 2b/M → 0 by b/M→ as N→∞.To finish the proof, let Y_b^* denote a random draw from the subsampling distribution F̂_M-b+1,b, whereby Y_b^* can be defined based on (<ref>) as Y_b^* ≡τ_b^-1 T_I^*,b = d_b/b h_b∫_ℝ{∑_j=1^b K[(x_I^*+j-1-x_I^*-1)-x/h_b] ũ_I^*+j-1}^2 π(x) dxusing an integer I^* drawn uniformly from {1,…,M-b+1}.Correspondingly, define Z_b^* ≡d_b/b h_b∫_ℝ{∑_j=1^b K[(x_I^*+j-1-x_I^*-1)-x/h_b] u_I^*+j-1}^2 π(x) dxas a random draw from the modified subsampling distribution F̅_M-b+1,b, so that|Y_b^* - Z_b^*| ≤ 2 |Z_b^* R_b^*|^1/2 + R_b^*holds by the Cauchy-Schwarz inequality applied for a remainderR_b^* ≡d_b/b h_b∫_ℝ{∑_j=1^b K[(x_I^*+j-1-x_I^*-1)-x/h_b] [f(x_I^*+j-1) -f̂(x_I^*+j-1)] }^2 π(x) dxinvolving estimated trends f̂.To establish Theorem <ref>, it now suffices to show 𝔼_*|R_b^*|0as N→∞, where 𝔼_* denotes expectation with respect to the resampling(i.e., I^*) conditional on the data.Then, from(<ref>) and (<ref>), we have that, for any subsequence N_k of N, there exists a further subsequence N_ℓ of N_k such thatZ_b_ℓ^*Z_0 and R_b_ℓ^*0 hold as N_ℓ→∞ almost surely.By Slutsky's theorem with (<ref>), it then follows that Y_b_ℓ^*Z_0as N_ℓ→∞ almost surely, so that sup_x |F̂_M_ℓ-b_ℓ+1,b_ℓ(x) - F_0(x) | → 0as N_ℓ→∞ almost surely.As the subsequence N_k was arbitrary, this gives the convergence in probability in Theorem <ref>.To show (<ref>), we first expand𝔼_*|R_b^*| = 1/M-b+1∑_i=1^M-b+1d_b/b h_b∫_ℝ{∑_j=1^b K[(x_i+j-1-x_i-1)-x/h_b] [f(x_i+j-1) -f̂(x_i+j-1)]}^2 π(x) dx≤ Δ_Nmax_1 ≤ j ≤ M [f(x_j) -f̂(x_j)]^2forΔ_N ≡1/M-b+1∑_i=1^M-b+1d_b/b 
h_b∫_ℝ{∑_j=1^b K[(x_i+j-1-x_i-1)-x/h_b]}^2 π(x) dx.Using stationarity of the adjusted subsamples, along with sup_x 𝔼( ∑_j=1^b K[(x_j-x)/h_b] )^2 ≤ C [b h_b/d_b]^2 (e.g. Lemma 8.2 of <cit.>) and ∫π(x) dx <∞, we have that𝔼Δ_N = d_b/b h_b∫_ℝ𝔼(∑_j=1^b K[(x_j-x)/h_b] )^2 π(x) dx = O(b h_b/d_b).It follows that [d_b/(b h_b)] Δ_N = O_P(1), while[(b h_b)/d_b] max_1 ≤ j ≤ M [f(x_j) -f̂(x_j)]^2 =o_P(1) by assumption, which then establishes(<ref>) from (<ref>). Proof of Theorem <ref>. The proof is similar to the proof of Theorem <ref>, and we briefly overview the details. For each subsamplei=1,…,M, recallsubsampleanalog Z_i,b≡ S_i,b/√(2 V_i,b) of the test statisticZ_N ≡S_N/√(2V_N) from (<ref>), whereτ_b^-1/2S_i,b ≡ ∑_k,j=1, k ≠ j^bũ_k+i-1ũ_j+i-1 K[x_k+i-1-x_j+i-1/h],τ_b^-1V_i,b^2 ≡ ∑_k,j=1, k ≠ j^bũ^2_k+i-1ũ^2_j+i-1 K^2[x_k+i-1-x_j+i-1/h],whereũ_i ≡y_i - f̂(x_i), i ≥ 1, are residuals from a (full data) estimator f̂ of the trend f that again estimate pure errors as u_i=y_i-f(x_i) and we define a scaling term τ_b ≡ b^2 h_b/d_b.By substituting error terms u_i for residuals ũ_i, we may define counterpart versions of subsample statisticsZ_i,b≡ S_i,b/√(2 V_i,b) as, say, Z̅_i,b≡S̅_i,b/√(2 V̅_i,b^2) with S̅_i,b and V̅^2_i,b. The resulting subsample copies (τ_b^-1/2S̅_i,b, τ_b^-1V̅_i,b^2) havethe same distribution for i=1,…,M by stationarity.Furthermore, a subsample copy (τ_b^-1/2S̅_1,b, 2τ_b^-1V̅_1,b^2)(Z_1,Z_2) converges in distribution as b→∞, involving a bivariate pair of random variables, and the associated statistic Z̅_1,b≡S̅_1,b/√(2 V̅_1,b^2) Z_0 ≡ Z_1/√(Z_2) converges to the continuous limit distribution described in (<ref>).If we let M-b+1 denote the number of subsamples andif (Y_1,b^*,Y_2,b^*)≡ (τ_b^-1/2S̅_I^*,b, 2τ_b^-1V̅_I^*,b^2) denotes a randomly selected subsampling pair, defined by a uniform I^* draw from {1,…,M-b+1}, then the subsampling distribution induced by(Y_1,b^*,Y_2,b^*) converges to the distribution of(Z_1,Z_2) (in probability) under the mixing/subsampling condition. If we similarly define, with the same variable I^*, a subsampling pair, say (W_1,b^*,W_2,b^*)≡ (τ_b^-1/2S_I^*,b, 2τ_b^-1V_I^*,b^2) based on a random draw from the paired original subsampling statistics{(τ_b^-1/2S_1,b, 2τ_b^-1V_1,b^2),…, (τ_b^-1/2S_M-b+1,b, 2τ_b^-1V_M-b+1,b^2)}(i.e., computed from residuals ũ_i), then it is enough to show thatτ_b^-1/2𝔼_* |W_1,b^* -Y_1,b^*| + τ_b^-1𝔼_* |W_2,b^* -Y_2,b^*| 0 as N→∞, where 𝔼_* denotes expectation with respect to the resampling(i.e., I^*) conditional on the data.By Slutsky's theorem and the probabilistic convergence of the distribution of Y_1,b^*/√(Y_2,b^*) to the distribution of the target limit Z_0 ≡ Z_1/√(Z_2), we can then conclude that the distribution of W_1,b^*/√(W_2,b^*) converges to the distribution of Z_0 (in probability). This yieldsconvergence in probability of the subsampling distribution estimator F̂_M-b+1,b, as F̂_M-b+1,b is the distribution of W_1,b^*/√(W_2,b^*). To show τ_b^-1/2𝔼_* |W_1,b^* -Y_1,b^*| 0, we write τ_b^-1/2𝔼_* |W_1,b^* -Y_1,b^*| =τ_b^-1/2/M-b+1∑_i=1^M-b+1 |S_i,b-S̅_i,b|≤[ δ+δ_N^2] Δ_Nfor δ_N ≡max_1 ≤ j ≤ M| f(x_j)-f̂(x_j)| andΔ_N≡2τ_b^-1/2/M-b+1∑_i=1^M-b+1∑_k,j=1, k ≠ j^b(1+|u_k+i-1| + |u_j+i-1| )K[x_k+i-1-x_j+i-1/h]. Using𝔼( ∑_j=i+1^b K[(x_j-x_i)/h_b] )^2 ≤ C [b h_b/d_b]^2 (e.g. 
Lemma 8.2 of <cit.>), 𝔼|u_i|^2=𝔼|u_1|^2<∞ and Holder'sinequality, we may bound 𝔼Δ_N ≤τ_b^-1/2 b^2 h_b/d_b =τ_b^1/2 so that Δ_N=O_p(τ_b^1/2).Hence, we haveτ_b^-1/2𝔼_* |W_1,b^* -Y_1,b^*| = O_P(Δ_n)O_P(δ_N + δ_N^2) = o_P(1) using that δ_N^2 τ_b = o_P(1) by assumption.This establishes τ_b^-1𝔼_* |W_1,b^* -Y_1,b^*| 0.An analogous argument also shows τ_b^-1/2𝔼_* |W_2,b^* -Y_2,b^*| 0.§ B FURTHER SIMULATION RESULTS In this section, we provide further simulation results divided into six subsections. §.§ B.1 Validity of Normal Assumption for the SNU Test Statistic We provide Monte Carlo histograms of p-values for the SNU test statistic for d={0.1,0.2,0.3,0.4} under the null hypothesis of y_k=θ_0+θ_1 x_k + σ u_k. The histograms are given in Figures <ref> and <ref>. The results confirm that the distribution of SNU test statistic does not follow a standard normal.§.§ B.2 Nonintegrable and Integrable Regression Functions We study the size and power of test statistics under a variety of nonintegrable and integrable regression functions as follows. * Nonintegrable regression function:y_k = θ_0+θ_1 x_k+ σ u_k,y_k = θ_0+ θ_1 x_k + 0.5 |x_k|^2 𝕀(|x_k| ≤ 10) + σ u_k,y_k = θ_0 + θ_1 x_k + 20 exp(-|x_k|^2) + σ u_k,y_k = θ_0 + θ_1 x_k + 0.1 |x_k| + σ u_k,y_k = θ_0 + θ_1 x_k + 0.1 |x_k|^2 + σ u_k. * Integrable regression function:y_k = exp(-θ_1 |x_k|) + σ u_k,y_k = exp(-θ_1 |x_k|) + 0.5 |x_k|^2 𝕀(|x_k| ≤ 10) + σ u_k, y_k = exp(- θ_1 |x_k|) + 20 exp(-|x_k|^2) + σ u_k, y_k = exp(- θ_1 |x_k|) + 0.1 |x_k| + σ u_k,y_k = exp(- θ_1 |x_k|) + 0.1 |x_k|^2 + σ u_k.For the nonintegrable regression functions, the generating model in (<ref>) is explained in Section <ref> of the main manuscript. This model is used for calculating the size, and the results are in the manuscript. We use the generating models in (<ref>)–(<ref>) to calculate the power of test statistics, and the results are given in Tables <ref>–<ref>. For the integrable regression functions, the generating model in (<ref>) is used for calculating the size of the tests, and the results are listed in Table <ref>. The empirical powers for all the test statistics under the generating models (<ref>)–(<ref>) are listed in Tables <ref>–<ref>. Note that models in(<ref>) and (<ref>) are not integrable unlike models given in (<ref>) and (<ref>). Since the null model in (<ref>) is integrable, we have used the title of integrable to refer to the models. §.§ B.3 Size for Integrable Regression Function with MA(1) Process In Table <ref>, we provide the empirical sizes for the SNU and P test statistics for the integrable regression function (<ref>) when u_k's follow an MA(1) process. The results show that the values of size are relatively small for the SNU test statistic, which is not the case for the P test statistic. §.§ B.4 Size for Thinned Block Sizes In Table <ref>, we provide the values of empirical size for the SNU and de-biased MHM test statistics under the generating models (<ref>) and (<ref>),where the number of blocks is thinned to M=N^0.9 and the level of endogeneity is equal to 0.5 (r=0.5); results with this number of subsamples are quantitatively similar to results presented earlier with the full subsample number of these generating models. §.§ B.5 Density and Size of Tests for Different Values of r and N Firstly, we provide Monte Carlo densities of SNU and MHM test statistics for different values of endogeneity r={-1, -0.5, 0.5, 1} and different sample sizes N={50, 100, 200, 500 }. We assume h=N^-1/3, and the tempering parameter λ is N^-1/6 for the SLM process. 
The results are listed in Figures <ref>–<ref>. Secondly, we calculate the size of all test statistics discussed in this paper for nonintegrable regression function (<ref>) and integrable regression function (<ref>) for different values of r and sample size N. We again assume h=N^-1/3, and the tempering parameter λ is N^-1/6 for the SLM process. Additionally, we allow the residuals for the subsamples to follow the parametric forms. The results are graphically displayed in Figures <ref>–<ref>. Note that the four values for the SNU and de-biased MHM test statistics from left to right correspond to the 4 block sizes in the paper, and the three values for the P test statistic from left to rightcorrespond to the 3 values of ℒ=(6, 12, 18). As the sample size and block size increase, the values of size for all the test statistics decrease. The nominal size level of α is assumed to be 0.05 for all the cases. Weinvestigated the power of test statistics, and they are relatively large (except for the de-biased MHM test statistic). Overall, the patterns of results are aligned with our other results given in both the main paper and the supplementary material. We also examined the size and power of all test statistics for higher order values of d such as d ∈ (0.5,2) and in particular for the SLM structure. We found that the results consistently remain within a reasonable range but have not given here for the brevity. §.§ B.6 Simulated Examples for Minimal Volatility Rule Here, we give some examples to illustrate the minimal volatility rule with simulated data sets. In the demonstration, we generated 50 time series with sample sizes N={50, 100, 200, 500 }, d=0.1, and r=0.5 under the null (f(x)=x) and alternative (f(x)=x+ρ_N |x|^ν) hypotheses for trend functions.For each generated time series, we applied the subsampling method to produce an empirical p-value for the SNU test statistic at each block length in a range of consecutive block sizes.The averages of these empirical p-values over 50 time series, per block length b, are depicted in Figure <ref>(null data generation) and Figure <ref> (alternative data generation).We observe that when the null hypothesis holds, the empirical p-value curves from subsampling vary away from zero in Figure <ref>, and also tend to exhibit local regions of stable fluctuation, where the latter are reasonably in line with a block length of b=[4N^1/2] as a blockformsuggested fromsimulated Tables <ref> and <ref>in the manuscript when f̂(.) follows a parametric form. On the other hand, when the alternative is true, the empirical p-value curves from subsampling tend to be centered around zero in Figure <ref>; further, block lengths b=[4N^1/2] again appear reasonable. These patternsin subsampling p-values over block sizealign with behavior observed in our data application concerning the CKC hypothesis and support the use of block choice by minimal volatility.§ C STEPS FOR THE DE-BIASED MHM TEST STATISTICEmpirically, the MHM test statistic τ_N^-1 T_N appears to exhibit some finite-sample bias or drift which can potentially be addressed with subsampling.The idea is to formulate a de-biased version τ_N^-1 T_N -B̂_N of the test statistic, for some bias correction B̂_N, and approximate its sampling distribution with a counterpart estimated by subsampling. 
* For a given block size b and associated subsample test statistics {τ_b^-1 T_i,b}_i=1^M, a bias-adjusted version of subsampling is readily given by the subsampling distribution from {τ_b^-1 T_i,b - B̂_b}_i=1^M, where B̂_b ≡∑_i=1^M τ_b^-1 T_i,b/M is the sample average of the subsampling statistics. This can be used to approximate the distribution of τ_N^-1 T_N -B̂_N, where we may also use subsampling to approximate the correction B̂_N.

* While the form of the bias B_N is unknown, log B_N is linear in log N whenever the bias behaves as B_N ∼ C N^a on the original scale, for constants C>0, a ∈ℝ.

* We take two block lengths, say b_1 = ⌊ 3 N^1/2⌋ and b_2 = ⌊ 4 N^1/2⌋, compute the subsample test statistic averages B̂_b_1 and B̂_b_2 as bias estimates for these sample sizes, and determine a straight line through the points (log b_i, log B̂_b_i), i=1,2.

* A prediction, say P̂_N, from this line at log N estimates log B̂_N, and we may define B̂_N = exp[P̂_N].

* P-values can then be approximated by the proportion of de-biased subsampling statistics {τ_b^-1 T_i,b - B̂_b}_i=1^M, for a given block size b, which exceed τ_N^-1 T_N - B̂_N.

§ D PRACTICAL GUIDANCE ON THE SELECTION OF BANDWIDTH

We select a suitable bandwidth based on the cross-validation method described by <cit.>, <cit.>, and <cit.>. Each observation is temporarily removed from the data set, the regression is fitted using the remaining observations, and the deleted observation is predicted from that regression. The average of the squared deviations between the deleted observations and their predictions is then used as a selection criterion for the bandwidth. Running the procedure over a grid of bandwidth values allows us to select the bandwidth value that results in the minimum mean squared prediction error. In a standard kernel regression framework without outliers in the data, this cross-validation procedure has been shown to produce bandwidths that are asymptotically consistent; see for instance <cit.> and <cit.>. In particular, we use least squares leave-one-out cross-validation as our objective function, defined as follows:

LCV(h):= 1/N∑_k=1^N (y_k- f̂_-k(x_k))^2,

where f̂_-k(x) is the estimate of f(x) computed from the data omitting the k-th observation (x_k,y_k); i.e.,

f̂_-k(x)= {∑_j ≠ k K[(x_j-x)/h] }^-1∑_j ≠ k K[(x_j-x)/h] y_j.

An optimal bandwidth according to this criterion can be chosen as ĥ_LCV := argmin_h>0 LCV(h). For our CKC data set and for a range of bandwidths, the values of the LCV objective function from (<ref>) are calculated and plotted on the left hand side of Figure <ref> for both countries. The optimal bandwidths and their related LCV values are given in Table <ref>.

§ E SIMULATED EXAMPLE WITHOUT AN AR STRUCTURE FOR U_K'S

To illustrate the potential difficulty in using the P test statistic that was identified in the carbon curve application for France, we simulated a data set where the error process u_k is white noise but the regression function is highly nonlinear,

g(x_k; α,β,γ)=γ + (α x_k - β) exp(- α x_k) for k=1,...,80,

where (α,β,γ)'=(0.55,0.60,5). The regressor x_k is a range of constants (see Figure <ref>). The error term u_k simply follows a Gaussian white noise process with μ=0 and σ=0.025. See the available code “” for the details of the generated data set. The data points are displayed in Figure <ref>, where we overlay the nonlinear regression function from (<ref>) and a quadratic regression function from (<ref>).
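As a rough illustration of the two preceding sections, the following minimal Python sketch generates data from the nonlinear model (<ref>) and then selects a bandwidth by the leave-one-out criterion (<ref>). The grid for x_k, the Gaussian kernel K, and the bandwidth grid are our own illustrative assumptions, since the text does not fully specify them.

import numpy as np

rng = np.random.default_rng(0)

# Section E model: g(x; alpha, beta, gamma) = gamma + (alpha*x - beta)*exp(-alpha*x),
# with Gaussian white noise. The grid for x_k is an assumption (the text only
# calls it "a range of constants").
alpha, beta, gamma, sigma, N = 0.55, 0.60, 5.0, 0.025, 80
x = np.linspace(0.5, 8.0, N)
y = gamma + (alpha * x - beta) * np.exp(-alpha * x) + sigma * rng.standard_normal(N)

def lcv(h, x, y):
    # Leave-one-out criterion LCV(h) of (<ref>) for the kernel estimator,
    # here with a Gaussian kernel K (the text leaves K unspecified).
    out = np.empty(len(x))
    for k in range(len(x)):
        w = np.exp(-0.5 * ((np.delete(x, k) - x[k]) / h) ** 2)
        out[k] = y[k] - np.dot(w, np.delete(y, k)) / w.sum()
    return np.mean(out ** 2)

grid = np.linspace(0.05, 2.0, 40)                 # illustrative bandwidth grid
h_lcv = grid[int(np.argmin([lcv(h, x, y) for h in grid]))]
print("LCV-optimal bandwidth:", round(h_lcv, 3))

Fitting the quadratic null model to data simulated in this way then proceeds as in the discussion below.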
Now, suppose that we did not know that the data follow the regression function from equation (<ref>), and we fit a quadratic regression function of the form f(x_k)= θ_0+θ_1 x_k+θ_2 x_k^2. Investigating the residuals from fitting model (<ref>), these seemingly follow an AR(2) process with p=2. We apply the P test statistic, and the results are given in Table <ref>. Despite the fact that the quadratic form is quite a poor description of the actual expectation function in the data, it is not rejected using the P test statistic for any value of ℒ. We conclude that the P test statistic is highly dependent on the assumed process structure for the u-terms, and one cannot necessarily depend on residuals from the hypothesized model to be tested in determining whether the error process has an AR structure (and its order). In this example, the subsampling p-values for the same null hypothesis of a quadratic regression curve, based on the SNU test statistic, are zero for all block sizes.

§ BIBLIOGRAPHY

Härdle, W., Hall, P. and Marron, J. (1992) Regression smoothing parameters that are not far from their optimum. Journal of the American Statistical Association 87, 227–233.

Härdle, W. and Marron, J. (1983) Optimal bandwidth selection in nonparametric function estimation. Institute of Statistics Mimeo Series No. 1530. Univ. North Carolina, Chapel Hill.

Mosaferi, S. and Kaiser, M. S. (2022) Nonparametric cointegrating regression functions with endogeneity and semi-long memory. arXiv:2111.00972.

Park, B. U. and Marron, J. S. (1990) Comparison of data-driven bandwidth selectors. Journal of the American Statistical Association 85, 66–72.

Rice, J. (1984) Bandwidth choice for nonparametric regression. The Annals of Statistics 12, 1215–1230.

Wang, Q. and Phillips, P. C. B. (2016) Nonparametric cointegrating regression with endogeneity and long memory. Econometric Theory 32, 359–401.

Wong, W. H. (1983) On the consistency of cross-validation in kernel nonparametric regression. The Annals of Statistics 11, 1136–1141.
§ INTRODUCTION

The role of conformal field theories (CFTs) as fixed points of the RG flow is a basic building block of our understanding of quantum field theories. A large and interesting class of interacting CFTs can be obtained from string/M-theory, either through dimensional compactification or geometric engineering. Many of these theories lack a conventional Lagrangian description, therefore the study of their dynamics should involve an analysis of their stringy construction, aided by field theoretical constraints, for example those originating from symmetries. When, on top of the conformal symmetry, a CFT also enjoys supersymmetry, then field theoretical results can strongly constrain the theory. As an example, it is widely believed that in 4d with 16+16 conserved supercharges all the CFTs are classified by 𝒩=4 SYM theories with arbitrary gauge group, possibly with the addition of topological terms in the action. With a lower amount of supersymmetry such a complete classification is not available, although promising progress has been made in the last two decades for SCFTs with 𝒩≥2 SUSY <cit.>.

An important ingredient that renders a classification program feasible is the existence of a Coulomb Branch (CB), an r-complex-dimensional space of vacua, with r the rank of the SCFT, where at generic points the low energy dynamics is that of an 𝒩≥2 U(1)^r gauge theory in which all the charged states are massive. At non-generic singular points of the CB some charged states become massless and give rise to non-trivial dynamics in the IR. The analysis of the interesting physics of 𝒩≥2 SCFTs then boils down to what happens at the singularities of the CB. The theory arising on a codimension-n singularity is a theory with rank n<r, making it possible to study 𝒩≥2 SCFTs “by induction” on the rank: the properties of a rank-r theory are related to the properties of the theories supported on its singularities, which have rank less than r. This procedure has been referred to as CB stratification <cit.> and in the following we will borrow this terminology. In this paper we apply this general idea to the charge lattice Γ of 𝒩≥2 theories, which is the lattice of electromagnetic charges under U(1)^r of the massive states in a generic vacuum of the CB, together with the associated Dirac pairing J. The study of charge lattices intertwines with the study of the generalized symmetries of the SCFT <cit.>, as seen for example from the analysis of 1-form symmetries in Lagrangian theories <cit.>. In 𝒩=2 theories the charge lattice is well defined even in the absence of a Lagrangian description and this relationship has been exploited for example in <cit.>.
In particular, in 𝒩=2 theories there is a close connection between the charge lattice Γ and the 1-form symmetry group G^(1). Indeed the objects charged under 1-form symmetries, which are Wilson-'t Hooft lines <cit.> in a generic CB vacuum, are constrained by the spectrum of charged local states through the Dirac quantization condition. More precisely, given a basis of the charge lattice, the Dirac pairing matrix J in this basis is related to the order of G^(1) as follows <cit.>:

| G^(1)| = | Pf(J) | .

2-form symmetries, on the other hand, are related to discrete gauging <cit.>. In 4d, gauging a discrete 0-form symmetry generates a magnetic 2-form symmetry, whose topological operators are the Wilson lines of the discrete gauge group. On top of that, if the 0-form symmetry acts non-trivially on the CB, this gauging generates singularities at the CB points that are fixed under the action of the symmetry. There are no states becoming massless at these new singularities, therefore there are no BPS states with vanishing central charge there. It then follows that 2-form symmetries can be used as indicators of discrete gauging, and the same idea applies also to CB singularities with no massless charged states. Summarizing, in 𝒩≥2 SCFTs the structure of higher form symmetries gives information about the local dynamics, namely about the charge lattice and the BPS condition of charged states, and vice versa.

Motivated by this discussion we study a class of 4d SCFTs with 𝒩=3 SUSY denoted as exceptional S-fold theories, first constructed in <cit.> (see also <cit.> for general properties of 𝒩=3 SCFTs). We analyze the structure of higher-form symmetries from the analysis of the charge lattice and the possible discrete gaugings for each case. It is challenging to study CFT data from the stringy definition of such theories; a notable exception is the CB geometry computed in <cit.>. Our analysis heavily relies on the approach and results of <cit.>. More technically, 𝒩=3 S-fold SCFTs are labelled by an integer k=3,4,6, called the order of the S-fold, and a simply-laced Lie algebra 𝔤, and we will denote them as “S_k-folds of type 𝔤”. The (A_r,k) theories, denoted here as “regular” S-folds, engineer the theories of <cit.>, and are equivalent to Type IIB setups. The 1-form symmetries of regular S-fold theories were computed in <cit.>, and the 2-form symmetries and possible discrete gaugings were analyzed in <cit.>. We will use regular S-fold SCFTs as a testing ground for developing our prescriptions. The (D_r,k) and (E_r,k) theories are called “exceptional” S-fold theories. In this paper we only discuss the 𝔤=E_6,7,8 case, but our procedures can be straightforwardly applied to the (D_r,k) theories as well. We thus consider a total of nine theories, the (E_6,7,8,k), which are so far candidates for being interacting 𝒩=3 SCFTs with rank varying from 2 to 4. We show that all but one of these theories, the (E_8,4), are discrete gaugings of free theories. This is essentially due to the fact that they do not admit a charge lattice consistent with the CB stratification, the Dirac quantization condition, and the constraints on the central charges of <cit.>. In this paper we will denote a charge lattice as consistent if it satisfies each of these three conditions, while we will denote as inconsistent the lattices that fail to satisfy one or more of them.
Exceptional S-folds can be considered sporadic even when compared to the “regular” (A_r,k) theories; nevertheless we find that the obstruction to having a consistent charge lattice that they exhibit generalizes nicely to all 𝒩=2 SCFTs with characteristic dimension[The characteristic dimension ϰ, introduced in <cit.>, is an invariant of 𝒩=2 SCFTs that can be computed from the scaling dimensions of the CB invariants.] ϰ different from 1 or 2 <cit.>.

The rest of this paper is organized as follows: in Section <ref> we outline our procedure by generalizing the known case of the (A_r,k) theories. We review the computation of the CB of S-folds by <cit.> and we give formulae for computing the charge lattice. We analyze the relation between possible discrete gaugings and the presence of BPS charged states, and we apply it to show that some strings that cross “regular” S-folds do not produce BPS states. In Section <ref> we study the main theories of interest of this paper, the exceptional (E_6,7,8,k). We compute their charge lattices and show that most of them are inconsistent. In Section <ref> we apply the ideas developed in the rest of the paper to the general case of 𝒩=2 SCFTs with characteristic dimension ϰ∉{1,2} <cit.>. We obtain a consistency condition on the CB stratification of these theories and an upper bound for the order of the 1-form symmetry group in the rank-2 case. For the sake of readability, before delving into the main body of this paper, we find it useful to sketch our procedure and present our results.

§.§ General strategy

Our approach to the study of charge lattices of S-fold theories boils down to two main ideas. The first idea is based on the results of <cit.>, where it was shown that the moduli space of an S-fold SCFT of type 𝔤 can be obtained as a slice of the moduli space of a “parent” 𝒩=4 SYM theory with gauge algebra 𝔤. In analogy with this result, we compute the charge lattice of an S-fold theory as a sublattice of the charge lattice of the parent 𝒩=4 SYM. The second ingredient is the consistency of the structures of 𝒩=2 SCFTs along the CB stratification. In our case this boils down to the fact that the charges that become massless at some codimension-n singularity on the CB must generate the charge lattice of some rank-n theory supported on the singularity. If this is not the case, then the singularity cannot support an interacting theory and must be empty. The singularity itself then supports a discrete gauging of a free theory, and the SCFT can be considered as a discrete gauging of a parent theory.

This procedure is particularly powerful when considering the codimension-1 singularities of a maximally strongly coupled theory. If the singularity is non-empty then it must support a rank-1 𝒩=2 SCFT. We have a full classification of these theories <cit.> and their charge lattices are characterized by the absolute value of the Pfaffian of the Dirac pairing J <cit.>:

| Pf(J) | = 2: (discrete gauging of) 𝒩=2^* SU(2) SYM,
| Pf(J) | = 1: otherwise.

For any other value of | Pf(J) | on a codimension-1 singularity, the corresponding states cannot be BPS and the singularity must be empty.
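Since |Pf(J)| recurs throughout our analysis, we record a minimal Python sketch (our own illustration) that computes it by the standard expansion along the first row; the 𝒩=4 𝔰𝔲(2) example below, in the basis given by the W-boson and a 't Hooft monopole with pairing ⟨W,M⟩=2, shows the relation to the order of G^(1).

import numpy as np

def pfaffian(A):
    # Pfaffian of an antisymmetric matrix, by expansion along the first row.
    A = np.array(A, dtype=object)
    n = A.shape[0]
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0
    total = 0
    for j in range(1, n):
        minor = np.delete(np.delete(A, (0, j), axis=0), (0, j), axis=1)
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(minor)
    return total

J_su2 = [[0, 2], [-2, 0]]        # basis (W-boson, 't Hooft monopole) of N=4 su(2) SYM
print(abs(pfaffian(J_su2)))      # -> 2, the order of the 1-form symmetry group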
Given an exceptional S-fold theory our analysis roughly follows these steps:

* Determine the CB geometry as in <cit.>.
* Compute the charge lattice and Dirac pairing from the parent theory.
* Compute the sublattice of charges that should become massless on all codimension-1 singularities.
* If these lattices are compatible with one of the options in (<ref>) then there is an SCFT supported there, otherwise the singularity is empty.
* Impose the constraints on the central charges from <cit.>.

At the end of these steps, if there are some singularities which support an interacting SCFT we claim that the S-fold SCFT is non-trivial. Instead, if all the singularities are empty, we claim that the S-fold theory is a discrete gauging of a free theory.

§.§ Results

We find that all but one of the exceptional S-fold SCFTs of type E_r are discrete gaugings of free theories, the exception being the S-fold of type E_8 with k=4, also called the G_31 theory. In particular, the S-fold theories of type E_6 and E_8 with k=3,6 do not have consistent charge lattices. In these theories, on any codimension-1 singularity the charges that should become massless span a rank-2 sublattice where the Dirac pairing is such that | Pf(J^(1)) | = 3, and comparing with eq. (<ref>) there is no rank-1 SCFT that can be supported there. Therefore all codimension-1 singularities are empty, and the S-fold theories themselves are discrete gaugings of free theories. All S-fold theories of type E_7 and the S-fold theory of type E_6 with k=4 admit a consistent charge lattice, but this lattice is incompatible with the constraints coming from the central charge formulae of <cit.>. The only S-fold theory of type E_6,7,8 that has a well defined charge lattice compatible with the formulae of <cit.> is the G_31 theory. We claim that this is an interacting SCFT. The theory has rank equal to 4 and the CB and central charges are those computed in <cit.>, see Table <ref>. Furthermore we find that the 1-form symmetry group of this theory is trivial.

By applying similar ideas to 𝒩=2 SCFTs with characteristic dimension ϰ∉{1,2} we find the following: The order of the 1-form symmetry group G^(1) of an 𝒩=2 rank-2 SCFT with ϰ∉{1,2} satisfies 1≤|G^(1)| ≤ 4. The upper bound can only be saturated by stacks of lower rank theories. An 𝒩=2 SCFT with ϰ∉{1,2} and rank r≥2 that is not a stack of lower rank theories must have at least one codimension-1 singularity that supports (a discrete gauging of) 𝒩=2^* SU(2) SYM. We show that our results regarding the (E_6,7,8,k), namely that most of them are discrete gaugings of free theories, boil down to the constraint of Claim <ref>, possibly in conjunction with the constraints on the central charges given by the formulae of <cit.>.

§ S-FOLDS SCFTS

In this Section we outline our procedure for analyzing various properties of 𝒩=3 S-fold theories. We do so by studying explicit examples of S-fold SCFTs engineered in Type IIB <cit.>, which we denote as “regular” S-folds, providing various prescriptions that will apply to the general case of exceptional S-folds <cit.> engineered in M-theory, discussed in Section <ref>. All the results contained in this Section have already appeared in the literature and most of the techniques are well known, with the exception of the discussion given in Subsection <ref>. There we leverage the stratification of the CB and the classification of 𝒩=2 rank-1 SCFTs to constrain the BPS spectrum and ultimately understand the 2-form symmetries of these theories.
This argument, to the best of our knowledge, is original.

We begin with a quick review of S-folds in Type IIB. S-fold SCFTs can be engineered in Type IIB string theory as the low energy theory on the worldvolume of a stack of D3-branes that probe an S-fold singularity (see <cit.> for details). This singularity is obtained by a ℤ_k quotient of Type IIB which involves both a spacetime orbifold and an S-duality action, which becomes a symmetry for particular values of the axiodilaton. The spacetime orbifold is ℝ^3,1× (ℂ^3/ℤ_k), where the D3-branes are extended along ℝ^3,1, and the S-duality action is given by an element ρ_k ∈ SL(2,ℤ) of the S-duality group of Type IIB. One can think about this non-geometric spacetime as follows: looping around a cycle in ℂ^3/ℤ_k, every object in string theory is acted upon by the S-duality transformation ρ_k. This Type IIB non-geometric singularity can alternatively be described by a geometric singularity in F-theory, where the F-theory torus has a ρ_k monodromy around the ℂ^3/ℤ_k singularity. The F-theory picture will not be relevant in this paper, and we refer the reader to the original literature on this topic <cit.>. The S-duality element ρ_k must generate a ℤ_k subgroup of SL(2,ℤ), which is only possible for k=1,2,3,4,6. Furthermore, the axiodilaton τ must be fixed by ρ_k in order for the subgroup generated by ρ_k to be a symmetry of the theory. The S-duality elements ρ_k with the corresponding values of τ are listed in Table <ref>.

In the absence of the S-fold the stack of D3-branes preserves sixteen supercharges in 4d: Q_i, i=1,2,3,4[Each Q is a four dimensional Dirac spinor with four components.]. The S-duality transformation ρ_k acts on the supercharges as <cit.>:

ρ_k: Q_i → e^π i/k Q_i, i=1,2,3,4.

On the other hand the spacetime orbifold corresponds to an R-symmetry transformation r_k ∈ SU(4)_R and can be chosen such that its action on the supercharges is:

r_k: { Q_i → e^-π i/k Q_i, i=1,2,3; Q_4 → e^3 π i/k Q_4 }.

Under the combined action ρ_k · r_k the supercharges Q_1,2,3 are preserved while Q_4 transforms as:

ρ_k · r_k: Q_4 → e^2 π i/k Q_4.

For k=1,2 this supercharge is preserved as well and the resulting 4d theory has 𝒩=4 supersymmetry. The case k=1 corresponds to no projection at all, and engineers 𝔰𝔲(N) 𝒩=4 SYM, while the case k=2 corresponds to the orientifold plane O3 and engineers 𝒩=4 SYM with gauge algebra 𝔡_n, 𝔟_n or 𝔠_n depending on the discrete torsion discussed briefly below. The cases of interest in this paper are k=3,4,6, where generally only twelve supercharges are preserved and the low energy theory on the stack of D3-branes is an 𝒩=3 SCFT.

It was shown in <cit.> that generally one has the possibility to introduce a discrete torsion in the S-fold background, that is a non-trivial flux for the Type IIB 2-form fields B_2 and C_2 around a non-contractible 2-cycle of the holographic background AdS_5 × (S^5/ℤ_k). The 2-form fields transform in the two dimensional representation of the S-duality group SL(2,ℤ), therefore their flux on this 2-cycle is classified by the second twisted cohomology groups H_2(AdS_5 × (S^5/ℤ_k); (ℤ⊕ℤ)_ρ_k) = H_2(S^5/ℤ_k; (ℤ⊕ℤ)_ρ_k). These groups were computed in <cit.>, see also <cit.> for a review. One finds:

H_2(S^5/ℤ_k; (ℤ⊕ℤ)_ρ_k) = ℤ_2 ×ℤ_2 for k=2, ℤ_3 for k=3, ℤ_2 for k=4, 1 for k=6,

where 1 is the trivial group. Therefore there are four choices of discrete torsion for the orientifold (k=2), corresponding to the O3^-, O3^+, Õ3^- and Õ3^+ orientifold planes respectively. For k=3 there are three choices, one with trivial discrete torsion and two with non-trivial discrete torsion.
The two choices with non-zero discrete torsion are related by charge conjugation, so there are only two physically different choices: trivial or non-trivial discrete torsion. Finally, for k=4 one can have trivial or non-trivial discrete torsion, and for k=6 the only choice is to have trivial discrete torsion. In summary, the S-fold setup of <cit.>, briefly reviewed above, gives rise to an infinite family of 𝒩=3 SCFTs parametrized by the number of D3-branes, the order of the quotient k and, when allowed, the choice of discrete torsion. There are five variants of 𝒩=3 S-folds, which we denote as S_k,ℓ following the notation of <cit.>. Here ℓ=1 corresponds to the absence of discrete torsion and ℓ=k corresponds to non-trivial discrete torsion. The five variants are therefore S_3,1, S_4,1, S_6,1, S_3,3 and S_4,4. This concludes our brief review of S-folds in Type IIB; in the remainder of this section we will review some properties of the corresponding SCFTs, namely the moduli space, the charge lattice, and the 1-form and 2-form symmetries. For a more in-depth analysis of the string theory setup we refer the reader to the original literature <cit.>.

§.§ Moduli space

The S-fold theories have a moduli space of vacua parametrized by the motion of the N D3-branes on the transverse space ℂ^3/ℤ_k, which is given by (ℂ^3)^N / G(k,1,N) <cit.>, where G(k,1,N) is a crystallographic complex reflection group (CCRG). By choosing an 𝒩=2 subalgebra of the 𝒩=3 superalgebra the R-symmetry group is broken to (SU(2)× U(1))_R, and the moduli space splits into an N-dimensional CB, a 2N-dimensional Higgs branch and a mixed branch, with respect to the choice of subalgebra. Of particular interest in this paper is the CB, where the U(1) R-symmetry is broken and the SU(2) R-symmetry is preserved. In the brane picture the CB can be identified with the space parametrized by the positions z_i of the N D3-branes on a 1-complex-dimensional slice ℂ/ℤ_k of the transverse space. Here z_i is a complex number that parametrizes the position of the i-th D3-brane on this slice. The CB is then ℂ^N / G(k,1,N), where G(k,1,N) is generated by the transformations:

z_i → e^2π i/k z_i, z_i ↔ z_j, i,j= 1,…, N.

The ring of polynomials in the z_i that are invariant under G(k,1,N) is freely generated, meaning that there are no non-trivial relations between the generators. There are N generators, whose degrees are:

Δ = (k, 2k, 3k, …, Nk).

On a generic point of the CB the low energy theory is a U(1)^N gauge theory, while at less generic points, namely the fixed points of some of the transformations (<ref>), some charged states become massless and give rise to non-trivial dynamics in the IR. It can be shown that the S-fold theories are maximally strongly coupled (mSC), which means that whenever a state becomes massless there must be another state, mutually non-local with respect to the first one, that becomes massless as well. We recall that two states are mutually non-local if the Dirac pairing between them is a non-vanishing integer. An interesting quantity to consider in this regard is the characteristic dimension ϰ of an 𝒩=2 SCFT, introduced in <cit.>.
The characteristic dimension is defined by writing the degrees of the invariants Δ_i as:

( Δ_1, …, Δ_N ) = λ (d_1, …, d_N), d_i ∈ℕ, gcd(d_1, …, d_N) = 1.

Then the characteristic dimension is:

ϰ = 1/{λ^-1},

where {x} is defined as the unique real number such that {x} = x mod 1 and 0< {x}≤ 1. The characteristic dimension can only take eight values, ϰ∈{ 1, 6/5, 4/3, 3/2, 2, 3, 4, 6 }; furthermore when ϰ∉{1,2} the corresponding SCFT satisfies stringent constraints <cit.>. This is relevant to this paper because all the 𝒩=3 S-fold and exceptional S-fold SCFTs have ϰ∉{1,2}. Here we quote some of the results that will be relevant in this paper. If an 𝒩=2 SCFT has ϰ∉{1,2} then:

* On a generic point of the CB the U(1)_R symmetry is broken to ℤ_k with:

k = 3 for ϰ = 3, 3/2; k = 4 for ϰ = 4, 4/3; k = 6 for ϰ = 6, 6/5.

* The SCFT is maximally strongly coupled (mSC). In particular, if a state | ψ⟩ with central charge Z is present in the spectrum then the spectrum includes also the state e^2π i R/k | ψ⟩, which has central charge e^2π i/k Z and the same quantum numbers as | ψ⟩. Here k is defined as in (<ref>). These two states have the same mass and are never mutually local.

* By the statement above, whenever some state with Z not identically vanishing on the CB becomes massless, then other states that are mutually non-local with respect to the first one become massless as well. Therefore, on any point of the CB where some charged state is massless, the IR dynamics is described by an interacting SCFT.

The analysis of the moduli space given so far has relied upon the brane picture for regular S-folds. Such a picture will not be available when we consider the generalization to exceptional S-folds, so an alternative approach is desirable. A possible approach was presented in <cit.>, where the authors studied the moduli spaces of exceptional S-fold theories. Here we briefly review their results; more details can be found in the original paper. The idea is to start with a stack of Nk D3-branes in flat space. The low energy theory is then 𝒩=4 SYM with gauge algebra 𝔰𝔲(Nk) and CB ℂ^Nk/𝒲(𝔰𝔲(Nk)), parametrized by the scalars Φ. Here 𝒲(𝔤) is the Weyl group of the Lie algebra 𝔤. Introducing an S_k-fold imposes the identification:

w ·Φ = 𝒪_k Φ.

Here 𝒪_k is the action of the S-fold on the scalars, which is given by the R-symmetry transformation:

𝒪_k = e^2π i/k.

w is the Weyl element corresponding to the permutation of branes that maps each D3-brane to its first image under the S-fold, and it is equal to the N-th power of the Coxeter element c:

w = c^N, c = s_1 · s_2 ·…· s_Nk-1,

where s_i is the reflection along the i-th simple root. Then (<ref>) becomes:

c^N ·Φ = e^2π i/k Φ.

Notice that (<ref>) only requires knowing the moduli space of the low energy field theory in the absence of the S-fold, which in this case is 𝒩=4 SU(Nk) SYM, and does not rely on a brane picture. This allowed the authors of <cit.> to generalize this procedure to exceptional S-folds, where the “parent” 𝒩=4 theory has gauge algebra 𝔢_n or 𝔡_n. This generalization requires a choice of an element w of the Weyl group; we will see that this choice is unique under a technical but reasonable assumption. We can choose an 𝒩=2 subalgebra of 𝒩=4 SU(Nk) SYM with a corresponding CB parametrized by Φ_C ∈ℂ^Nk/𝒲(𝔰𝔲(Nk)). The Weyl group acts irreducibly on ℂ^Nk, and the CB of the S-fold theory is given by those Φ_C that satisfy:

c^N ·Φ_C = e^2π i/k Φ_C.
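The eigenspace computation implied by (<ref>) is easy to carry out explicitly. The following minimal Python sketch (our own illustration, not taken from <cit.>) builds the simple reflections of 𝔰𝔲(Nk) in the basis of simple roots, forms a Coxeter element, and counts the multiplicity of the eigenvalue e^2πi/k of c^N; it reproduces the expected rank r=N for the regular S-folds.

import numpy as np

def coxeter_element(cartan):
    # Product of the simple reflections s_i in the basis of simple roots; any
    # ordering gives a conjugate Coxeter element with the same eigenvalues.
    n = cartan.shape[0]
    c = np.eye(n)
    for i in range(n):
        s = np.eye(n)
        s[i, :] -= cartan[i, :]          # s_i(alpha_j) = alpha_j - A_ij alpha_i
        c = s @ c
    return c

def sfold_rank(N, k):
    # Multiplicity of the eigenvalue e^{2 pi i/k} of w = c^N for su(Nk).
    n = N * k - 1                        # rank of SU(Nk); Cartan matrix of type A
    cartan = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    w = np.linalg.matrix_power(coxeter_element(cartan), N)
    eigenvalues = np.linalg.eigvals(w)
    return int(np.isclose(eigenvalues, np.exp(2j * np.pi / k)).sum())

for N in (1, 2, 3):
    print([sfold_rank(N, k) for k in (3, 4, 6)])   # -> [N, N, N] in each case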
Mathematically, Φ is a point in a space acted upon by a real reflection group (the Weyl group), and (<ref>) identifies the eigenspace of the element w of the Weyl group (in this case c^N) with eigenvalue e^2π i/k. The action of the Weyl group on this eigenspace is called a reflection subquotient, and has been studied in generality in the mathematical literature; see <cit.> and references therein for a comprehensive review of this topic. Here we report some results on reflection subquotients that are relevant for the study of the moduli spaces of S-fold theories. Proofs and discussions regarding these mathematical results can be found in <cit.>; see Theorem 11.24, Corollary 11.25, Theorem 11.28 and Theorem 11.38 in that reference.

* The rank r of an S_k-fold 𝒩=3 theory is given by the number of degrees of CB invariants of the “parent” 𝒩=4 SYM that are divisible by k.
* The CB of an S_k-fold 𝒩=3 theory is ℂ^r/𝒞, with 𝒞 a complex crystallographic reflection group.
* The degrees of the generators of the S_k-fold CB invariants are the degrees of 𝒞, and are given by the degrees of invariants of the “parent” 𝒩=4 theory that are divisible by k.
* The codimension-1 singularities in the CB of the S_k-fold 𝒩=3 theory are given by the intersection of the codimension-1 singularities of the “parent” 𝒩=4 theory with the 𝒩=3 CB.

In the case of regular S-folds the “parent” theory is SU(Nk) 𝒩=4 SYM, and the degrees of the generators of CB invariants are:

2, 3, …, Nk.

There are N degrees that are divisible by k:

k, 2k, …, Nk,

which correspond to the degrees of the complex crystallographic reflection group G(k,1,N). This is consistent with the brane picture analysis, where the CB was found to be ℂ^N/G(k,1,N). When a brane picture is not available one needs to specify the element w of the Weyl group of the “parent” 𝒩=4 theory that is involved in the S-fold projection (<ref>). Modulo a technical assumption[Here, following <cit.>, we assume that the rank of the 𝒩=3 theory is the highest possible.] such an element is unique up to conjugation and is characterized by having an r-dimensional eigenspace with eigenvalue e^2π i/k. Therefore the analysis of the CB of exceptional S-fold theories boils down to finding an element w of the Weyl group that has an r-dimensional eigenspace with eigenvalue e^2π i/k.

§.§ Charge lattice and 1-form symmetries

The low energy theory at a generic point of the CB is a U(1)^r gauge theory with no massless charged states. There are massive states with electric and magnetic charges under the various U(1) factors. The set of EM charges of these massive states forms a 2r-dimensional lattice Γ called the charge lattice. This lattice is endowed with a Dirac pairing, an antisymmetric bilinear map ⟨·, ·⟩ taking values in the integers:

⟨·, ·⟩: Γ×Γ→ℤ.

It is convenient to choose a basis {γ_i}_i=1,…, 2r of the charge lattice Γ:

Γ = Span_ℤ( {γ_i}_i=1,…, 2r).

In this basis the Dirac pairing is represented by a 2r× 2r antisymmetric matrix J. In an SCFT the Dirac pairing must be non-degenerate, which is equivalent to:

Pf (J) ≠ 0.

There is a close relationship between the charge lattice and the 1-form symmetry of an 𝒩=2 SCFT <cit.>.
At a generic point of the CB any Wilson-'t Hooft line ℓ must have integer Dirac pairing with respect to the massive charged states γ:

⟨ℓ, γ⟩∈ℤ, ∀γ∈Γ.

This stems from the fact that, due to the Aharonov-Bohm effect, moving the line with charge ℓ around the worldline of a particle with charge γ produces a phase e^2π i ⟨ℓ, γ⟩. The phase must be a multiple of 2π in order for the line operator to be well defined, therefore the Dirac pairing between the line and all the local operators must be integer. The set of all charges ℓ that satisfy the consistency condition (<ref>) is called the line lattice Γ^* and is a refinement of the charge lattice, Γ⊂Γ^*. Two lines in Γ^* do not necessarily have an integer Dirac pairing between themselves, and therefore generally they cannot be simultaneously included in the theory. One has to specify a choice of a maximal sublattice of lines Γ_max⊂Γ^* such that:

⟨ℓ_1, ℓ_2 ⟩∈ℤ, ∀ℓ_1, ℓ_2 ∈Γ_max,

and no additional line charge can be added to Γ_max without breaking this consistency condition. Generally there are different choices of Γ_max corresponding to different global structures, which are theories with the same local dynamics that only differ in the spectrum of extended line operators. Once a global structure has been chosen, the 1-form symmetry group G^(1), sometimes referred to as the defect group 𝔻^(1), can be found as:

G^(1) = Γ_max/Γ.

An interesting quantity to study in this regard is the absolute value of the Pfaffian of the Dirac pairing, Pf(J), because it is equal to the order of the 1-form symmetry group:

|Pf(J)| = | G^(1)|.

Indeed one can show that the Dirac pairing J can be put in the following standard form with a change of basis (see Appendix B of <cit.>):

J=([ 0 D; -D 0 ]), D=diag{d_1, …, d_n}, d_i ∈ℕ,

and the invariant factors d_i can be chosen such that d_i | d_i+1. The line lattice in this basis is spanned by Γ^* = Span_ℤ( d_1^-1 e_1, …, d_n^-1 e_n, d_1^-1 m_1, …, d_n^-1 m_n ), where {e_i} and {m_i} are respectively the first n and the last n basis vectors of the charge lattice. For any choice of global structure Γ_max it is true that:

| Γ^*/Γ_max| = | Γ_max/Γ| = ∏_i=1^n d_i = |Pf(J)|.

When the Pfaffian of the Dirac pairing is a prime number p, the 1-form symmetry group must be ℤ_p, which is the only abelian group of order p[Higher form symmetry groups are always abelian <cit.>.]. When the Pfaffian is not a prime number then the 1-form symmetry group can be any of the groups ∏ℤ_p_j with ∏ p_j = |Pf(J)|, and in general it will depend on the choice of global structure Γ_max. As already discussed, the value of |Pf(J)| is an invariant of any 𝒩=2 SCFT that equals the order of the 1-form symmetry group. One can intuitively think of this invariant as a measure of “how spread out” the charge lattice is. Indeed the number of electromagnetic charges that can be added to the charge lattice Γ without breaking the Dirac quantization condition is given by |Pf(J)|-1. In this sense, the charge lattice Γ cannot be arbitrarily dense, because |Pf(J)| is at least 1. In Section <ref> we will see that for maximally strongly coupled SCFTs the charge lattice cannot be arbitrarily spread out either. This idea will be discussed in more generality in Section <ref>.

§.§.§ Charge lattices of S-fold theories

In this Section we review the computation of the charge lattice of regular S-fold SCFTs of <cit.> and we give a field theoretic prescription to generalize the analysis to exceptional S-folds. Consider a stack of N D3-branes probing an S_k-fold without discrete torsion, together with the (k-1)N image D3-branes.
The local states of the SCFT are associated to finite length (p,q)-strings stretched between the D3-branes, plus their images under the S_k-fold. Denote as |(p,q)⟩_i,j a state associated to a (p,q)-string stretched between the i-th and the j-th D3-brane. The first image of this string is a (p',q')-string stretched between the π(i)-th and π(j)-th D3-brane. Here (p',q') are related to (p,q) by the S-duality transformation involved in the S-fold:

(p',q') = ρ_k · (p,q),

and the π(i)-th D3-brane is the first image of the i-th D3-brane. Let us number the D3-branes such that π(i) = i+N, and i∼ i+kN. The following states are invariant under the S-fold action:

|(p,q)⟩_i,j = 1/√(k)∑_t=0^k-1 |(ρ_k)^t · (p,q) ⟩_π^t(i), π^t (j).

The electromagnetic charges of the state |(p,q)⟩_i,j can be written as a 2Nk-dimensional vector[The factor 1/√(k) is consistent with the charge lattices of SO(2N) 𝒩=4 SYM and the fluxless S-fold SCFTs <cit.> for k=2 and k=3,4,6 respectively.]:

Q[|(p,q)⟩_i,j] = 1/√(k)(e_1, …, e_Nk, m_1,… ,m_Nk ),

where e_i and m_i are the electric and magnetic charges under the i-th D3-brane, respectively. The Dirac pairing between two states ϕ and ψ with charges e_i, m_i and e'_i, m'_i is then:

⟨ϕ, ψ⟩ = 1/k∑_i=1^Nk (e_i m'_i - e'_i m_i).

One can show that despite being represented by 2Nk-dimensional vectors, the set of states invariant under the S-fold action (<ref>) only spans a 2N-dimensional lattice, which is the charge lattice of the rank-N S-fold SCFT.

In order to generalize this analysis to the exceptional S-fold case, let us express the various quantities of the S-fold theory, (<ref>), (<ref>) and (<ref>), in terms of field theoretical data of the “parent” 𝒩=4 SU(kN) SYM theory, namely the roots α_i,j of SU(kN) and the Cartan matrix 𝒜_SU(Nk). This can be done as follows. The string state |(p,q)⟩_i,j corresponds to a dyonic state with electric charge p and magnetic charge q with respect to a root α_i,j of SU(kN):

|(p,q)⟩_i,j→ |α_i,j, (p,q)⟩.

The S-fold acts with a matrix ρ_k on the electric and magnetic charges (p,q) and acts as a permutation on the indices i,j. As discussed in the previous Section, the permutation corresponds to the action of the N-th power of the Coxeter element c on the root α_i,j. Suppressing the indices i,j, the S-fold action can be written as:

S_k: [α, (p,q)] →[c^N ·α, ρ_k· (p,q)].

The states invariant under the S-fold (<ref>) can be written in the following form in terms of the states of the “parent” 𝒩=4 theory:

[α, (p,q)] = 1/√(k)∑_t=0^k-1[c^tN·α, (ρ_k)^t · (p,q)].

The charge lattice of the S-fold theory is spanned by these states for all choices of root α∈Δ[SU(Nk)] and for any p,q∈ℤ. The electromagnetic charge of a state |α;(p,q)⟩ is given by:

Q[|α;(p,q)⟩] =1/√(k)∑_t=0^k-1 (w ⊗ρ_k )^t · Q[|α; (p,q)⟩],

where Q[|α; (p,q)⟩] is the electromagnetic charge of the corresponding state of SU(Nk) 𝒩=4 SYM. Finally, the Dirac pairing defined on the charge lattice of the S-fold theory is obtained as a restriction of the Dirac pairing of SU(Nk) 𝒩=4 SYM.
Explicitly, the Dirac pairing between two states of the S-fold theory with charges q_i and q_j is given by:

⟨ q_i, q_j⟩ = q_i · J_SU(Nk)· q_j^T,

where:

J_SU(Nk) = ([ 0 (𝒜_SU(Nk))^T; -𝒜_SU(Nk) 0 ])

is the Dirac pairing of the parent SU(Nk) 𝒩=4 gauge theory (see for example <cit.> for a derivation). Then the charge lattice Γ_N,k of the S_k-fold theory is:

Γ_N,k = {Q[|(p, q)⟩_i, j] | p, q ∈ℤ, i, j=1, …, N k},

and the associated Dirac pairing is given by (<ref>) and (<ref>).

§.§.§ Discrete torsion

Some S-fold backgrounds can admit a non-zero flux for the Type IIB 2-form fields around cycles of the transverse space. This is possible for k=2, giving rise to the orientifolds O3^+, Õ3^- and Õ3^+, and for k=3,4, giving rise to the fluxful S-folds denoted as S_3,3 and S_4,4. When the discrete torsion is non-zero the S-fold is magnetically charged under the corresponding 2-form field, and strings can end on the S-fold itself. Strings stretched between the S-fold and a D3-brane generate additional states in the SCFT with respect to the fluxless case, and the charge lattice is more dense. The states corresponding to strings stretched between the S-fold and a D3-brane cannot be written in the form (<ref>). In order to include them we are led to consider states of the general form:

|α, (p,q)⟩_{p_i,j} = 1/√(k)∑_i=0^k-1∑_j=0^1 p_i,j |c^iN·α ; (ρ_k)^j ·(p, q)⟩,

where the p_i,j are integers such that the states (<ref>) are invariant under the S-fold action (<ref>). The sum over i runs from 0 to k-1 because c^N satisfies its characteristic equation, which is an order k polynomial equation, and similarly the sum over j runs from 0 to 1 because ρ_k satisfies an order 2 polynomial equation. One can show that such states correctly reproduce the strings stretched between a D3-brane and the S-fold itself. We can therefore see that the presence of a non-trivial discrete torsion can be accounted for by considering the charge lattice spanned by the more general states (<ref>) rather than the states (<ref>). In the context of exceptional S-folds the states (<ref>) will only play a minor role, therefore we will not discuss them further.

§.§ Discrete gauging and 2-form symmetries

The S-fold theories obtained from the Type IIB setup can sometimes have a non-trivial 2-form symmetry and can be seen as a discrete gauging of a “parent” theory <cit.>. Gauging a discrete 0-form symmetry of the parent theory gives rise to a magnetic 2-form symmetry, and vice versa. One can go from the parent theory to the daughter theory by gauging the relevant discrete symmetry. This operation is therefore reversible, and one may choose to study either of the two theories without losing information. When this is the case it is convenient to study the parent theory itself; for example, the Shapere-Tachikawa formula for the central charges is believed to hold only in the absence of 2-form symmetries. In this Section we show how to detect 2-form symmetries that arise from the discrete gauging of a 0-form symmetry that acts on the CB. We also give a consistency constraint for the BPS spectrum of 𝒩=2 SCFTs based on the classification of rank-1 theories. We elaborate on this analysis in the cases of the O3^- plane and the flux-less S_3-fold. In Section <ref> similar considerations will lead us to claim that some exceptional S-fold theories are discrete gaugings of free theories.

§.§.§ Strings across the flux-less orientifold

As a familiar example, consider the O3^- plane, which corresponds to the S-fold with k=2 and trivial discrete torsion.
The low energy theory on a stack of N D3-branes on top of the O3^- plane is 𝒩=4 SYM with gauge algebra 𝔰𝔬(2N), and it is believed to be a ℤ_2 discrete gauging of the 𝒩=4 theory with gauge group Spin(2N). Indeed the space parametrized by the transverse motion of the D3-branes is ℂ^N/ ( 𝒲[𝔰𝔬(2N)]⋊ℤ_2), which is compatible with the moduli space of 𝒩=4 Spin(2N) with an additional ℤ_2 identification given by gauging charge conjugation. In this example the “parent” theory has trivial 2-form symmetry and has a ℤ_2 0-form symmetry, namely charge conjugation. The theory on the stack of D3-branes is obtained by gauging this ℤ_2 0-form symmetry, and therefore has a ℤ_2 2-form symmetry. The 2-form symmetry can be detected by looking at the moduli space ℂ^N/ ( 𝒲[𝔰𝔬(2N)]⋊ℤ_2). In particular, the singularities in moduli space given by the additional ℤ_2 identification correspond to configurations where one D3-brane is on top of the orientifold. There are no massless BPS charged states associated to these singularities, because the ground states of the strings connecting the D3-brane and its image, which have zero length, are projected out by the orientifold. This is consistent with the fact that the ℤ_2 identification on moduli space is due to a discrete gauging of a ℤ_2 0-form symmetry. In general a discrete gauging of a 0-form symmetry that acts non-trivially on the CB produces singularities where no BPS state becomes massless. Suppose that an S-fold theory 𝒯 has a CB:

𝒞 = ℂ^N/( 𝒢⋊𝒢' ),

and charge lattice Γ. Suppose that on the fixed loci of 𝒢 some state γ∈Γ has zero central charge 𝒵, and therefore becomes massless, while on the fixed points of 𝒢' all the states in the charge lattice are massive. Then 𝒯 has a non-trivial 2-form symmetry G^(2) = 𝒢' and can be regarded as a 𝒢' discrete gauging of a “parent” theory 𝒯' with CB:

𝒞' = ℂ^N/𝒢,

and with the same charge lattice Γ. The “parent” theory 𝒯' has a 0-form symmetry which contains 𝒢' as a discrete subgroup. Therefore we are able to detect the presence of a non-trivial discrete 2-form symmetry G^(2) from the knowledge of the CB and the charge lattice, if G^(2) arises from a discrete gauging of a 0-form symmetry that acts non-trivially on the CB.

In the example of the O3^- plane given above, the absence of charged massless states on the fixed points of the ℤ_2 identification can be explained from string theoretical considerations, but one would like a field theoretical argument as well. Consider a point p in ℂ^N that is fixed under ℤ_2 and is generic otherwise. The prescription given in Section <ref> to compute the charge lattice Γ predicts massless states on this singularity corresponding to (p,q)-strings stretched between a D3-brane and its image. Denote as Γ^(1) the sublattice of Γ spanned by these states. Γ^(1) should be the charge lattice of a rank-1 QFT 𝒯^(1) whose CB is given by the slice transverse to the singular locus in a neighborhood of p, namely ℂ/ℤ_2. A basis of Γ^(1) is given by the states associated to an F1 and a D1 string, which we denote as ψ and ϕ respectively. The Dirac pairing between these states is Pf(J^(1)) = ⟨ψ, ϕ⟩ = 4, therefore they are not mutually local and 𝒯^(1) must be an interacting CFT. We have denoted as J^(1) the matrix representing the Dirac pairing of the rank-1 theory in this basis. Furthermore, by the argument given in <cit.> (see Section <ref>) 𝒯^(1) should have a non-trivial 1-form symmetry group of order 4. A full classification of rank-1 𝒩=2 SCFTs is available <cit.>, and a theory such as 𝒯^(1) does not exist.
In particular, the maximum order for the 1-form symmetry group of a rank-1 SCFT is 2 <cit.>, saturated for example by 𝒩=4 SU(2) SYM. We conclude that the states in Γ^(1) cannot be BPS, therefore on the fixed locus of the ℤ_2 identification there are no massless states, consistently with the string theory prediction.

§.§.§ Strings across the flux-less S_3-fold

We have shown that the analysis of the charges of the states becoming massless on a singularity of the CB imposes non-trivial constraints on the BPS spectrum of a theory. This is especially interesting to study in non-Lagrangian theories, where discrete gaugings and 2-form symmetries are not readily apparent. As an example, we now show that in the flux-less S-folds with k=3 the strings stretched between one D3-brane and its image are not BPS. Consequently, these theories are discrete gaugings of other 𝒩=3 “parent” theories, as originally discussed in <cit.>. A similar analysis in Section <ref> will show that some exceptional S-fold theories, for example the G_5 theory discussed in Section <ref>, are discrete gaugings of free theories.

The CB of the rank-r regular S_k-fold SCFT has two types of codimension-1 singularities: singularities where two D3-branes coincide and singularities where one D3-brane is on top of the S-fold. When two D3-branes coincide the associated rank-1 theory is always SU(2) 𝒩=4 SYM, which is a consistent rank-1 SCFT, therefore we will focus on the other singularities. When one D3-brane is on top of the S-fold the corresponding rank-1 theory is the rank-1 version of the S-fold theory under consideration. In the case of flux-less S-folds the rank-1 theories are believed to be discrete gaugings of U(1) 𝒩=4 Maxwell theory, with no massless states charged under the U(1) <cit.>.

Let us show that this must indeed be the case for the k=3 S-fold. Consider the codimension-1 singularity that arises when the i-th D3-brane is on top of the S_3-fold. In the absence of discrete torsion the charge sublattice Γ^(1), associated to the rank-1 theory supported on this singularity, is spanned by the (p,q)-strings stretched between the i-th D3-brane and its image. A possible basis for this lattice is given by the states associated to an F1 and a D1 string; let us denote them as |f1⟩ and |d1⟩ respectively. The Dirac pairing matrix in this basis is:

J^(1) = ( [ 0 ⟨ f1, d1 ⟩; -⟨ f1, d1 ⟩ 0 ]).

The order of the 1-form symmetry group is given by the Pfaffian of the Dirac pairing:

|G^(1)| = | Pf(J^(1)) | = ⟨ f1, d1 ⟩ = 3 for k=3, 2 for k=4, 1 for k=6,

where we used (<ref>) and (<ref>) to compute the charges of |f1⟩ = |(1,0)⟩_i,i and |d1⟩ = |(0,1)⟩_i,i, and we used (<ref>) to compute their Dirac pairing. The CB of these rank-1 theories is ℂ/ℤ_k. For k=3 the putative theory on this singularity is inconsistent because, as discussed above, the maximum order for the 1-form symmetry group of a rank-1 𝒩=2 SCFT is 2. Therefore the states associated to strings stretched between a D3-brane and its images cannot be BPS. The CB can thus be written as:

𝒞 = ℂ^r/G(3,1,r) = ℂ^r/(G(3,3,r) ⋊ℤ_3),

where there are massless charged states on the fixed points of G(3,3,r) and there are no massless charged states on the fixed points of ℤ_3. This CB is consistent with the CB of a ℤ_3 discrete gauging of a “parent” theory with CB 𝒞'= ℂ^r/G(3,3,r) and 2-form symmetry group ℤ_3, reproducing the M-theory results of <cit.>.
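Both the orientifold value ⟨ψ,ϕ⟩=4 of the previous subsection and the values in (<ref>) follow from the charge formula (<ref>) and the pairing (<ref>) with N=1, and can be reproduced with the short Python sketch below. The specific order-k matrices ρ_k ∈ SL(2,ℤ) used here are our own assumption (not necessarily the choice of Table <ref>), but any conjugate choice gives the same pairing.

import numpy as np

# Assumed order-k elements of SL(2,Z); a conjugate choice yields the same pairing.
RHO = {2: [[-1, 0], [0, -1]],
       3: [[0, -1], [1, -1]],
       4: [[0, -1], [1, 0]],
       6: [[1, -1], [1, 0]]}

def f1_d1_pairing(k):
    # Dirac pairing of the S-fold invariant F1 and D1 states for one D3-brane
    # (N=1) on top of an S_k-fold, built from the k image strings as in (<ref>).
    charges = {}
    for name, pq0 in (("f1", (1, 0)), ("d1", (0, 1))):
        e, m = np.zeros(k, dtype=int), np.zeros(k, dtype=int)
        pq = np.array(pq0)
        for t in range(k):                  # (rho_k)^t (p,q)-string from brane t to t+1
            i, j = t, (t + 1) % k
            e[i] += pq[0]; e[j] -= pq[0]
            m[i] += pq[1]; m[j] -= pq[1]
            pq = np.array(RHO[k]) @ pq
        charges[name] = (e, m)
    (e, m), (e2, m2) = charges["f1"], charges["d1"]
    s = np.dot(e, m2) - np.dot(e2, m)
    assert s % k == 0                       # the pairing is an integer
    return s // k

print([f1_d1_pairing(k) for k in (2, 3, 4, 6)])   # -> [4, 3, 2, 1]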
Furthermore, we have shown that in a flux-less S-fold background with k=3 the states associated to strings stretched between a D3-brane and its image are not BPS, because otherwise the CB stratification would be inconsistent. This further strengthens the analysis of the BPS spectrum of the rank-2 S-fold theories performed in <cit.>. One could perform a similar analysis in the case of flux-full S-folds. The resulting rank-1 theories on the codimension-1 singularities are all consistent in this case, therefore the CB of these theories is ℂ^r/G(k,1,r) and there are massless charged states on all singularities.

§.§.§ Strings across the flux-less S_4-fold

As a final example of our techniques before delving into the topic of exceptional S-folds, we consider the flux-less S-fold with k=4. Similarly to the case of the S_3-fold, we will show that the strings stretched between a D3-brane and its images do not produce BPS states. In addition to the analysis of the charge lattice we will also consider constraints on the central charges of the theories supported on the codimension-1 singularities. These two computations give incompatible results unless the strings across the S_4-fold are not BPS, and in turn the S_4-fold SCFTs must be discrete gaugings, reproducing the M-theory results of <cit.>. A similar phenomenon occurs in exceptional S-fold theories; for example the G_8 theory discussed in Section <ref> turns out to be a discrete gauging of a free theory.

Consider the rank-r flux-less S_4-fold SCFT. The CB of the rank-1 SCFT on the singularity that arises when a D3-brane coincides with the S_4-fold is parametrized by the motion of the D3-brane on a 1-complex-dimensional slice of the transverse space, and is therefore ℂ/ℤ_4. The order of the 1-form symmetry group of the rank-1 SCFT on this singularity was computed in (<ref>):

|G^(1)| = | Pf(J^(1)) | = 2.

The only candidate for the rank-1 theory supported on this singularity is the 𝒩=3 preserving ℤ_4 gauging of SU(2) 𝒩=4 SYM.

We may now consider the central charges of the S_4-fold SCFTs. Assuming that the rank-1 theories on all the singularities are not empty and the S_4-fold SCFTs are not discrete gaugings, the central charges can be computed with the Shapere-Tachikawa formula <cit.>:

2(2 a-c) = ∑_j=1^r (Δ_j - 1/2) = 2 r^2 + 3/2 r,

where {Δ_i} = {4,8,…,4r} are the degrees of invariants of G(4,1,r). The central charges may also be computed using the formulae of <cit.> that relate data of the rank-1 theories supported on the codimension-1 singularities to the central charge of the rank-r theory:

12 c = 2 r + h_ECB + ∑_i ∈ℐΔ_i^sing b_i,

where b_i is a quantity associated with the rank-1 theory supported on the i-th codimension-1 singularity as follows:

b_i := (12 c_i - h_i - 2)/Δ_i.

In our theory the extended CB dimension is h_ECB = r and the set ℐ consists of the two codimension-1 singularities. The scaling dimensions Δ_i^sing of these singularities can be found for example in <cit.>, Appendix B. It turns out that the singularity associated to the collision of two D3-branes has scaling dimension Δ_1^sing = 4r(r-1) and parameter b_1 = b_SU(2) = 3. The other singularity has scaling dimension Δ_2^sing = 4r, therefore (<ref>) reduces to:

12c = 3r + 12 r (r-1) + 4r b_2.

Comparing with (<ref>) with a=c, which gives 12c = 12r^2 + 9r, and solving for b_2 one finds:

b_2 = 9/2,

which is compatible with having a rank-1 flux-full S_4-fold SCFT on the singularity corresponding to a D3-brane on top of the S_4-fold.
This is incompatible with the charge lattice computed above: indeed the only possible SCFT on this singularity compatible with the charge lattice is a discrete gauging of 𝒩=4 SU(2) SYM, which would require b_2=3. We conclude that the states associated to strings stretched between a D3-brane and its images do not produce BPS states. Then the rank-1 theory arising when one D3-brane approaches the flux-less S_4-fold is not an interacting SCFT, but rather a discrete gauging of the free 𝒩=4 Maxwell theory. The S_4-fold SCFTs can then be thought of as a ℤ_4 discrete gauging of a parent theory with moduli space ℂ^3r/G(4,4,r), reproducing the results of <cit.>.

§ EXCEPTIONAL S-FOLDS

In this Section we study exceptional S-fold 𝒩=3 SCFTs <cit.>. We apply the techniques spelled out in the previous Sections to compute the charge lattice of these theories and the order of the 1-form symmetry group, and we determine when such SCFTs can be built as discrete gaugings of a parent theory. The exceptional S-fold setup of <cit.>, briefly reviewed below, engineers a set of SCFTs labelled by an algebra of type D_n or E_r, r=6,7,8, and the order of the S-fold k=3,4,6. The analysis can be generalized by including a suitable outer automorphism <cit.>. For simplicity in this paper we focus on the exceptional algebras E_6,7,8 with k=3,4,6 and without outer automorphism twists. Extending our methods to the full set of exceptional S-fold SCFTs should not present any major technical or conceptual difficulty, but we leave this task to future work.

We are thus interested in 9 theories labelled by k=3,4,6 and by the exceptional algebra E_r. We find compelling arguments that suggest that many of these theories do not admit a well defined charge lattice and are discrete gaugings of free theories. In particular we are left with only 2 theories that, given our current understanding, are candidates for being proper interacting 𝒩=3 SCFTs. The first is usually denoted as G_8, from the associated exceptional complex crystallographic reflection group, and can be engineered as the k=4 S-fold with either the E_6 or E_7 algebra. The second theory is G_31 and can be engineered as the k=4 S-fold with the E_8 algebra. Our results are summarized in Table <ref>.

In <cit.> the authors presented an alternative M-theory construction of regular S-fold theories, dual to the Type IIB setup described above. This allowed them to generalize the S-fold construction of <cit.> to a wider class of theories parametrized by an ADE algebra and the order of the S-fold projection k=3,4,6. In this classification the regular S-folds are associated to the A_n algebras, while the theories associated to the D_n and E_n algebras are new 𝒩=3 SCFTs that, up to now, have no known geometric construction in Type IIB. Let us briefly review the results of <cit.>. The S-fold projection involves an element of the S-duality group as well as an element of the R-symmetry SO(6) of 𝒩=4 SU(N) SYM, which is a rotation transverse to the D3-branes in the Type IIB setup. By compactifying two directions T^2_E transverse to the D3-brane stack we generally break the R-symmetry to SO(4)×ℤ_2, although for particular values of the complex structure of the torus the R-symmetry enhances to SO(4)×ℤ_4 for τ_E=i or SO(4)×ℤ_6 for τ_E=e^2π i /3. These subgroups of the original SO(6) R-symmetry are enough to perform the S-fold projection. Now we may T-dualize along one compact transverse direction, giving a Type IIA setup, and then uplift to M-theory.
By carefully tracking the action of the various symmetries along these manipulations, it was shown that the regular S-fold setup is dual to M-theory on ℝ^1,3× ( S_M^1 × S_T^1 × S_E^1 ×ℂ^2)/ℤ_k, with a stack of N M5-branes along ℝ^1,3× S_M^1 × S_T^1. The radii of the various circles are related by:

R_M = R, R_T = Im(τ) R, R_E = 1/(Im(τ) R^2),

and the ℤ_k quotient acts as a rotation on ℂ^2 and on the torus S_M^1 × S_T^1, as well as a non-geometric quotient on S_M^1 × S_T^1 × S_E^1, fixing the ρ parameter of this torus to be of order 1:

ρ = ∫_T^3 C + i √(det G).

We now have an S-fold construction that involves a stack of N M5-branes. Famously, on flat spacetime, this stack engineers the (2,0) 6d theory of type A_N-1 once the center of mass motion is decoupled. It is natural to ask whether it is possible to generalize this setup to the other (2,0) 6d theories, namely the type D and type E theories. In <cit.> it was shown that such a construction is possible and involves a non-geometric setup, meaning that there is no duality frame where the system is described by string theory in a geometric background. By contrast, the regular S-fold setup is dual to F-theory on a geometric terminal singularity.

In this paper we will study exceptional S-fold theories as a particular projection of the corresponding 𝒩=4 SYM theories obtained by compactifying the (2,0) theory on a torus. Indeed both the R-symmetry and the S-duality involved in the S-fold quotient are present in the 4d theory, allowing us to understand some properties of the S-fold theories directly in 4d. There are some subtleties in this approach given by the fact that quantities of interest, for example the moduli space and the charge lattice, are only defined up to Weyl transformations of the gauge algebra, as explained in <cit.>. We expand upon this approach in the rest of the paper, while we refer the reader to the original literature <cit.> for the M-theory construction of exceptional S-fold theories.

§.§ S-folds from the (2,0) E_6 theory

The six-dimensional (2,0) theory of type E_6 on T^2×ℝ^4 engineers 𝒩=4 SYM with gauge algebra E_6 in the 4d limit. When this compactification is complemented with the S-fold projection spelled out above, one obtains the exceptional S-fold theories of interest. The strategy we adopt, introduced in full generality in Section <ref>, is to compute the effect of the S-fold projection directly on the four-dimensional charge lattice. This approach allows us to compute the charge lattice of the 𝒩=3 S-fold theories from the charge lattice of 𝒩=4 SYM with gauge algebra E_6. The analysis parallels the one in <cit.>, where the moduli space of exceptional S-fold theories was computed as a subquotient of the moduli space of the 𝒩=4 SYM parent theory.

The charge lattice of 𝒩=4 E_6 SYM is spanned by the W-bosons, which are valued in the root lattice Δ of E_6, and by the magnetic monopoles, which are valued in the coroot lattice Δ^∨. Choose a basis for the root and coroot lattices given by a set of simple roots and the corresponding coroots respectively. In this basis the metric on the root lattice is given by the Cartan matrix 𝒜_E_6 of E_6, see Figure <ref>, and the roots are represented by integer vectors with length √(2). The simple roots are represented by vectors with one entry equal to 1 and the other entries equal to 0.
A charge Q̃ in the charge lattice Γ = Δ⊗Δ^∨ is represented by an integer twelve-dimensional vector, where the first six entries are electric charges and the last six entries are magnetic charges:

Q̃ = ( e_1, e_2, …, e_6, m_1, m_2, …, m_6 )

The Dirac pairing between two charges Q̃ and P̃ is given by Q̃ · J_E_6 · P̃^T, where the Dirac pairing J_E_6 is given by <cit.>:

J_E_6 = ( [ 0 (𝒜_E_6)^T ; -𝒜_E_6 0 ] )

In the following it will be more convenient to write the charges in a basis where we alternate electric and magnetic charges, namely:

Q = ( e_1, m_1, e_2, m_2, …, e_6, m_6 )

We will distinguish the charges in the two bases by using tildes for vectors in the first basis (<ref>) and symbols without tildes in the second basis (<ref>).

The Weyl group of E_6 is generated by the reflections along the simple roots; we denote the reflection along the i-th simple root as s_i. A useful element of the Weyl group is the Coxeter element c_E_6, defined as:

c_E_6 = s_1 · s_2 · s_3 · s_4 · s_5 · s_6

which has order equal to the Coxeter number h_E_6 = 12:

(c_E_6)^12 = Id

The eigenvalues of the Coxeter element are λ_i = e^{2π i (m_i-1)/h_E_6}, where m_i, i=1,…,6 are the degrees of the invariants of E_6, tabulated in Table <ref>. In the basis given by the simple roots the Coxeter element is represented by the matrix:

c_E_6 = ( [ 0 0 1 0 -1 -1 ; 1 0 1 0 -1 -1 ; 0 1 1 0 -1 -1 ; 0 0 1 0 -1 0 ; 0 0 0 1 -1 0 ; 0 0 1 0 0 -1 ] )

Consider now the exceptional S-fold setup that engineers an 𝒩=3 SCFT in four dimensions. In Section <ref> we studied the S-fold projection along the lines of <cit.> and discussed how the rank, the Coulomb branch (CB), the charge lattice and the associated Dirac pairing can be computed directly from the 𝒩=4 parent theory, in this case 𝒩=4 E_6 SYM. Here we summarize the main results for ease of readability. The CB of the 𝒩=3 S-fold theory is given by the solutions to:

w · ϕ_𝒞 = e^{2π i/k} ϕ_𝒞

where ϕ_𝒞 are elements of the CB of E_6 𝒩=4 SYM. The element w ∈ 𝒲[E_6] encodes the projection induced by the S-fold on the CB and on the charge lattice of the E_6 𝒩=4 theory. The rank r of the 𝒩=3 theory is given by the complex dimension of the eigenspace associated to the eigenvalue e^{2π i/k} of w, and we choose w such that the 𝒩=3 theory has maximum rank, following <cit.>. The degrees of the basic CB invariants are then given by the degrees of invariants of E_6 that are divisible by k, and the 𝒩=3 CB itself is ℂ^r/G with G the complex reflection group with the correct degrees, see Table <ref>.

The charged states |α;(p,q)⟩ of the 𝒩=3 theory are given by:

|α;(p,q)⟩ = 1/√(k) ∑_{t=0}^{k-1} |w^t · α ; (ρ_k)^t · (p,q)⟩

where |β;(p,q)⟩ is the (p,q)-dyonic state of E_6 𝒩=4 SYM associated to the root β of E_6. The electromagnetic charge of a state |α;(p,q)⟩ is given by:

Q[|α;(p,q)⟩] = 1/√(k) ∑_{t=0}^{k-1} (w ⊗ ρ_k)^t · Q[|α;(p,q)⟩]

where Q[|α;(p,q)⟩] on the right-hand side is the electromagnetic charge of the corresponding state of E_6 𝒩=4 SYM, expressed as in (<ref>). As an example, the W-boson associated to the first root α_1 of E_6 has charge Q[|α_1;(1,0)⟩] = (1,0;0,0;…), while the magnetic monopole associated to the first coroot has charge Q[|α_1;(0,1)⟩] = (0,1;0,0;…).

One can consider more general states that are invariant under the S-fold action, see for example (<ref>). In the case of regular S-folds some of these states appear in the presence of discrete torsion and correspond to strings stretched between the S-fold and a D3-brane.
We checked that in the case of exceptional S-folds the states (<ref>) can never be included consistently, because they break the Dirac quantization condition; therefore in the remainder of this paper we will only mention them briefly. Finally, the Dirac pairing defined on the charge lattice of the 𝒩=3 theory is obtained as a restriction of the Dirac pairing of E_6 𝒩=4 SYM (<ref>). Explicitly, the Dirac pairing between two states of the S-fold theories with charges q_i and q_j is given by:

⟨ q_i, q_j ⟩ = q_i · J_E_6 · q_j^T

Notice that it is not guaranteed that ⟨ q_i, q_j ⟩ gives an integer result, and one should check case by case that the Dirac pairing between any two charges of the S-fold theories is integer. In the following we do not consider any charge lattice where the Dirac pairing can take fractional values.

§.§ The k=6 S-fold: G_5

The first exceptional S-fold theory that we consider is obtained as a ℤ_6 S-fold compactification of the (2,0) six-dimensional E_6 theory to four dimensions. The compactification preserves 𝒩=3 supersymmetry in four dimensions and involves an S-duality transformation ρ_6 ∈ SL(2,ℤ) and an R-symmetry twist. The CB is given by the solutions to (<ref>) with k=6, namely:

w · ϕ_𝒞 = e^{π i/3} ϕ_𝒞

There are 2 invariants of E_6 whose degrees are divisible by 6, namely the invariants with degrees 6 and 12, and therefore we expect that the 𝒩=3 theory has rank r=2. We choose an element w ∈ 𝒲[E_6] which has a two-dimensional eigenspace associated to the eigenvalue e^{π i/3}:

w = (c_E_6)^2

which in the basis given by the simple roots is represented by the matrix:

w = ( [ 0 1 0 -1 0 0 ; 0 1 1 -1 -1 -1 ; 1 1 1 -1 -1 -1 ; 0 1 1 -1 0 -1 ; 0 0 1 -1 0 0 ; 0 1 0 0 -1 0 ] )

Then the CB of the 𝒩=3 theory, given by the solutions to (<ref>), is ℂ^2/G_5 where G_5 is the complex crystallographic reflection group (CCRG) with degrees 6 and 12. Similarly the charge lattice of the 𝒩=3 theory can be obtained from the charge lattice of 𝒩=4 E_6 SYM. Given a state of 𝒩=4 E_6 SYM associated to the root α with electric and magnetic charges (p,q), one can build a state |α,(p,q)⟩ that is invariant under the S-fold action:

|α,(p,q)⟩ = 1/√(6) ∑_{t=0}^{5} |w^t · α ; (ρ_6)^t · (p,q)⟩

Consider the six states |α_i,(1,0)⟩ obtained with this projection from the W-bosons associated to the simple roots α_i, i=1,…,6 of E_6. The electromagnetic charges q_i of these states can be computed using (<ref>):

q_1 = 1/√(6) ( 2,-1, 1,-2, 2,-4, 0,-3, 1,-2, 0, 0 )
q_2 = 1/√(6) ( 1,-2, 3,-3, 2,-4, 1,-2,-1,-1, 2,-4 )
q_3 = 1/√(6) ( 1, 1, 2,-1, 4,-2, 3,-3, 2,-1, 0, 0 )
q_4 = 1/√(6) ( -2, 1,-1, 2,-2, 4, 0, 3,-1, 2, 0, 0 )
q_5 = 1/√(6) ( -1, 2,-3, 3,-2, 4,-1, 2, 1, 1,-2, 4 )
q_6 = 1/√(6) ( 0, 0,-2, 4,-2, 4,-2, 4, 0, 0, 2, 2 )

Notice that q_1 = -q_4 and q_2 = -q_5, therefore these charges span a four-dimensional lattice Γ:

Γ = Span_ℤ { q_1, q_2, q_3, q_6 }

The charges of states obtained from W-bosons associated to other roots of E_6 are included in Γ, because the other roots are integer linear combinations of the simple roots and (<ref>) is linear in the charges. One can also check that the charges of the states |α_i,(0,1)⟩ obtained from monopoles of E_6 are included in Γ as well; therefore by linearity Γ includes the charges of all the states (<ref>).

One may also consider the more general states (<ref>). We checked explicitly that including some or all of these states either leaves Γ unchanged or produces fractional Dirac pairings between the states, which is inconsistent. Then (<ref>) is the candidate for the charge lattice Γ of the 𝒩=3 G_5 exceptional S-fold theory.
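Since the charges above are given explicitly, the linear-algebra statements can be checked mechanically. The following minimal numpy sketch copies the vectors from the list above, stripped of the overall 1/√6 normalisation (which is irrelevant for linear relations), and verifies that q_4 = -q_1, q_5 = -q_2 and that the remaining four charges span a four-dimensional lattice:

```python
import numpy as np

# Projected W-boson charges of Eq.(<ref>), without the 1/sqrt(6) prefactor.
q1 = np.array([ 2,-1, 1,-2, 2,-4, 0,-3, 1,-2, 0, 0])
q2 = np.array([ 1,-2, 3,-3, 2,-4, 1,-2,-1,-1, 2,-4])
q3 = np.array([ 1, 1, 2,-1, 4,-2, 3,-3, 2,-1, 0, 0])
q4 = np.array([-2, 1,-1, 2,-2, 4, 0, 3,-1, 2, 0, 0])
q5 = np.array([-1, 2,-3, 3,-2, 4,-1, 2, 1, 1,-2, 4])
q6 = np.array([ 0, 0,-2, 4,-2, 4,-2, 4, 0, 0, 2, 2])

assert (q1 + q4 == 0).all() and (q2 + q5 == 0).all()       # q4 = -q1, q5 = -q2
print(np.linalg.matrix_rank(np.array([q1, q2, q3, q6])))   # -> 4
```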
In the remainder of this section we will show that Γ is actually incompatible with a consistent CB stratification, and we will argue that the low energy field theory is given by a discrete gauging of free U(1)^2 𝒩=4 gauge theory.

Having computed a candidate Γ for the charge lattice of the G_5 theory, we now study the Dirac pairing defined on this lattice and the sublattices of states that become massless on some CB singularity. The Dirac pairing between two states with charges q_i and q_j is given by:

⟨ q_i, q_j ⟩ = q_i · J_E_6 · q_j^T

where J_E_6 is the Dirac pairing of the 𝒩=4 theory (<ref>). In the basis of Γ given by q_1, q_2, q_3 and q_6 the Dirac pairing J_G_5 of the 𝒩=3 theory is represented by the matrix:

J_G_5 = ( [ 0 -1 3 -4 ; 1 0 -1 4 ; -3 1 0 -2 ; 4 -4 2 0 ] )

If Γ is the charge lattice of the G_5 theory, then the order of the 1-form symmetry group is given by the absolute value of the Pfaffian of J_G_5:

|G^(1)_G_5| = |Pf[J_G_5]| = 6

Let us consider the states becoming massless on some codimension-1 singularity on the CB. We can parametrize the CB of 𝒩=4 E_6 SYM with six complex scalars ϕ_i, i=1,…,6, with identifications given by the Weyl group of E_6. The CB 𝒞_G_5 of the 𝒩=3 G_5 theory is given by the eigenspace of w with eigenvalue e^{π i/3} and can be parametrized as follows, as an embedding in 𝒞_E_6:

𝒞_G_5 = { v_3 ϕ_3 + v_4 ϕ_4 , ϕ_3, ϕ_4 ∈ ℂ } ∩ 𝒞_E_6
v_3 = ( 1, e^{iπ/3}, 1, 0, -e^{2π i/3}, √3 e^{iπ/6} )
v_4 = ( -1, e^{-iπ/3}, 0, 1, e^{2π i/3}, 2e^{4π i/3} )

so that a point of 𝒞_G_5 reads ( ϕ_3 - ϕ_4, e^{iπ/3} ϕ_3 + e^{-iπ/3} ϕ_4, ϕ_3, ϕ_4, e^{2π i/3}(ϕ_4 - ϕ_3), √3 e^{iπ/6} ϕ_3 + 2e^{4π i/3} ϕ_4 ).

The codimension-1 singularities of 𝒞_G_5 correspond to fixed points under the reflections of G_5 acting on this slice. As discussed in Section <ref>, they can be obtained as the intersections of the codimension-1 singularities of E_6 𝒩=4 SYM with the slice (<ref>). As an example, consider the singularity ℋ_s_1^E_6 of 𝒞_E_6 corresponding to the fixed locus under s_1, the reflection along the first simple root of E_6, which is the 5-dimensional hyperplane:

ℋ_s_1^E_6 = { ( ϕ_1, 2ϕ_1, ϕ_3, ϕ_4, ϕ_5, ϕ_6 ), ϕ_i ∈ ℂ }

The intersection of ℋ_s_1^E_6 with the slice 𝒞_G_5 gives a codimension-1 singularity ℋ_s_1^G_5 of the CB of the G_5 theory:

ℋ_s_1^G_5 = ℋ_s_1^E_6 ∩ 𝒞_G_5 = ( 1, 2, 1/2(5-i√3), 1/2(3-i√3), -(-1)^{2/3}, 1/2(3-i√3) ) ϕ_3

The states that can become massless on ℋ_s_1^G_5 are those whose central charge Z vanishes identically on ℋ_s_1^G_5. At a generic point ϕ of the CB of E_6 𝒩=4 SYM the central charge Z of a state with charge q is given by:

Z[q] = ∑_{i,j=1}^{6} ϕ_i (𝒜_E_6)_{ij} (e_j + τ m_j)

The central charges Z[q_1] and Z[q_3] of |α_1,(1,0)⟩ and |α_3,(1,0)⟩ identically vanish on the singularity ℋ_s_1^G_5, therefore the corresponding BPS states become massless on this singularity. One can also check that the sublattice Γ^{ℋ_s_1^G_5} of charges of states that become massless on the singularity ℋ_s_1^G_5 is generated by q_1 and q_3. The lattice Γ^{ℋ_s_1^G_5} should correspond to the charge lattice of the rank-1 CFT supported on the singularity ℋ_s_1^G_5. The Dirac pairing restricted to the sublattice Γ^{ℋ_s_1^G_5}, which we denote as J^{ℋ_s_1^G_5}, has Pfaffian given by:

|Pf[J^{ℋ_s_1^G_5}]| = |⟨ q_1, q_3 ⟩| = 3

This should be equal to the order of the 1-form symmetry group of the rank-1 theory supported on the singularity ℋ_s_1^G_5. Then the theory on this singularity would be a rank-1 𝒩≥2 SCFT with a 1-form symmetry group of order 3. All the rank-1 theories with 𝒩=2 or higher supersymmetry have been classified, and such a theory does not exist.
In particular, the maximum order of the 1-form symmetry group for a rank-1 𝒩=2 SCFT is 2. We conclude that the theory living on this singularity of the CB is not a CFT, but rather a discrete gauging of free U(1) 𝒩=4 Maxwell theory, which is the only other possibility [Remember that the exceptional S-fold theories are maximally strongly coupled, therefore the theories living on the singularities of the moduli space cannot be IR-free theories.]. In particular this implies that there are no states becoming massless on the singularity, therefore the states with charges lying on the sublattice Γ^{ℋ_s_1^G_5} are not BPS.

As another example, consider the singularity corresponding to the reflection s_6 along the sixth root of E_6. The locus of the singularity ℋ_s_6^G_5 can be parametrized as:

ℋ_s_6^G_5 = ( 1, 1/6(9-i√3), (2-2i/√3), (1-2i/√3), -(-1)^{2/3}, (1-i/√3) ) ϕ_3

and the sublattice of charges becoming massless on this singularity is spanned by q_6 and (q_2+q_3-q_1). The Dirac pairing restricted to this sublattice has Pfaffian equal to:

|Pf[J^{ℋ_s_6^G_5}]| = |⟨ q_6, q_2+q_3-q_1 ⟩| = 6

Then the rank-1 CFT on this singularity should have a 1-form symmetry group of order 6. As was the case for the previous singularity, such a CFT does not exist, and the theory on this singularity must be a discrete gauging of free U(1) 𝒩=4 Maxwell theory. We conclude that there are no states becoming massless on this singularity.

One can perform similar computations on all the singularities of the CB of the G_5 𝒩=3 theory. It turns out that all the codimension-1 singularities are equivalent, up to G_5 transformations, either to ℋ_s_1^G_5 or to ℋ_s_6^G_5. It follows that the rank-1 theories supported on every codimension-1 singularity of the CB 𝒞_G_5 are discrete gaugings of free U(1) 𝒩=4 Maxwell theory. Then there are no charged states becoming massless on any codimension-1 singularity, and the G_5 theory itself must be a discrete gauging of a free theory, namely free U(1)^2 𝒩=4 gauge theory. Indeed, if any charged state with charge q became massless at the origin of the CB, which is a codimension-2 singularity, then it would satisfy the BPS bound and be massless whenever its central charge vanishes, namely on the codimension-1 hypersurface identified by Z[q]=0. As we just discussed, there are no charged states that become massless on any codimension-1 singularity, therefore there are no massless charged states at any point of the CB, including the origin. In Section <ref> we give additional evidence for this claim and show that it is in fact impossible to define a consistent charge lattice on a CB with geometry ℂ^2/G_5.

§.§ The k=4 S-fold: G_8

In this Section we consider the exceptional S-fold SCFT obtained as a k=4 S-fold of the E_6 (2,0) six-dimensional theory, called the G_8 SCFT. We find that the charge lattice is not consistent with the stratification proposed in <cit.>. In more detail, our analysis suggests that the theory supported on codimension-1 singularities in the CB is the 𝒩=3 preserving ℤ_4 gauging of SU(2) 𝒩=4 SYM, while the constraints from the central charge formulae are compatible with this theory being the rank-1 S_4,4-fold SCFT. Therefore we claim that the G_8 theory is a discrete gauging of free U(1)^2 𝒩=4 gauge theory.

The CB of the G_8 theory is given by the solutions of (<ref>) with w an element of the Weyl group of E_6:

w = (c_E_6)^3

which satisfies w^4 = Id and has a two-dimensional eigenspace with eigenvalue e^{π i/2}.
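These statements about w can be verified numerically from the explicit Coxeter matrix (<ref>); the sketch below assumes that matrix is transcribed correctly and checks the order of w and the dimension of its i-eigenspace:

```python
import numpy as np

# Coxeter element of E6 in the simple-root basis, as quoted in Eq.(<ref>).
c = np.array([[0, 0, 1, 0, -1, -1],
              [1, 0, 1, 0, -1, -1],
              [0, 1, 1, 0, -1, -1],
              [0, 0, 1, 0, -1,  0],
              [0, 0, 0, 1, -1,  0],
              [0, 0, 1, 0,  0, -1]])
assert (np.linalg.matrix_power(c, 12) == np.eye(6, dtype=int)).all()  # order 12

w = np.linalg.matrix_power(c, 3)                 # the k = 4 projection element
assert (np.linalg.matrix_power(w, 4) == np.eye(6, dtype=int)).all()   # w^4 = Id
print(sum(np.isclose(np.linalg.eigvals(w), 1j)))  # -> 2: rank r = 2
```

The same few lines with w = c^2 reproduce the two-dimensional e^{iπ/3} eigenspace used for the G_5 theory above.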
The 𝒩=3 theory then has rank r=2 and the degrees of invariants on the CB are given by the degrees of E_6 that are divisible by 4, namely Δ_i = 8,12. The CB is given by ℂ^2/G_8 where G_8 is the exceptional complex reflection group with the correct degrees of invariants. The states that are invariant under the S-fold action can be computed using (<ref>), (<ref>). In particular the states obtained from the W-bosons corresponding to the simple roots of E_6 have charges:

q_1 = 1/√(2) ( 1,-1, 0,-2, 0,-2,-1,-1, 0, 0, 0,-2 )
q_2 = 1/√(2) ( 0, 0, 1,-1, 0,-2, 0,-2,-1,-1, 0, 0 )
q_3 = 1/√(2) ( 1,-1, 1,-1, 2,-2, 1,-1, 1,-1, 0,-2 )
q_4 = 1/√(2) ( -1, 1, 0, 2, 0, 2, 1, 1, 0, 0, 0, 2 )
q_5 = 1/√(2) ( 0, 0,-1, 1, 0, 2, 0, 2, 1, 1, 0, 0 )
q_6 = 1/√(2) ( 0, 2, 0, 2, 0, 4, 0, 2, 0, 2, 2, 2 )

Notice that q_4 = -q_1 and q_5 = -q_2, therefore these charges span a four-dimensional lattice. By computing the charges of the states obtained from the magnetic monopoles of the E_6 𝒩=4 theory and by a linearity argument, one shows that the candidate for the charge lattice Γ of the G_8 theory is:

Γ = Span_ℤ { q_1, q_2, q_3, q_6 }

The Dirac pairing in the basis { q_1, q_2, q_3, q_6 } is represented by the matrix J_G_8:

J_G_8 = ( [ 0 1 -1 2 ; -1 0 1 -2 ; 1 -1 0 2 ; -2 2 -2 0 ] )

and the order of the 1-form symmetry group is given by:

|G_G_8^(1)| = |Pf(J_G_8)| = 2

Next we can study the sublattices of charges of states becoming massless at codimension-1 singularities. All the codimension-1 singularities are related by G_8 transformations, and the slices transverse to these singularities are locally ℂ/ℤ_4. Furthermore, through computations similar to the ones spelled out in the previous section, one finds that the charge lattice of the rank-1 theory supported on these singularities is generated by two charges Q_1 and Q_2 with |⟨ Q_1, Q_2 ⟩| = 2. Then the rank-1 theory supported on the codimension-1 singularities is an 𝒩≥2 SCFT with CB ℂ/ℤ_4 and a non-trivial ℤ_2 1-form symmetry. The only candidate is the 𝒩=3 preserving ℤ_4 discrete gauging of 𝒩=4 SU(2) SYM <cit.>. This is in contradiction with the analysis of the central charge of the G_8 theory performed in <cit.>, where the theory supported on the codimension-1 singularities was found to be the rank-1 S_4,4-fold SCFT, also denoted as 𝒮_∅,4^(1). Let us briefly review this analysis.

Assuming that the G_8 theory is not a discrete gauging, the central charges a = c can be computed with the Shapere-Tachikawa formula <cit.>:

2(2a-c) = ∑_{j=1}^{r} Δ_j - r/2

where Δ_j are the degrees of the fundamental invariants on the CB. In the case of the G_8 theory we have {Δ_1, Δ_2} = {8,12}. On the other hand, the formulae of <cit.> allow us to relate the central charges of the G_8 theory with the data of the rank-1 theories supported on the codimension-1 singularities:

12c = 2r + h_ECB + ∑_{i ∈ ℐ} Δ_i^sing b_i

where b_i is a quantity associated with the rank-1 theory supported on the codimension-1 singularities as follows:

b_i := (12 c_i - h_i - 2)/Δ_i

In our theory we have h_ECB = r = 2, and the set ℐ of strata consists of only one singularity with scaling dimension Δ^sing = l.c.m.(8,12) = 24 and parameter b. Then, remembering that a = c for any 𝒩=3 theory, one may solve for b and find:

b = 9/2

which is compatible with the rank-1 fluxful S_4-fold SCFT. In contrast, if the theory supported on the codimension-1 singularities were a discrete gauging of SU(2) 𝒩=4 SYM, we would have b_SU(2) = 3.
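The small amount of arithmetic behind b = 9/2 can be packaged in a few lines; the only inputs are the CB degrees {8,12} and the relation h_ECB = r used in the text:

```python
from fractions import Fraction
from math import lcm

degrees = [8, 12]                    # CB invariants of G_8
r = len(degrees)                     # rank 2
h_ecb = r                            # h_ECB = r, as stated in the text

c = Fraction(sum(degrees), 2) - Fraction(r, 4)   # 2c = sum(Delta) - r/2 with a = c
b = (12 * c - 2 * r - h_ecb) / lcm(*degrees)     # invert 12c = 2r + h_ECB + Delta*b
print(b)   # -> 9/2, to be compared with b = 3 for gauged SU(2) N=4 SYM
```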
We found that the charge lattice (<ref>) is incompatible with the analysis of the central charges performed via the stratification of the CB [Another possibility is that the Shapere-Tachikawa formula does not hold for the G_8 theory. In that case the G_8 theory could be an interacting SCFT with 12c=78. We do not consider this possibility further in this paper and trust the Shapere-Tachikawa formula for any theory that is not a discrete gauging.]. Therefore we claim that charged states cannot become massless on the singularities of the CB. Similarly to the case of the G_5 theory, studied in Section <ref>, we thus conclude that the G_8 theory is not an interacting SCFT but rather a discrete gauging of free U(1)^2 𝒩=4 gauge theory. In Section <ref> we will give additional evidence for this claim by showing that any well-defined charge lattice on the CB ℂ^2/G_8 is only compatible with having (a discrete gauging of) SU(2) 𝒩=4 SYM supported on the codimension-1 singularities.

§.§ The k=3 S-fold: G_25

In this section we study the theory obtained with a k=3 exceptional S-fold from the E_6 (2,0) six-dimensional theory, denoted as the G_25 theory. By arguments similar to the ones spelled out in the previous cases, we find that this theory is a discrete gauging of U(1)^3 𝒩=4 gauge theory. In particular the charge lattice is incompatible with any choice of rank-1 SCFTs on the codimension-1 singularities of the CB.

The CB and charge lattice can be found respectively with (<ref>) and (<ref>) with:

w = (c_E_6)^4

which satisfies w^3 = Id and has a three-dimensional eigenspace with eigenvalue e^{2π i/3}. The 𝒩=3 theory then has rank r=3 and the degrees of invariants on the CB are given by the degrees of E_6 that are divisible by 3, namely Δ_i = 6,9,12. The CB is given by ℂ^3/G_25 where G_25 is the exceptional complex reflection group with the correct degrees of invariants.

The lattices of electromagnetic charges associated to the rank-1 theories supported on the codimension-1 singularities can be computed with the techniques spelled out in the previous Sections. The result is that these lattices are generated by two charges Q_1 and Q_2 with ⟨ Q_1, Q_2 ⟩ = 3, indicating that the rank-1 theory on these singularities should have a 1-form symmetry group of order 3. This is not possible, and we conclude that the theory on the codimension-1 singularities is a discrete gauging of 𝒩=4 Maxwell theory. Therefore the G_25 theory itself must be a discrete gauging of free U(1)^3 𝒩=4 gauge theory, because charged states cannot become massless anywhere on the CB. In Section <ref> we give additional evidence for this claim and show that it is impossible to define a consistent charge lattice on a CB ℂ^3/G_25.

§.§ S-folds from the (2,0) E_7 theory

In this section we consider the exceptional S-fold theories obtained from the (2,0) six-dimensional theory of type E_7. All the techniques that we use were spelled out in detail in Section <ref> and were applied to the E_6 case in Section <ref>. Therefore in this section we will only provide the information that defines the S-fold projection, namely the element w ∈ 𝒲[E_7], and the final results. The main result is that all the exceptional S-fold SCFTs obtained from the E_7 theories are discrete gaugings of free U(1)^r 𝒩=4 gauge theory, where r is the rank of the theory, see Table <ref>.

We work in a basis of the algebra E_7 given by simple roots α_i such that the Cartan matrix is the one in Figure <ref>.
The reflections along the simple roots are denoted as s_i and the corresponding Coxeter element of E_7 is:

c_E_7 = ∏_{i=1}^{7} s_i = ( [ 0 0 0 1 0 -1 -1 ; 1 0 0 1 0 -1 -1 ; 0 1 0 1 0 -1 -1 ; 0 0 1 1 0 -1 -1 ; 0 0 0 1 0 -1 0 ; 0 0 0 0 1 -1 0 ; 0 0 0 1 0 0 -1 ] )

which satisfies:

(c_E_7)^18 = Id

In defining the elements w involved in the S-fold projections we will also use the Coxeter element of the E_6 subalgebra:

c_{E_6⊂E_7} = ∏_{i=2}^{7} s_i = ( [ 1 0 0 0 0 0 0 ; 1 0 0 1 0 -1 -1 ; 0 1 0 1 0 -1 -1 ; 0 0 1 1 0 -1 -1 ; 0 0 0 1 0 -1 0 ; 0 0 0 0 1 -1 0 ; 0 0 0 1 0 0 -1 ] )

which satisfies:

(c_{E_6⊂E_7})^12 = Id

The degrees and codegrees of E_7 are tabulated in Table <ref>. Let us now consider the exceptional S-fold theories parametrized by k=3,4,6.

* Case k=3, G_26: The CB and charge lattice can be computed with (<ref>) and (<ref>) respectively, with:

w = (c_E_7)^6

The theory is a rank 3 SCFT with CB ℂ^3/G_26, where G_26 is the ECCRG with degrees 6, 12 and 18. There are two independent codimension-1 singularities, which correspond to two rank-2 CBs with geometry ℂ^2/G_5 and ℂ^2/G(3,1,2), respectively. The slice transverse to the G_5 singularity is ℂ/ℤ_2, while the slice transverse to the G(3,1,2) singularity is ℂ/ℤ_3. One can compute the order of the 1-form symmetry groups of the rank-1 theories supported on these singularities from the charge lattice, and we find:

ℤ_2 singularity: G^(1) = ℤ_2 ,  ℤ_3 singularity: G^(1) = ℤ_3

There is no rank-1 𝒩=2 SCFT with a ℤ_3 1-form symmetry, therefore we conclude that the ℤ_3 singularity is empty and supports a discrete gauging of free U(1) 𝒩=4 Maxwell theory. Comparing with the analysis of the central charges performed in <cit.>, the only option is that the ℤ_2 singularity is empty as well, and therefore the G_26 theory is itself a discrete gauging of free U(1)^3 𝒩=4 gauge theory.

* Case k=4, G_8: The CB and charge lattice can be computed with (<ref>) and (<ref>) respectively, with:

w = (c_{E_6⊂E_7})^3

The theory is a rank 2 SCFT with CB ℂ^2/G_8, where G_8 is the ECCRG with degrees 8 and 12. This theory is believed to be the same as the exceptional S-fold SCFT of type E_6 with k=4 studied in Section <ref>. The rank-1 theory supported on the codimension-1 singularity has a ℤ_2 1-form symmetry. Then, following the same arguments as in Section <ref>, we find that this theory must be a discrete gauging of free U(1)^2 𝒩=4 gauge theory.

* Case k=6, G_26: The CB and charge lattice can be computed with (<ref>) and (<ref>) respectively, with:

w = (c_E_7)^3

The theory is a rank 3 SCFT with CB ℂ^3/G_26, where G_26 is the ECCRG with degrees 6, 12 and 18. Performing the same computations as in the k=3 case, we find that this theory must be a discrete gauging of free U(1)^3 𝒩=4 gauge theory as well.

We argued that all the exceptional S-fold theories of type E_7 are not interacting SCFTs but rather discrete gaugings of free theories. In Section <ref> we will give additional evidence for this claim by showing that it is not possible to define a consistent charge lattice on the CBs of these theories.

§.§ S-folds from the (2,0) E_8 theory

In this section we consider the exceptional S-fold theories obtained from the (2,0) six-dimensional theory of type E_8. Our main result is that the exceptional S-fold SCFTs obtained from the E_8 theories with k=3,6 are discrete gaugings of free U(1)^r 𝒩=4 gauge theory, where r is the rank of the theory, see Table <ref>. On the other hand, the exceptional S-fold SCFT of type E_8 with k=4, also known as the G_31 theory, passes all consistency checks, therefore we expect it to be a non-trivial interacting 𝒩=3 SCFT.
Considering also our results for the exceptional S-fold theories of type E_6 and E_7, the G_31 theory is the only exceptional S-fold SCFT of type E which is a proper interacting theory. We also compute the 1-form symmetry group of the G_31 theory and find it to be trivial.

We work in a basis of the algebra E_8 given by simple roots α_i such that the Cartan matrix is the one in Figure <ref>. The reflections along the simple roots are denoted as s_i and the corresponding Coxeter element of E_8 is:

c_E_8 = ∏_{i=1}^{8} s_i = ( [ 0 0 0 0 1 0 -1 -1 ; 1 0 0 0 1 0 -1 -1 ; 0 1 0 0 1 0 -1 -1 ; 0 0 1 0 1 0 -1 -1 ; 0 0 0 1 1 0 -1 -1 ; 0 0 0 0 1 0 -1 0 ; 0 0 0 0 0 1 -1 0 ; 0 0 0 0 1 0 0 -1 ] )

which satisfies:

(c_E_8)^30 = Id

We also report the explicit expressions for s_7 and s_8, which are used in defining the elements w, see Table <ref>:

s_7 = ( [ 1 0 0 0 0 0 0 0 ; 0 1 0 0 0 0 0 0 ; 0 0 1 0 0 0 0 0 ; 0 0 0 1 0 0 0 0 ; 0 0 0 0 1 0 0 0 ; 0 0 0 0 0 1 0 0 ; 0 0 0 0 0 1 -1 0 ; 0 0 0 0 0 0 0 1 ] ) ,  s_8 = ( [ 1 0 0 0 0 0 0 0 ; 0 1 0 0 0 0 0 0 ; 0 0 1 0 0 0 0 0 ; 0 0 0 1 0 0 0 0 ; 0 0 0 0 1 0 0 0 ; 0 0 0 0 0 1 0 0 ; 0 0 0 0 0 0 1 0 ; 0 0 0 0 1 0 0 -1 ] )

* Case k=3 or 6, G_32: The exceptional S-folds of type E_8 with k=3 and k=6 give rise to the same field theory. The CB and charge lattice can be computed with (<ref>) and (<ref>) respectively, with:

w = (c_E_8)^10 for k=3 ,  w = (c_E_8)^5 for k=6

The CB is ℂ^4/G_32 with G_32 the ECCRG with degrees 12, 18, 24 and 30. There is only one codimension-1 singularity up to G_32 transformations. The transverse slice to this singularity is ℂ/ℤ_3, and the 1-form symmetry group of the theory supported on this singularity is ℤ_3. There is no rank-1 𝒩=2 theory compatible with a ℂ/ℤ_3 CB and with a ℤ_3 1-form symmetry group, therefore this singularity must be empty. Then the G_32 theory itself must be a discrete gauging of free U(1)^4 𝒩=4 gauge theory.

* Case k=4, G_31: The CB of the S-fold theory of type E_8 with k=4 can be computed with (<ref>), where:

w = ( c_E_8 (s_7 s_8)^{-1} c_E_8 c_E_8 (s_7 s_8)^{-1} )^6

The CB is ℂ^4/G_31, where G_31 is the ECCRG with degrees 8, 12, 20 and 24. Notice that in order for the S-fold theory to be rank 4, w must have four eigenvalues equal to i. Since w is real it must also have four eigenvalues equal to -i, and thus w^2 = -Id. Then one can consider the states:

|α,(p,q)⟩_short = 1/√(2) ∑_{t=0}^{1} |w^t · α ; (ρ_4)^t · (p,q)⟩

where α is a root of E_8. The states (<ref>) are invariant under the S-fold action for any α and (p,q), therefore we consider the charge lattice Γ spanned by the charges of the states (<ref>). A basis for this lattice is given by the charges q_i of the states |α_i,(1,0)⟩_short obtained from the W-bosons associated with the simple roots α_i of E_8, i=1,…,8:

q_1 = 1/√(2) ( 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0 )
q_2 = 1/√(2) ( 0,-1, 1,-2, 0,-3, 0,-4, 0,-5, 0,-4, 0,-2, 0,-2 )
q_3 = 1/√(2) ( 0, 1, 0, 2, 1, 3, 0, 4, 0, 4, 0, 3, 0, 1, 0, 2 )
q_4 = 1/√(2) ( 0, 0, 0,-1, 0,-2, 1,-2, 0,-2, 0,-1, 0, 0, 0,-1 )
q_5 = 1/√(2) ( 0,-1, 0,-1, 0,-1, 0,-2, 1,-3, 0,-2, 0,-1, 0,-2 )
q_6 = 1/√(2) ( 0, 0, 0, 1, 0, 1, 0, 2, 0, 3, 1, 2, 0, 1, 0, 2 )
q_7 = 1/√(2) ( 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 )
q_8 = 1/√(2) ( 0, 1, 0, 1, 0, 2, 0, 3, 0, 4, 0, 2, 0, 1, 1, 2 )

The Dirac pairing J_G_31 in this basis can be computed with (<ref>):

J_G_31 = ( [ 0 0 0 1 -1 -1 1 1 ; 0 0 0 0 0 1 0 -1 ; 0 0 0 -1 1 -1 1 0 ; -1 0 1 0 0 0 -1 0 ; 1 0 -1 0 0 0 0 1 ; 1 -1 1 0 0 0 0 -1 ; -1 0 -1 1 0 0 0 0 ; -1 1 0 0 -1 1 0 0 ] )

The order of the 1-form symmetry group of the G_31 theory is given by the absolute value of the Pfaffian of J_G_31:

|G^(1)| = |Pf[J_G_31]| = 1

Therefore the G_31 theory has trivial 1-form symmetry.
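For an antisymmetric integer matrix det J = (Pf J)^2, so |Pf J_G_31| can be checked directly from the matrix above (assuming it is transcribed correctly):

```python
import numpy as np

# Dirac pairing of the G_31 theory, copied from the equation above.
J = np.array([[ 0, 0, 0, 1,-1,-1, 1, 1],
              [ 0, 0, 0, 0, 0, 1, 0,-1],
              [ 0, 0, 0,-1, 1,-1, 1, 0],
              [-1, 0, 1, 0, 0, 0,-1, 0],
              [ 1, 0,-1, 0, 0, 0, 0, 1],
              [ 1,-1, 1, 0, 0, 0, 0,-1],
              [-1, 0,-1, 1, 0, 0, 0, 0],
              [-1, 1, 0, 0,-1, 1, 0, 0]])
assert (J == -J.T).all()
print(round(abs(np.linalg.det(J)) ** 0.5))   # -> 1, i.e. |Pf J_G31| = 1
```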
Let us now consider the stratification of the CB ℂ^4/G_31. There is one codimension-1 singularity up to G_31 transformations, and the sublattice of Γ corresponding to the states becoming massless on this singularity is compatible with SU(2) 𝒩=4 SYM. The transverse slice to this singularity is ℂ/ℤ_2, which is compatible with the CB of SU(2) 𝒩=4. This stratification is consistent with the central charge formulae of <cit.>, as already checked in <cit.>. We briefly review the relevant computations here for ease of readability. The Shapere-Tachikawa formula with a=c allows us to compute the central charges:

2c = ∑_{j=1}^{r} Δ_j - r/2 = 8 + 12 + 20 + 24 - 2 = 62

The formulae of <cit.>, on the other hand, relate the central charges with the data of the theory supported on the codimension-1 singularity. For the G_31 theory this formula reads:

12c = 3r + Δ^sing b = 12 + 120 b

where Δ^sing = l.c.m.(Δ_i) is the scaling dimension of the codimension-1 singularity and b is associated to the data of the rank-1 theory supported on this singularity by (<ref>). Solving for b we find:

b = 3

which is consistent with having SU(2) 𝒩=4 SYM as the theory supported on the codimension-1 singularities.

To summarize, we have found that the exceptional S-fold theories of type E_8 with k=3,6 are discrete gaugings of free theories. The exceptional S-fold theory of type E_8 with k=4, denoted as the G_31 theory, passes all the consistency checks that we have at our disposal. We are then led to claim that the G_31 theory is the only exceptional S-fold theory of type E_6,7,8 that is a proper interacting SCFT. We have also found that the 1-form symmetry group of the G_31 theory is trivial.

§ CHARGE LATTICES IN 𝒩=2 SCFTS WITH ϰ≠{1,2}

One of the main features of S-fold SCFTs is that they are maximally strongly coupled theories. This means that whenever a charged state becomes massless, another charged state which is mutually non-local with respect to the first one becomes massless as well. Then at any non-generic point of the CB the low energy theory is strongly coupled and does not admit a conventional lagrangian description. On the one hand this fact renders the study of S-fold SCFTs challenging, because the only vacua where perturbative techniques are viable are the most generic points of the CB, where the low energy theory is simply U(1)^r with no massless charged states. On the other hand, as was shown in <cit.>, maximally strongly coupled theories have to satisfy a series of non-trivial constraints, and are quite restricted as a result.

Motivated by the results of <cit.>, in this Section we study the charge lattices of a large class of 𝒩=2 SCFTs that are maximally strongly coupled, namely 𝒩=2 SCFTs with characteristic dimension ϰ≠{1,2}. All the regular and exceptional S-fold SCFTs belong to this class of theories, except for the known cases where SUSY enhances to 𝒩=4 <cit.>. This Section is meant to be readable independently from the rest of this paper, therefore there are some repetitions between this and the other Sections. For the rest of this section we only consider interacting rank-r SCFTs with ϰ≠{1,2}. Our main results are:

The order of the 1-form symmetry group G^(1) of an 𝒩=2 rank-2 SCFT with ϰ≠{1,2} satisfies 1 ≤ |G^(1)| ≤ 4. The upper bound can only be saturated by stacks of lower rank theories.

An 𝒩=2 SCFT with ϰ≠{1,2} and rank r≥2 that is not a stack of lower rank theories must have at least one codimension-1 singularity that supports (a discrete gauging of) 𝒩=2^* SU(2) SYM.
We also apply the techniques that we develop to the exceptional S-fold theories, which provides an independent argument for claiming that most of these theories are not interacting SCFTs.

Let us review the definitions and results of <cit.> that will be useful in this Section. The characteristic dimension of an 𝒩=2 SCFT is defined as follows. Write the degrees of CB invariants Δ_i as:

( Δ_1, …, Δ_N ) = λ ( d_1, …, d_N ) ,  d_i ∈ ℕ ,  gcd(d_1, …, d_N) = 1

Then the characteristic dimension is defined as:

ϰ = 1/{λ^{-1}}

where {x} is defined as the unique real number such that {x} = x mod 1 and 0 < {x} ≤ 1. The characteristic dimension can only take eight values:

ϰ ∈ { 1, 6/5, 4/3, 3/2, 2, 3, 4, 6 }

An SCFT with ϰ≠{1,2} is maximally strongly coupled, and for any state with charge q and central charge Z[q] ≠ 0 there is another state with charge p and central charge Z[p] = ζ Z[q], where:

ζ = e^{2π i/3} for ϰ = 3, 3/2 ;  ζ = i for ϰ = 4, 4/3 ;  ζ = e^{2π i/6} for ϰ = 6, 6/5

therefore the charge lattice Γ is mapped to a lattice ℤ[ζ]^r by the central charge Z. The Dirac pairing between two charges q and p can be written as:

⟨ q, p ⟩ = 1/(ζ - ζ̄) ( H(q,p) - H(p,q) )

Here, and in the remainder of this Section, with an abuse of notation we denote with the same symbol the electromagnetic charges in Γ and the corresponding elements in the lattice ℤ[ζ]^r. H is a positive definite Hermitian form:

H: ℤ[ζ]^r × ℤ[ζ]^r → ℤ[ζ]

An important implication of having ϰ≠{1,2} is that when a state with charge q becomes massless, then a state with charge p = ζ q becomes massless as well. Then we can always choose a basis of the lattice of charges becoming massless at a codimension-1 singularity that is of the form { q, ζ q }.

§.§ 1-form symmetries of rank-2 𝒩=2 SCFTs with ϰ≠{1,2}

Consider a rank-2 𝒩=2 SCFT with ϰ≠{1,2}. We choose a basis of the charge lattice Γ of the form { q_1, ζ q_1, q_2, ζ q_2 }:

Γ = Span_ℤ { q_1, ζ q_1, q_2, ζ q_2 }

where q_1 and ζ q_1 become massless at some codimension-1 singularity ℋ_1, and q_2 and ζ q_2 become massless at some other codimension-1 singularity ℋ_2. The charges q_1 and q_2 must be linearly independent for Γ to have dimension 4. An interesting quantity to consider is the absolute value of the Pfaffian of the Dirac pairing J, which is an invariant of the charge lattice and intuitively tells us how sparse the charge lattice is. More precisely, in <cit.> it was shown that this quantity is equal to the order of the 1-form symmetry group, which in turn is related to how much the charge lattice can be refined without breaking the Dirac quantization condition. The number of charges that can be added in the fundamental domain of the charge lattice while preserving the Dirac quantization condition is given by |Pf[J]| minus 1.

Consider the rank-1 theory 𝒯_i supported on the codimension-1 singularity ℋ_i. Its charge lattice is spanned by { q_i, ζ q_i } and the Dirac pairing J_ℋ_i is such that:

|Pf[J_ℋ_i]| = | Pf( [ 0 ⟨ q_i, ζ q_i ⟩ ; -⟨ q_i, ζ q_i ⟩ 0 ] ) | = |⟨ q_i, ζ q_i ⟩| = H(q_i, q_i)

where in the last equality we used (<ref>). 𝒯_i is an 𝒩=2 rank-1 SCFT, therefore the 1-form symmetry is either ℤ_2, if this theory is (a discrete gauging of) 𝒩=2^* SU(2) SYM, or trivial in any other case. Therefore we found:

H(q_i, q_i) = 2 if 𝒯_i is (a discrete gauging of) 𝒩=2^* SU(2) SYM ;  H(q_i, q_i) = 1 otherwise

Now let us compute the Pfaffian of the Dirac pairing J^(2) of the rank-2 theory itself.
We find:

|Pf[J^(2)]| = | Pf( [ 0 ⟨q_1,ζq_1⟩ ⟨q_1,q_2⟩ ⟨q_1,ζq_2⟩ ; ⟨ζq_1,q_1⟩ 0 ⟨ζq_1,q_2⟩ ⟨ζq_1,ζq_2⟩ ; ⋯ ⋯ 0 ⟨q_2,ζq_2⟩ ; ⋯ ⋯ ⟨ζq_2,q_2⟩ 0 ] ) |
= | ⟨q_1,ζq_1⟩⟨q_2,ζq_2⟩ - ⟨q_1,q_2⟩⟨ζq_1,ζq_2⟩ + ⟨ζq_1,q_2⟩⟨q_1,ζq_2⟩ |
= H(q_1,q_1) H(q_2,q_2) - |H(q_1,q_2)|^2

where we dropped the absolute value in the last line because the Cauchy-Schwarz inequality ensures that the last expression is positive. We are now able to determine upper and lower bounds for this quantity:

1 ≤ |Pf[J^(2)]| ≤ H(q_1,q_1) H(q_2,q_2) ≤ 4

The first inequality holds because the Dirac pairing is integer and non-degenerate, while the last inequality follows from the analysis (<ref>) of the rank-1 theories supported on the codimension-1 singularities. The inequality:

|Pf[J^(2)]| ≤ H(q_1,q_1) H(q_2,q_2)

is saturated only if H(q_1,q_2) vanishes. Then, in order to have a rank-2 SCFT with ϰ≠{1,2} and |Pf[J^(2)]| = 4, it is necessary that for every choice of codimension-1 singularities ℋ_1, ℋ_2 the theories supported on the singularities are (discrete gaugings of) 𝒩=2^* SU(2) SYM and that H(q_1,q_2) = 0. This means that, for every choice of ℋ_1, ℋ_2, the charges becoming massless on ℋ_1 are mutually local with respect to the charges becoming massless on ℋ_2. The rank-2 theory then must be a stack of the rank-1 theories supported on the ℋ_i. We are not interested in theories that are stacks of lower rank theories, therefore we can drop the equal sign in (<ref>), and (<ref>) reduces to:

1 ≤ |Pf[J^(2)]| ≤ 3

As already discussed, the absolute value of the Pfaffian of the Dirac pairing is equal to the order of the 1-form symmetry group, therefore we arrive at our first claim:

The order of the 1-form symmetry group G^(1) of an 𝒩=2 rank-2 SCFT with ϰ≠{1,2} satisfies 1 ≤ |G^(1)| ≤ 4. The upper bound can only be saturated by stacks of lower rank theories.

Consider now a theory where H(q_i,q_i) = 1 for every choice of ℋ_i. As we just discussed, equation (<ref>) is only saturated for rank-2 theories that are stacks of rank-1 theories. Then for an 𝒩=2 rank-2 SCFT with ϰ≠{1,2} that is not a stack of lower rank theories we have:

1 ≤ |Pf[J^(2)]| < 1

This is a contradiction, and signals that on such a CB it is not possible to define a consistent charge lattice. We can then formulate our second claim in the case of rank-2:

A rank-2 𝒩=2 SCFT with ϰ≠{1,2} that is not a stack of lower rank theories must have at least one codimension-1 singularity that supports (a discrete gauging of) 𝒩=2^* SU(2) SYM.

As we will see in the next Section, this second claim generalizes to arbitrary rank. In the context of 𝒩=3 exceptional S-fold SCFTs the second claim already rules out some of the CB geometries. The most straightforward to study are the G_5 theory, which we studied in Section <ref>, as well as the G_4 theory, which can be constructed from the D_4 (2,0) six-dimensional theory with an S-fold procedure in the presence of an outer automorphism twist. Both these theories are maximally strongly coupled and only have codimension-1 singularities with a transverse slice ℂ/ℤ_3, which cannot support a discrete gauging of 𝒩=2^* SU(2) SYM.

The G_8 theory, studied in Section <ref>, is more subtle. There is one codimension-1 singularity with transverse slice ℂ/ℤ_4. Our second claim then imposes that the theory supported on this singularity is a ℤ_4 gauging of 𝒩=4 SU(2) SYM. On the other hand, the analysis of the central charges with the formulae of <cit.> is not consistent with this choice, as already computed in <cit.> and as we discussed in Section <ref>.
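The Pfaffian identity used above can also be checked numerically. The sketch below picks ϰ = 3 (so ζ = e^{2πi/3}) and a generic positive-definite Hermitian form H; the convention that H is antilinear in its second argument is our assumption, but the final identity is insensitive to it:

```python
import numpy as np

zeta = np.exp(2j * np.pi / 3)                    # kappa = 3
H = np.array([[2.0, 0.7 - 0.3j],
              [0.7 + 0.3j, 1.5]])                # generic pos.-def. Hermitian form

def dirac(a, i, b, j):
    """<zeta^a q_i, zeta^b q_j> = (H(x,y) - H(y,x)) / (zeta - conj(zeta))."""
    Hxy = zeta**a * np.conj(zeta**b) * H[i, j]
    Hyx = zeta**b * np.conj(zeta**a) * H[j, i]
    return ((Hxy - Hyx) / (zeta - np.conj(zeta))).real

basis = [(0, 0), (1, 0), (0, 1), (1, 1)]         # q1, zeta q1, q2, zeta q2
J = np.array([[dirac(a, i, b, j) for b, j in basis] for a, i in basis])
pf = J[0, 1] * J[2, 3] - J[0, 2] * J[1, 3] + J[0, 3] * J[1, 2]
print(abs(pf), (H[0, 0] * H[1, 1]).real - abs(H[0, 1]) ** 2)   # equal
```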
To summarize, we find that the exceptional S-fold theories G_4, G_5 and G_8 cannot be interacting SCFTs, because it is not possible to define a consistent charge lattice on their CBs. Therefore these theories must be discrete gaugings of free U(1)^r 𝒩=4 gauge theories, which is the only other possibility.

§.§ A constraint for the stratification of 𝒩=2 SCFTs

Consider a rank-r 𝒩=2 SCFT with ϰ≠{1,2} such that the rank-1 theories supported on all codimension-1 singularities are SCFTs with trivial 1-form symmetries. We can choose a basis of the charge lattice Γ such that:

Γ = Span_ℤ { q_1, ζ q_1, …, q_r, ζ q_r }

where q_i and ζ q_i become massless at some codimension-1 singularity ℋ_i and generate the charge lattice of the rank-1 theory supported there. Then we have:

H(q_i, q_i) = 1  ∀ i = 1,…,r

because H(q_i,q_i) is equal to the order of the 1-form symmetry group of the theory supported on ℋ_i, which is trivial by hypothesis. The Cauchy-Schwarz inequality, together with the fact that all the q_i are linearly independent, imposes:

|H(q_i, q_j)|^2 < H(q_i,q_i) H(q_j,q_j) = 1  ∀ i ≠ j

On the other hand, |H(q_i, q_j)|^2 must be an integer, because it can be written as an integer linear combination of Dirac pairings:

|H(q_i, q_j)|^2 = ⟨ q_i, q_j ⟩⟨ ζ q_i, ζ q_j ⟩ - ⟨ ζ q_i, q_j ⟩⟨ q_i, ζ q_j ⟩

Then |H(q_i, q_j)|^2 must vanish, and H(q_i, q_j) vanishes as well as a consequence. The resulting Dirac pairing matrix is block diagonal with only 2×2 blocks:

J^(r) = diag{ ( [ 0 1 ; -1 0 ] ), ( [ 0 1 ; -1 0 ] ), …, ( [ 0 1 ; -1 0 ] ) }

This is the case for every choice of codimension-1 singularities ℋ_i, therefore the states becoming massless at any singularity ℋ_i are mutually local with respect to the states becoming massless at any other singularity. Then the rank-r theory must be a stack of r rank-1 theories. Therefore, in order to have a rank-r SCFT that is not a stack of lower rank theories, one must allow for codimension-1 singularities ℋ_i that support either (a discrete gauging of) 𝒩=2^* SU(2) SYM, which would imply that H(q_i,q_i) = 2, or an IR-free theory. This is our second claim:

An 𝒩=2 SCFT with ϰ≠{1,2} and rank r≥2 that is not a stack of lower rank theories must have at least one codimension-1 singularity that supports (a discrete gauging of) 𝒩=2^* SU(2) SYM.

This claim rules out the exceptional S-fold theories G_25 and G_32 as possible interacting SCFTs. Indeed, in both cases all codimension-1 singularities have transverse slice ℂ/ℤ_3, which cannot support a discrete gauging of 𝒩=2^* SU(2) SYM, and both theories are maximally strongly coupled and cannot have IR-free theories supported on any singularity. In the case of the G_26 theory, studied in Section <ref>, there is only one singularity with transverse slice ℂ/ℤ_2, which could support 𝒩=2^* SU(2) SYM, while the other singularity has transverse slice ℂ/ℤ_3. However, as computed in <cit.> and discussed above, this choice is inconsistent with the central charge formulae of <cit.>. To summarize, we find that the G_25, G_26 and G_32 theories cannot be interacting SCFTs because it is not possible to define a consistent charge lattice on their CBs. Therefore these theories must be discrete gaugings of free theories, which is the only other possibility.

The only exceptional S-fold SCFT obtained from the (2,0) E_n six-dimensional theories that satisfies our constraint is the G_31 theory. This is the SCFT obtained from the E_8 (2,0) theory with an exceptional S-fold of order k=4. The CB is ℂ^4/G_31 and has one codimension-1 singularity with transverse slice ℂ/ℤ_2.
Our constraint then imposes that the rank-1 theory supported on this singularity is SU(2) 𝒩=4 SYM, which is consistent with the analysis of the central charges performed in <cit.>. We claim that this theory is a proper interacting rank-4 SCFT, but it must be noted that the possibility of having a discrete gauging of U(1)^4 𝒩=4 gauge theory is also consistent with all the constraints that are available. To solve this ambiguity it would be desirable to analyze the spectrum of charged operators directly from the M-theory setup of <cit.>, but we leave this problem to future work.

§.§ Other exceptional S-folds

In this paper we have analyzed the exceptional S-fold theories obtained from the (2,0) E_n six-dimensional theories, but it is also possible to define similar M-theory setups involving the (2,0) D_n theories, with or without outer automorphism twists, and generalizations to non-simply laced algebras have also been considered. We leave the detailed analysis of the charge lattices of the resulting theories to future work, but we can easily check whether the CBs of such theories, computed in <cit.>, satisfy our consistency conditions.

The CBs of the S-fold SCFTs obtained from the (2,0) D_n theories are ℂ^r/G(k,m,r) for k=4,6 and m a divisor of k. In all these cases there is at least one codimension-1 singularity with transverse slice ℂ/ℤ_2, which can support SU(2) 𝒩=4 SYM, therefore these theories satisfy our consistency condition. The case of non-simply laced algebras generates one CB geometry that does not appear in the exceptional S-fold theories of type E_n and D_n, namely ℂ^2/G_12. Here G_12 is the exceptional complex reflection group with degrees 6 and 8. In this geometry there are codimension-1 singularities with transverse slice ℂ/ℤ_2 that can support SU(2) 𝒩=4 SYM, therefore this theory satisfies our consistency checks.

As already discussed in <cit.>, there are four possible geometries associated to ECCRGs that do not appear in any known construction but are consistent CBs for putative 𝒩=3 SCFTs. These geometries are ℂ^3/G_24, ℂ^4/G_29, ℂ^5/G_33 and ℂ^6/G_34, where the G_i are ECCRGs. In all cases there are codimension-1 singularities that have transverse slice ℂ/ℤ_2 and could support SU(2) 𝒩=4 SYM, therefore all these CBs satisfy our consistency condition.

§ CONCLUSION

In this paper we studied the exceptional S-fold SCFTs discovered in <cit.> and their associated charge lattices. In Section <ref> we analyzed explicitly all the exceptional (E_6,7,8, k) theories, computing their charge lattices by generalizing the techniques of <cit.>. Furthermore, we considered the sublattices of charges that become massless at codimension-1 singularities of the CB. These sublattices must correspond to the charge lattices of some rank-1 𝒩=2 SCFT, because S-fold SCFTs are maximally strongly coupled. Moreover, this match must be consistent with the constraints on the central charges from <cit.>. We find that the charge lattices of most of the (E_6,7,8, k) theories do not satisfy these constraints, therefore they cannot be interacting SCFTs. Among the exceptional (E_6,7,8, k) theories the only one that admits a consistent charge lattice is the S-fold of type E_8 with k=4, called the G_31 theory. Thus we claimed that the G_31 theory is an interacting SCFT, while all the other (E_6,7,8, k) theories are discrete gaugings of free theories. In Section <ref> we provided additional evidence for this claim by studying the charge lattices of 𝒩=2 SCFTs with characteristic dimension ϰ≠{1,2}.
By exploiting the results and the formalism developed in <cit.> we computed an upper bound for the 1-form symmetries of rank-2 theories with 𝒩=2, denoted as Claim <ref> throughout this paper, and we found a consistency constraint for the CB stratification of such SCFTs at any rank, denoted as Claim <ref> throughout this paper. When applied to the case of exceptional S-fold SCFTs, this constraint, in combination with other constraints from <cit.>, corroborates our results spelled out above.

There are multiple directions our work can be extended towards. As we already commented in the main body of this paper, the G_31 theory passes all our consistency checks, but this does not guarantee that this theory is indeed an interacting SCFT: it may be a discrete gauging of a free theory. A possible way to determine whether this is the case would be to compute the 2-form symmetries of this theory directly from the M-theory construction. Indeed, if the 2-form symmetry group is not trivial and can be gauged, then the G_31 theory should be a discrete gauging of some “parent” theory. This is the approach adopted in <cit.> to understand the presence of discrete gauging in regular S-folds. Our procedure could be applied to study the generalized symmetries in various classes of 4d SCFTs, including the (D_n, k) theories, 𝒩=2 S-folds <cit.> and 𝒩=2 SCFTs with ϰ≠{1,2}. In particular, in <cit.> some putative rank-2 𝒩=3 CB geometries were found to possess complex singularities, which are usually generated by discrete gauging [We are grateful to Mario Martone for suggesting this possibility to us.] <cit.>. It would be interesting to perform an analysis similar to the one spelled out in this paper in order to understand whether these singularities can support an interacting theory or whether they arise from discrete gauging.

Our results on 𝒩=2 SCFTs with ϰ≠{1,2} could be expanded upon in different directions. It would be nice to generalize Claim <ref> to arbitrary rank, and to understand whether a bound for the 1-form symmetry group exists also at higher ranks. Another interesting perspective would be to relax the condition on the characteristic dimension, and study theories with ϰ equal to 1 or 2. This would require a different set of tools than the ones we used in Section <ref>, for example considering the monodromies around CB singularities along the lines of <cit.>.

§ ACKNOWLEDGMENTS

We are extremely grateful to Mario Martone and Philippe Argyres for precious comments and for carefully reading the draft. We are also grateful to Antoine Pasternak for discussions. The work of A.A. and S.R. has been supported in part by the Italian Ministero dell'Istruzione, Università e Ricerca (MIUR) and in part by Istituto Nazionale di Fisica Nucleare (INFN) through the “Gauge Theories, Strings, Supergravity” (GSS) research project.

§ 1-FORM SYMMETRIES OF REGULAR S-FOLDS

In this Appendix we compare the formalism developed in this paper to the 1-form symmetry groups of regular S-fold SCFTs computed in <cit.>. In particular we compute the order of the 1-form symmetry group and find agreement with the results in the literature. The S-fold setup is the one of <cit.>, reviewed in Section <ref>. Let us consider an (A_kN, k) theory with trivial discrete torsion. A possible choice of basis for the charge lattice is given by fundamental strings ((1,0)-strings) or D1-strings ((0,1)-strings), plus their images, stretched between the 1st and i-th D3-brane with i=2,3,…,N,N+2. In the notation introduced above these states are |(1,0)⟩_1,i and |(0,1)⟩_1,i respectively, with i=2,3,…,N,N+2.
Their charges are:

Q[|(1,0)⟩_1,i] = 1/√(k) ( 1,0; …; (-1,0)_i; …; ρ_k·(1,0)_N; …; ρ_k·(-1,0)_{N+i}; … )
Q[|(0,1)⟩_1,i] = 1/√(k) ( 0,1; …; (0,-1)_i; …; ρ_k·(0,1)_N; …; ρ_k·(0,-1)_{N+i}; … )

for i=2,3,…,N, where the subscript on each pair of entries indicates the D3-brane it refers to, and:

Q[|(1,0)⟩_1,N+2] = 1/√(k) ( 1,0; ρ_k^{k-1}·(-1,0); …; ρ_k·(1,0)_N; (-1,0)_{N+2}; … )
Q[|(0,1)⟩_1,N+2] = 1/√(k) ( 0,1; ρ_k^{k-1}·(0,-1); …; ρ_k·(0,1)_N; (0,-1)_{N+2}; … )

The elements ρ_k of SL(2,ℤ) are reported in Table <ref>. In the ordered basis ℬ for the charge lattice:

ℬ = { |(1,0)⟩_1,N+2, |(0,1)⟩_1,N+2, |(1,0)⟩_1,i |_{i=2,3,…,N}, |(0,1)⟩_1,i |_{i=2,3,…,N} }

the Dirac pairing is represented by the antisymmetric matrix:

J_k,N = ( [ 0 2 a 0 ⋯ 0 b+1 1 ⋯ 1 ; -2 0 c-1 -1 ⋯ -1 d 0 ⋯ 0 ; ⋮ ⋮ 0_{(N-1)×(N-1)} M ; ⋮ ⋮ -M^T 0_{(N-1)×(N-1)} ] )

where the first two columns are fixed by antisymmetry,

M = ( [ 2 1 ⋯ 1 ; 1 2 ⋯ 1 ; ⋮ ⋱ ⋮ ; 1 1 ⋯ 2 ] )

and:

( [ a b ; c d ] ) = ( (ρ_k)^{k-1} )^T · ( [ 0 1 ; -1 0 ] )

As discussed above, the absolute value of the Pfaffian of J_k,N equals the order of the 1-form symmetry group for the corresponding S-fold SCFT in the absence of discrete torsion. These 1-form symmetry groups were computed in <cit.>:

G^(1) = ℤ_3 for k=3 ,  G^(1) = ℤ_2 for k=4 ,  G^(1) = 1 for k=6

and are independent of the rank N. Therefore one expects that:

|Pf(J_k,N)| = 3 for k=3 ,  2 for k=4 ,  1 for k=6 ,  ∀ N

We have checked numerically that Equation (<ref>) holds up to rank N=100. It would be interesting to compute |Pf(J_k,N)| at arbitrary rank; we leave this problem to future work.
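As an illustration, the numerical check can be reproduced with a few lines of code. The sketch below builds J_k,N with the block structure written above; the ρ_k are standard order-k elements of SL(2,ℤ) chosen here only for illustration (the values actually used are those of Table <ref>), so the output should be treated as a sanity check of the construction rather than a substitute for it:

```python
import numpy as np

rho = {3: np.array([[0, 1], [-1, -1]]),   # illustrative order-k elements of
       4: np.array([[0, 1], [-1,  0]]),   # SL(2,Z); the paper's values are
       6: np.array([[1, 1], [-1,  0]])}   # those of Table <ref>

def J_kN(k, N):
    (a, b), (c, d) = np.linalg.matrix_power(rho[k], k - 1).T @ np.array([[0, 1], [-1, 0]])
    n = N - 1
    U = np.zeros((2 * N, 2 * N), dtype=int)        # strictly upper triangle
    U[0, 1] = 2
    U[0, 2], U[0, 2 + n] = a, b + 1                # pairings with the i = 2 strings
    U[0, 3 + n:] = 1                               # F(1,N+2) vs D(1,i>2)
    U[1, 2], U[1, 2 + n] = c - 1, d
    U[1, 3:2 + n] = -1                             # D(1,N+2) vs F(1,i>2)
    U[2:2 + n, 2 + n:] = np.ones((n, n), dtype=int) + np.eye(n, dtype=int)  # M
    return U - U.T                                 # antisymmetrize

for k in (3, 4, 6):
    pf = {round(abs(np.linalg.det(J_kN(k, N))) ** 0.5) for N in range(2, 31)}
    print(k, pf)   # |Pf| via det = Pf^2; expect {3}, {2}, {1} independent of N
```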
^1Institut für Materialphysik im Weltraum, Deutsches Zentrum für Luft- und Raumfahrt (DLR), 51170 Köln, Germany
^2Department of Physics, Heinrich-Heine-Universität Düsseldorf, Universitätsstraße 1, 40225 Düsseldorf, Germany
^3Université Grenoble Alpes, CNRS, Grenoble INP, SIMaP F-38000 Grenoble, France
^4Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, F-38000 Grenoble, France

Neural network potentials are a powerful tool for atomistic simulations, allowing one to accurately reproduce ab initio potential energy surfaces with computational performance approaching classical force fields. A central component of such potentials is the transformation of atomic positions into a set of atomic features in a most efficient and informative way. In this work, a feature selection method is introduced for high-dimensional neural network potentials, based on the Adaptive Group Lasso (AGL) approach. It is shown that the use of an embedded method, taking into account the interplay between features and their action in the estimator, is necessary to optimize the number of features. The method's efficiency is tested on three different monoatomic systems, including Lennard-Jones as a simple test case, Aluminium as a system characterized by predominantly radial interactions, and Boron as representative of a system with strongly directional interactions. The AGL is compared with unsupervised filter methods and found to perform consistently better in reducing the number of features needed to reproduce the reference simulation data. In particular, our results show the importance of taking into account model predictions in feature selection for interatomic potentials.

Feature Selection for High-Dimensional Neural Network Potentials with the Adaptive Group Lasso

Johannes Sandberg^1,2,3, Thomas Voigtmann^1,2, Emilie Devijver^4, Noel Jakse^3

January 14, 2024

§ INTRODUCTION

During the last decade, Machine Learning Interaction Potentials (MLIPs) have become a commonplace method for Molecular Dynamics simulations in material science and chemistry <cit.>, following a broader trend of data-driven approaches in material science <cit.>. Ab initio simulations, using for instance Density Functional Theory (DFT) force calculations <cit.>, have good accuracy and broad applicability, but suffer from poor scalability. Being trained to reproduce ab initio forces and energies, MLIPs were shown to combine many of the benefits of ab initio with the scalability and performance of classical force fields <cit.>, thereby opening up new avenues of research into nucleation <cit.>, structure-property relationships in alloys <cit.>, and amorphous solids <cit.>, to name a few.

A wide variety of MLIPs have been proposed, often relying on a local decomposition of the high-dimensional potential energy into a sum of local contributions. Methods such as the spectral neighbor analysis potential <cit.> rely on a linear regression over a set of nonlinear descriptors of the local atomic environment. Nonlinear dependencies can be added by the use of kernel regression, as in the Gaussian approximation potential <cit.>, or by using Neural Networks (NN) as in the deep potential framework <cit.> and the high-dimensional neural network potential <cit.>. More recently, methods based on graph neural networks have seen a lot of traction <cit.>, including methods based on equivariant transformations <cit.>.
Attempts have also been made to go beyond local interactions in what has been referred to as the third and fourth generations of machine-learned potentials <cit.>.

For most MLIPs, it is necessary to transform the bare atomic coordinates into a set of atomic descriptors <cit.> describing the local environment of each atom. The purpose of this transformation is to enable a local description, ensure invariance to local symmetry transformations, and to guarantee that the input to the Machine Learning (ML) model is of constant dimension, even as the number of atomic neighbors can change during a simulation. Computing the descriptors is often the main time-consuming part of applying a NN Potential (NNP), compared to the NN evaluation and backpropagation. As such, care is needed when designing the set of atomic features, and in particular one has to weigh the need for a detailed description of the atomic environment against the additional computational cost of having a large feature space. There is also some evidence that larger feature sets can negatively impact generalization <cit.>. Feature selection <cit.> allows for a data-driven way of designing such feature sets by identifying those features out of a larger collection that are the most relevant, and discarding redundant ones.

The simplest approach to feature selection are filter methods. Such methods select features by looking only at the dataset, before training takes place, and are as such model independent. Imbalzano et al. <cit.> proposed three such methods for use with MLIPs. Two of these are based on minimizing the Pearson Correlation (PC) and maximizing the Euclidean distance, respectively, between the selected features. The third one is based on the CUR decomposition <cit.>, which can be regarded as an analogue of the singular value decomposition, constructing a low-dimensional representation of the data matrix but using only rows (columns) of the original matrix, chosen such that the reconstruction error is minimized.

Filter methods can be contrasted with embedded methods, wherein the feature selection process is integrated into the training of a specific model. Such an embedded approach allows for explicitly taking into account model predictions, as well as interactions between different features <cit.>. A famous embedded method is the lasso <cit.>, based on regularization using the L1 norm of the input parameters of a linear model. Lasso has previously been used to construct MLIPs for a variety of elements based on ridge regression <cit.>, and has been applied beyond MLIPs to predict directly material properties starting from large sets of material descriptors <cit.>. The latter led to the development of the SISSO method <cit.> in the framework of materials discovery, where features are subjected to an initial screening based on their correlation to the target property, before being further selected using the lasso, allowing for selection from billions of candidate material descriptors. However, as it induces sparsity at the level of individual parameters, lasso is not applicable as a feature selection method for NNPs.

While much of the focus in feature selection has traditionally been on linear regression, recent works have tried to extend such methods to the nonlinear case, as required by the nonlinear nature of NNs. Methods based on the Group Lasso (GL) have been applied to NNs as early as 2017 <cit.>.
It has, however, been shown that this direct application of GL to NNs cannot consistently discard truly irrelevant features, a problem that can be avoided by using an adaptive penalty, leading to an Adaptive GL (AGL) approach <cit.>. Another recent method is LassoNet <cit.>, adding bypass connections from each input variable to the NN output, applying a lasso penalty on the bypass weights and using them to constrain the maximum values of the input weights. This change in architecture, however, deviates from the simple networks used in most common NNP implementations, while also introducing an additional hyperparameter that in principle needs to be tuned. For these reasons the AGL might be more directly suitable for NNPs.

In this article, we introduce an approach of feature selection based on the AGL method applied to High-Dimensional NNPs (HDNNPs), with the aim of showing that the use of a method that takes into account the interplay between features in the specific estimator allows for a better selection of atomic fingerprints. This type of NNP model is known to work well for many systems, and has been well studied, making it a natural framework for our study. We consider three different systems: Lennard-Jones (LJ), serving as a simple and well-known generic model whose analytic expression has no explicit angular dependence; Aluminium (Al), which serves as a relatively simple sp bonding metal; and Boron (B), which is known to have a particularly complex structure with a high degree of directional covalent bonding <cit.>. We find that for Al the AGL method is competitive with filter methods. For the other systems it is explicitly shown by example how the filters can fail to select features that are necessary, while they are discovered by our method, illustrating the advantage of an embedded feature selection approach.

The remainder of the article is as follows. Section <ref> provides background on our datasets, the HDNNP approach, the AGL method, and the computational tools used. Section <ref> covers the results of training HDNNPs with AGL, comparing to the CUR and PC methods, as well as simulations used to test the effect of the reduced feature sets in production. Finally, section <ref> provides the main conclusions and outlook of the paper.

§ METHOD

§.§ Datasets

A first step in training an HDNNP is to construct a dataset of reference structures. The dataset for LJ was extracted from a set of LAMMPS <cit.> simulations of 256 atoms at temperatures ranging from 0.5 to 1.5 (LJ units), and densities 0.9 to 1.1, in both solid (fcc) and liquid configurations. We use the standard LJ pair potential, given for interatomic distance r < r_c by

V = 4ϵ( (σ/r)^12 - (σ/r)^6 ).

All the simulations are performed with parameters σ=ϵ=1, particle mass m=1, and cutoff radius r_c=2.8. Figure <ref>(a) shows the thermodynamic states included in the dataset. Each thermodynamic state was sampled 1333 times, with an interval of 0.3 time units (300 timesteps), for a total of 28000 configurations. Note that the coexistence lines in figure <ref>, reproduced from <cit.>, are valid in the limit of infinite cutoff, and are merely included as a visual guide.

In the case of Al, our reference data is the same as in our previous article <cit.>. This dataset consists of 24300 configurations extracted from DFT-based Ab Initio Molecular Dynamics (AIMD) simulations performed in VASP <cit.> using the LDA functional <cit.> in an augmented plane wave framework with a cutoff of 241 eV.
Configurations in the dataset cover fcc, bcc, and hcp crystalline states, and the liquid, at a variety of temperatures and pressures; for the details we refer to the original article <cit.>. Figure <ref>(b) shows the thermodynamic points sampled to construct the dataset. Liquid states, and fcc crystals at ambient pressure, were sampled 1000 times each. The remaining crystal states were each sampled 100 times.

For B, we extract reference configurations from the AIMD trajectories used in <cit.>, complemented with additional simulations for α-rhombohedral, α-tetragonal, and β-rhombohedral crystals at temperatures ranging from 10 K to 2000 K in steps of 200 K, extracted from the Materials Project database <cit.>. Additional high-pressure simulations were also included, to probe the short-range interaction. Figure <ref>(c) shows the thermodynamic state of each simulation trajectory, with the number of configurations drawn from it. Each trajectory was sampled with an interval of 45 fs (30 timesteps), for a total of 45000 configurations. These simulations were performed using the Perdew-Wang GGA functional <cit.> with a 300 eV augmented plane wave cutoff, sampling only the Γ point, for consistency with <cit.>.

In all cases, the simulations were performed in an NVT ensemble with a Nosé thermostat controlling the temperature, and pressure controlled by fixing the volume of the simulation box. To ensure sampling of equilibrium states, each trajectory was preceded by an equilibration period of 500 time units for LJ, and of 100 to 200 ps for Al and B.

§.§ HDNNPs

The interaction between atoms in a material is frequently described in terms of a potential, depending in principle on the positions of all atoms in the many-particle system. This interaction is often short-ranged, and can be treated as a sum of atomic contributions depending only on the local structure of each atom within an appropriate cutoff radius r_c:

E_total = ∑_i=1^N_atoms E_i.

An HDNNP <cit.> is constructed from this decomposition by assigning an NNP to each species of atom, mapping between the local environment and the corresponding atomic energy contribution E_i. The inputs to the HDNNP are the atomic positions, which are transformed into a fingerprint vector for each atom, serving as input to the atomic NNP. Training then consists of fitting the full HDNNP to the total potential energy obtained from ab initio calculations. Often the derivative of the HDNNP is fitted to the ab initio forces as well, but for simplicity, to keep the focus on the feature selection, and following our previous work <cit.>, we train only on energies in this work.

There are many options for choosing atomic descriptors, with <cit.> offering a brief overview of some common types. In this work, we use the Behler-Parrinello symmetry functions (SFs) <cit.>, which are the conventional choice for HDNNPs. These consist of the radial G^2 and angular G^5 SFs defined by

G^2_i = ∑_j e^-η(R_ij-R_s)^2 f_c(R_ij),
G^5_i = 2^1-ζ∑_j,k(1 + Λcosθ_ijk)^ζ e^-η(R_ij^2+R_ik^2+R_jk^2) f_c(R_ij)f_c(R_ik)f_c(R_jk).

Here, R_ij is the distance between atoms i and j, θ_ijk is the angle between atoms j and k with respect to atom i, and f_c(R_ij) is defined as 0 for R_ij > r_c and, for R_ij < r_c, as a polynomial going smoothly to 0 at the neighborhood cutoff R_ij = r_c.
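As an illustration, a radial G^2 symmetry function can be evaluated for a single atom from its neighbor distances in a few lines. The sketch below is an assumption-laden example rather than the N2P2 implementation used in this work; in particular, it uses the common cosine form of the cutoff function in place of the polynomial f_c described above.

\begin{verbatim}
import numpy as np

def g2(r_ij, eta, r_s, r_c):
    # G^2 for one atom: r_ij is an array of distances to its neighbors.
    # A cosine cutoff stands in for the smooth polynomial f_c of the text.
    r = np.asarray(r_ij, dtype=float)
    fc = np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)
    return float(np.sum(np.exp(-eta * (r - r_s) ** 2) * fc))
\end{verbatim}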
The parameters η, ζ, Λ, and R_s allow for defining a set of features by assigning these parameters different values. Here the initial feature sets are generated by selecting parameter values on a grid, akin to the procedures described in <cit.>, with the aim of being sensitive to a range of interatomic radii and angles. The exact SF parameter values used can be found in the supplementary material <cit.>.

§.§ Feature Selection

The main hindrance in applying feature selection methods based on the L1 norm to NNs is the fact that the L1 norm acts on individual weights. In a NN, several weights are associated with each feature, so to do feature selection we need to penalize these weights as a group. The GL replaces the L1 norm with Euclidean norms over groups of parameters. As the Euclidean norm of a parameter group vanishes if and only if all those parameters vanish, this allows for selecting or discarding groups of parameters simultaneously. To select features for NNs using GL, we take the groups to be the input weights of feature i, w_i,[:]^0, with the corresponding Euclidean norm |w_i,[:]^0|. During training we then optimize the objective function

obj(W) = L(W) + λ/N∑_i=1^N |w^0_i,[:]|,

with L being some loss function, in our case the Mean Square Error (MSE), W being the weights of the neural network, N being the number of inputs, and λ being a regularization parameter used to tune the relative strength of the feature selection. A challenge in performing this optimization is the fact that the second term in (<ref>), called the penalty, is non-smooth. In <cit.> a smoothed approximation of (<ref>) is used, but here the non-smooth optimization problem is instead solved directly using a proximal gradient descent algorithm, following <cit.>.

The adaptive version of the algorithm <cit.> uses a separate regularization parameter for each individual weight group. This adapted penalty is constructed from an initial training run using the non-adaptive penalty. The training is then redone with the new penalty, optimizing

obj(W) = L(W) + λ/N∑_i=1^N |w^0_i,[:]|/|ŵ^0_i,[:]|,

with ŵ^0_i,[:] being the values of w^0_i,[:] obtained during the initial training run with the non-adaptive penalty. Depending on the value of λ, some features will have their weights go to zero during training and can thus be discarded. This allows for selecting features by performing a search over this single parameter.

§.§ Computational Tools

Training of HDNNPs was performed using our own code, with the SF calculations being performed using N2P2 <cit.>. For the CUR selection we use the code implementation from <cit.>. Simulations with the trained potentials were performed in LAMMPS <cit.> using the ml-hdnnp plugin provided by N2P2. As mentioned in section <ref>, we use VASP <cit.> for reference ab initio calculations. OVITO <cit.> was used for some post-processing, calculating the Radial Distribution Functions (RDFs).

§ RESULTS AND DISCUSSION

§.§ Lennard-Jones System

As a first test of our method we apply the AGL to the LJ system, where the exact interactions are known. In particular, they are spherically symmetric pair interactions, so one might expect a feature-selection method to successfully discard features pertaining to angular directionality. The initial feature set contains 12 radial SFs, 6 of which are centered on r_ij=0 with varying widths η, with the remaining 6 being centered on regularly spaced r_s and having constant width.
In addition to the radial SFs, 10 angular ones are included, using the same wide centered radial component, with varying angular width ζ, in pairs of +1 and -1 for the Λ parameter. All the SFs use the same cutoff radius, set to the cutoff used in the reference LJ potential, r_c = 2.8. The NNP consists of two hidden layers with 10 neurons each.

For the feature selection, we apply the AGL method described in section <ref> by defining a sequence of regularization parameters λ and training an initial model with the non-adaptive GL (<ref>). This is then used to construct the adaptive penalty (<ref>), with which the model is retrained. Each of these models has its weights randomly chosen at the beginning of the training, referred to as cold initialization, and is trained using the ADAMW optimizer <cit.> with the learning rate set using a learning rate finder <cit.>, and a small weight decay parameter γ=10^-6 applied only to the internal weights so as to not interfere with the feature selection. The batch size was fixed at 256 configurations, and standard input normalization was used, shifting and scaling each feature to have mean 0 and standard deviation 1 over the training dataset. We set aside 10% of the training data as a hold-out validation set to monitor the model performance during training for early stopping. Crucially, for the sake of early stopping we do not monitor just the loss function, but the relevant objective function given by (<ref>) or (<ref>), ending training if it has not improved by more than 10^-7 for 10 epochs. In the absence of early stopping, the training was capped at 1000 epochs for the non-adaptive part, and at 10000 during the adaptive part.

During training with the adaptive penalty, the weights corresponding to some of the inputs will vanish. Following the training for each λ, we identify these weights and freeze them before continuing training without the penalty. This is to avoid the bias that is otherwise known to occur for L1-regularized models <cit.>. Figure <ref>(a) shows the validation Root Mean Square Error (RMSE) for each model along this path, plotted against the number of selected features, both at the end of training with AGL (blue circles) and after continuing without the penalty (orange dots). We note that the regularization introduces a noticeable overestimation of the error associated with the selected feature sets, so continuing the training is necessary to make an informed decision on which set of selected features to choose. In figure <ref>(a) one can observe an initial plateau in the lowest error reached during continued training when going from 22 selected features down to 7. We interpret this as the regime where the AGL method discards unnecessary features, leading to little decrease in performance. Going below 7 features, the model suffers a large increase in error, as a result of having to discard increasingly important features.

Based on figure <ref>(a), we select the model with 7 features, of which 1 is of the angular type given by (<ref>). The selected feature set is tested by training over four different random initializations, with the same training dataset, to ensure the features are not suited to just one part of the weight space. Unlike the models on the regularization path, these models were trained using the cosine annealing with warm restarts learning rate schedule <cit.> in order to speed up convergence.
With this schedule, the learning rate is annealed with a cosine from a large initial value to a small value (10^-8) over a number of weight updates, before the learning rate is reset to its initial value and the process repeated. Here the initial period of the scheduler is set to coincide with one epoch and to double after each reset, ending training after a total of 12 resets (8190 epochs). We likewise test the starting feature set, as well as 7 features selected with the PC and CUR methods of <cit.>. The resulting test errors, evaluated on a held-out test set, are presented in figure <ref>(b), together with the total number of features N and the number of angular features N_G^5. Additionally, we perform a benchmark simulation with each potential, consisting of 2048 atoms simulated in an NVT ensemble for 10000 timesteps. These simulations ran on a single 2.5 GHz Intel Cascade Lake 6248 CPU core, and the average number of simulated timesteps per second of wall time is recorded and shown in figure <ref>(b). We note that the models trained on the features selected with CUR did not allow for a successful benchmark simulation on account of their large error, which will be discussed in more detail below.

It can be seen that there is a strong preference for radial SFs, as one would expect considering the lack of angular dependence in the reference LJ potential. Despite this, a single angular feature was selected by both the AGL and the PC filter. This is not unreasonable, since we train the LJ system with high-density configurations as reference data, where steric repulsion leads to the emergence of a certain short-ranged angular order. The features selected with CUR greatly underperform those selected with the other methods, but we note that CUR performs much better for a larger number of features <cit.>. CUR selected two angular features, which could allow for a better reconstruction of the atomic environment overall by better taking into account the angles, but at the cost of a reduced radial resolution. As the CUR approach acts on the descriptors alone, it is largely incapable of knowing that the energy in the ground truth lacks angular dependence. It should, however, be mentioned that this information could still, to some extent, be indirectly available through the configurations that appear in the sampled MD trajectory used to construct the dataset.

To better illustrate the differences between the feature selection methods, we show in figure <ref>(c) a matrix representing the features selected by each method. The G^2 SFs selected by AGL and CUR are also plotted in figure <ref>, along with the Radial Distribution Function (RDF) extracted from one of the reference simulations. Of note is that CUR discarded three consecutive shifted radial SFs in a regime where the other methods kept at least one. This raises the question of whether adding one of these SFs to the CUR features would recover a good performance. In order to test this, we create two new sets by adding to the CUR features one of the shifted radial SFs selected by AGL but discarded by CUR, marked 7 and 8 in figure <ref>. Adding feature number 8 reduced the test RMSE to 18.4×10^-3 ϵ/atom, which is a modest improvement, but still nowhere near the performance of the other sets. Instead, adding feature number 7 lowered the test RMSE to 9.40×10^-3 ϵ/atom, a clear indication that this is indeed a vitally important feature for this system that the CUR method failed to detect.
With this feature added, the resulting model also allowed for stable simulations to be performed.

§.§ Aluminium

To test the method in a more practical setting, we turn to the case of Al. The SF parameters and network architecture are chosen as in <cit.>. We proceed as for LJ, training a sequence of models on increasing values of λ, using cold initialization, and continuing the training after selecting the features. The resulting validation errors are plotted against the number of selected features in figure <ref>(a). We find 10 features to be a good compromise between few features and low error. The set is again evaluated by training a set of four models on the selected features, with different initializations; likewise for the starting features and the features selected with CUR and PC. The test errors are shown in figure <ref>(b), along with the number of angular features selected and the number of timesteps per second in a benchmark simulation identical to the one for LJ. We see a significant increase in computational speed for the feature-selected potential, at a relatively small increase in error. For this system, CUR and PC seem to perform equivalently. In particular, the CUR features perform much better than in the LJ case, presumably because the method is asked to select more features and so is not forced to compromise on the radial resolution. The features selected with AGL, on average, outperform those chosen by the filters, although there is not a large difference in this case, especially considering the deviations.

The feature sets are visualized in figure <ref>(c). We observe, somewhat differently from the LJ case, a large overlap between the methods, and presumably the one or two features that differ between each set are not enough to cause a significant difference in the test error. In particular, we notice that each method selected each shifted radial SF. Feature number 6 in figure <ref>(c), also selected by each method, is identical to the shifted ones, but centered on r_ij=0. Taken together, these features can be argued to cover the entire range of interatomic distances up to the cutoff radius, allowing for a rough representation of the RDF. This preference for shifted radial SFs has also been indicated elsewhere in the literature <cit.>.

As in the case of LJ, there is here a preference towards radial features, with only two angular ones being chosen. We suggest a physical explanation for this preference for radial features, noting the tendency of Al to adopt a close-packed short-range order and to maximize the number of nearest neighbors, due to its weakly directional sp-bonding-type electronic structure. While the 10 features selected are a sensible choice based on the training errors reported in figure <ref>(a), the threshold is not rigorous. From the RMSE values obtained, a selection of 8 or even only 7 features could also be argued for. Hence we also show in figure <ref>(a) the errors of models trained with the best-performing set of 7 features, as well as corresponding sets selected with PC and CUR. A noticeable increase in the test error is observed, with only a modest improvement in benchmark performance, primarily due to the additionally discarded angular SF. We note that the CUR features show a significant reduction in performance, reminiscent of what was observed for LJ. In the present case, this is presumably due to the deselection of both features number 6 and 7 by CUR, seen in figure <ref>(b).
As this is the same number of features as selected for LJ, one can also compare the two sets of features. We note that the selected angular feature is the same in both cases. As a test, we train a model for Al with the set of features selected by AGL for LJ, denoted AGL (LJ) in figure <ref>(a). Interestingly, the LJ set seems to slightly outperform the other 7 features selected by AGL.

§.§ Boron

We turn to boron as a stringent test system. Due to the complicated structure of boron, induced by strong covalent directional bonding <cit.>, we expect this to be a significantly more difficult task, requiring a more complex set of features compared to Al and LJ. For our initial set of descriptors we use a set of 12 radial SFs and 48 angular SFs, with a cutoff of 5.3 Å, corresponding roughly to the outer edge of the third neighbor shell. This relatively wide cutoff was chosen in order to more adequately take into account the medium-range structure known to appear in boron, primarily the open icosahedra and the bonds between them <cit.>. Furthermore, to allow for a potentially more complex mapping, we use a larger network than for LJ and Al, with two layers of 25 hidden nodes each, providing a slight improvement in error compared to smaller network sizes.

As for the previous systems, figure <ref>(a) shows the validation RMSE as a function of the selected features. In this case the best-performing model, apart from the one with the full set of features, is the one with 16 features. We select these 16 features, and again train a set of four models to test, with the results shown in figure <ref>(b). In this case not only did we select a larger number of features, but the majority of the features selected were of the angular type. Unlike in the previous cases, we also observe an inability of the filter methods to adequately select features for this system, with a significant increase in error for the sets selected with PC and CUR. In fact, we were unable to perform even a benchmark simulation using the models trained on the PC and CUR sets, as the simulations became unstable. For the AGL set there is a noticeable increase in the error compared to the full set of features, but this comes with a significant improvement in the computational performance of the potential.

The selected features for each set are shown in figure <ref>(c). We see, as for Al, that the shifted radial SFs are seemingly the most important radial ones. For the angular features the picture is, however, not very clear. In particular, it is not a priori evident why the features selected by CUR and PC lead to worse performance than the ones selected by AGL. As for the LJ case, we tried adding individual features from the set selected by AGL to the set selected by CUR, to see if this improves the performance. Adding features number 29 and 31 to the CUR set changes the error to 8.95 meV/atom and 9.68 meV/atom, respectively; neither allowed for stable simulations. A model with both of these added did not reduce the error any further, and was likewise unstable. None of the other features we tried adding managed to reduce the error below 10 meV/atom.

§.§ Validation of the MLIP models

While looking at the RMSE of the models on a held-out set of test configurations is useful, the true test of the quality of an MLIP is in simulations and the accurate prediction of physical quantities.
For each set of features we pick out the model with the best test error and perform an NVT simulation, aiming to obtain the diffusion constant for comparison to ab initio results, or to the reference potential in the case of LJ. Each simulation uses a box of 256 atoms, in order to match the finite-size effects in the reference systems. The temperature is 1500 K for Al, 2600 K for B, and 1.5 ϵ/k_B for LJ, and in each case the simulation box is chosen such that the density is the same as in the respective reference system. For Al and LJ the system is initialized in an fcc crystal configuration and evolved until it melts, while for B we initialize with a liquid configuration taken from the AIMD dataset. Each simulation consists of 10 measurements of 1M timesteps, each preceded by a 10^5-timestep equilibration after randomly reassigning the atomic velocities. Over these trajectories we calculate the Mean Square Displacement (MSD), shown in figure <ref> for each system. In none of the cases do we observe any clear difference induced by the feature set. To give a clearer picture, the diffusion constants are extracted from the MSD using the Einstein relation. The predicted diffusion constants for each feature set, and the reference values, are shown in table <ref>. In all cases the diffusion constants are within acceptable bounds of the reference values, with no discernible effect resulting from the difference in the number of features. We note that, as with the benchmark, the models trained for B with the filter methods did not allow for successful simulations, as they were too unstable. The same held true for the features selected for LJ using the CUR method.

From these simulations we also extract the RDF, shown for each system in figure <ref>. One point that should be stressed here is that our aim is to evaluate the feature selection, rather than how well any of the models reproduce the reference system results. For both the Al and LJ cases we observe very little difference between the different NNP models, as both the initial large feature set and the reduced sets following feature selection reproduce the reference results fairly well. In the case of B, already the initial large feature set turns out to be not powerful enough to reproduce the boron RDF faithfully. But the feature selection by AGL does not deteriorate the agreement further, indicating that no significant performance is lost; the feature selection can only be as good as the initial starting point. The failure to reproduce the AIMD RDF emphasizes that boron is a challenging system for training based on Behler-Parrinello SFs with potential energies as targets. Irrespective of this, the agreement with the AIMD MSD is very good also for the reduced feature set. We rationalize this as a result of the dynamics in boron not being predominantly determined by the radial structure encoded in the angle-averaged RDF. This additionally points to the possibility that the standard BP SFs are not well suited for this system, as previously suggested in the literature <cit.>.

§.§ Confounding Features

Filter methods such as PC and CUR aim to reduce the number of features by looking for subsets that minimize the overlap between those features that are kept. However, this makes them potentially vulnerable to confounding features that are uncorrelated with the relevant inputs but are by themselves irrelevant. This requires the initial set of features one starts with to be carefully chosen, in order to minimize irrelevant input.
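To anticipate the experiment described next, the following sketch shows one way of generating such confounding inputs: Gaussian noise columns whose means and variances mimic those of the real descriptors, kept nonnegative by taking absolute values, and sampled once per atomic environment. The function name and array shapes are illustrative assumptions.

\begin{verbatim}
import numpy as np

def add_fake_features(G, n_fake=10, seed=0):
    # G: (n_environments, n_features) matrix of real descriptor values.
    rng = np.random.default_rng(seed)
    idx = rng.choice(G.shape[1], size=n_fake)        # features to mimic
    mu, sigma = G.mean(axis=0)[idx], G.std(axis=0)[idx]
    # Sampled once per atom and configuration; values do not vary
    # between training epochs.
    fakes = np.abs(rng.normal(mu, sigma, size=(G.shape[0], n_fake)))
    return np.hstack([G, fakes])
\end{verbatim}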
In a system with complex structure, however, such a careful choice might not be obvious to achieve. We demonstrate in the following that AGL performs much better in the presence of irrelevant input. For this purpose we return to the LJ system, modifying the starting feature set by adding 10 new features consisting of random noise drawn independently from a set of Gaussian distributions, with means and variances chosen to mimic those of the real features, as in the sketch above. We note that these fake features were sampled once for each atom and configuration, and as such the values do not vary between epochs. To ensure these fake features are nonnegative, like the real ones, we only work with their absolute values. While the situation considered here is rather implausible in a practical setting, where features are unlikely to be truly uncorrelated with the potential energy, it could potentially have implications in situations where there is noise in the training dataset.

Having nothing to do with the real data-generating process, these features are truly independent of the other features as well as of the target energy. Ideally these features should be discarded, but as they are independent of the real features as well as of each other, we expect that neither PC nor CUR will be able to correctly discard them. This is indeed the case, as illustrated in figure <ref>, showing the features selected by AGL, PC, and CUR, as well as, for comparison, the set selected by AGL in the absence of fakes. The PC method clearly did not succeed, as beyond the manually selected feature it only picked out fake features. With CUR some real features were selected, indicating that the method might be more robust than PC in this regard, but it still selected more fakes than real features. In contrast to the filters, the AGL managed to discard the fakes and select a set of features. An interesting observation is that the set selected by the AGL is slightly different from that selected in the absence of fakes. In fact, the error obtained on this set was 6.97×10^-3 ϵ/atom, below that of the set selected in the absence of fakes. This is reminiscent of machine learning methods where the deliberate addition of noise helps increase training performance.

§ CONCLUSION AND OUTLOOK

We have applied the AGL as an embedded feature selection method for choosing atomic features in HDNNPs. This allows for selecting features as part of the training process, taking into account the action of the features in the resulting potential during the selection. In order to evaluate the method, we have compared it to previously used unsupervised filter methods that only take into account the features themselves, aiming to minimize redundancy in the description of the local atomic environment. We find, for three test systems ranging from a simple LJ system to the highly complicated and directional boron system, that the AGL performs as well as, or better than, the other methods. This we consider the main outcome of this work. By utilizing a method that takes into account the NNP predictions, we can reduce the number of atomic features further than methods that only take into account the features themselves.

While we have applied our method to training on energies only, the next step would be to apply the method to the more common setting of also fitting forces during training. A natural question in this case is whether the inclusion of forces changes the features that are selected.
It would also be a natural direction to use the method for different types of descriptors. Although the BP SFs are widely used and have seen plenty of success, many alternative descriptors have been developed since their introduction. This is especially relevant considering that even our full set of features has difficulty reproducing the RDF of boron, which could be an indication that the SFs are not ideally suited for this system. One can further consider applying the method to multicomponent systems, where a traditional SF approach sees a combinatorial increase in the number of features, which could potentially be counteracted by feature selection. In view of recent concerns regarding the stability of MLIPs <cit.>, it would also be interesting to study the extent to which input dimensionality affects the stability of models, and whether this can be alleviated by careful feature selection, or indeed by regularization in general.

§ CODE AND DATA AVAILABILITY

The authors will make the data of this study available upon reasonable request. The code used in this article is available at <https://github.com/JohannesSandberg/HDNNP-AGL>

§ ACKNOWLEDGMENTS

We acknowledge the CINES and IDRIS under Project No. INP2227/72914, as well as CIMENT/GRICAD, for computational resources. This work was performed within the framework of the Centre of Excellence of Multifunctional Architectured Materials “CEMAM” ANR-10-LABX-44-01, funded by the “Investments for the Future” Program. This work has been partially supported by MIAI@Grenoble Alpes (ANR-19-P3IA-0003). JS acknowledges funding from the German Academic Exchange Service through DLR-DAAD fellowship grant number 509. We thank Gerhard Jung for suggesting tests with random features.

Supplementary Information

§ SYMMETRY FUNCTION PARAMETERS

Tables <ref>, <ref> and <ref> contain the symmetry function parameters used for the Lennard-Jones system, Al, and B, respectively. Definitions of the symmetry functions, and their parameters, are found in the main text.

§ TEST ERRORS

Tables <ref>, <ref> and <ref> show the average test errors of the models trained for each feature set discussed in the main text.
Qiang Fu^⋆ Ashia Wilson^†

^⋆School of Mathematics, Sun Yat-sen University ^†Department of Electrical Engineering and Computer Science, MIT

================================================================================================

We propose a new method called the N-particle underdamped Langevin algorithm for optimizing a special class of non-linear functionals defined over the space of probability measures. Examples of problems with this formulation include training neural networks in the mean-field regime, density estimation, and kernel Stein discrepancy minimization. Our algorithm is based on a novel space-time discretization of the mean-field underdamped Langevin dynamics, for which we provide a new, fast mixing guarantee. In addition, we demonstrate that our algorithm converges globally in total variation distance, bridging the theoretical gap between the dynamics and its practical implementation.

§ INTRODUCTION

Many problems in data science can be posed as an entropy-regularized mean-field optimization (EMO) problem, described by

min_μ∈𝒫_2(ℝ^d) F(μ)+λEnt(μ),

where λ>0; 𝒫_2(ℝ^d) is the space of probability measures on ℝ^d with finite second moment; F is a non-linear convex functional and Ent(μ)=∫μlogμ is the negative entropy of μ. Classical applications include training neural networks in the mean-field regime <cit.>, density estimation via maximum mean discrepancy minimization <cit.>, and kernel Stein discrepancy minimization <cit.>.

A popular approach for finding a minimizer of an EMO problem is based on the mean-field Langevin dynamics. When the problem is convex, <cit.> show that the mean-field Langevin dynamics converges to the minimizer asymptotically; and when the problem satisfies a uniform logarithmic Sobolev inequality, several works have established exponentially fast convergence <cit.>. The mean-field Langevin dynamics, however, cannot be implemented, and bridging this gap requires both space and time discretizations of the dynamics.

A time-discretization of the mean-field Langevin dynamics was analyzed by <cit.>, who extend the interpolation argument introduced by <cit.> to a non-linear Fokker-Planck equation. They establish the non-asymptotic convergence of the time-discretized dynamics in the energy gap. A space-discretization was studied by <cit.>, who show that the finite-particle approximation to the density of the mean-field Langevin dynamics (referred to as a uniform-in-time propagation of chaos) converges exponentially fast, with a bias related to the number of particles. More practically, <cit.> analyze a space-time discretization of the mean-field Langevin dynamics and establish the non-asymptotic convergence of the resulting algorithm to a biased limit related to both the particles and the stepsize. Their analysis applies to several important learning problems and improves the results of the standard gradient Langevin dynamics.

A natural candidate method for finding solutions to EMO problems faster is the mean-field underdamped Langevin dynamics. These dynamics resemble several techniques for adding momentum to gradient descent in optimization, many of which are known to result in provably faster convergence in a variety of settings <cit.>.
Moreover, training neural networks using momentum-based gradient descent is considered effective in several applications <cit.>. <cit.> and <cit.> confirm that even naive space-time discretizations of the mean-field underdamped Langevin dynamics have impressive empirical performance when compared to various proposed discretizations of the mean-field Langevin dynamics on applications such as the training of neural networks. In an effort to better understand the behavior of the mean-field underdamped Langevin dynamics, <cit.> establish exponential convergence of these dynamics and their finite-particle system under assumptions that are easy to verify for specific objectives in the training of neural networks. In addition, <cit.> implement an Euler-Maruyama discretization of the finite-particle system and show that it is empirically faster than the space-time discretization of the mean-field Langevin dynamics in a variety of settings, such as training a toy neural network model. However, space-time discretizations of the mean-field underdamped Langevin dynamics are not yet theoretically well understood. Furthermore, the rate obtained by <cit.> for the dynamics does not resemble an accelerated rate when compared with <cit.> and <cit.>, who analyze the overdamped setting.

§.§.§ A summary of our work

Despite recent results, a remaining question is whether we can theoretically characterize the behavior of an implementable algorithm based on discretizing the mean-field underdamped dynamics. If there is a limiting bias, how does it scale with the number of particles and other problem parameters? Ideally, this characterization would give a sharper rate of convergence than <cit.>'s space-time discretization of the mean-field Langevin dynamics, suggesting there might be a worst-case advantage to adding momentum in the mean-field setting.

In this paper, we introduce a fast implementable algorithm for solving EMO problems based on the mean-field underdamped Langevin dynamics. We prove that our proposed algorithm converges to a small limiting bias under a set of benign assumptions. In particular, our contributions are summarized as follows.

* We sharpen the convergence guarantees for both the mean-field underdamped Langevin dynamics and its space-discretization under the same set of assumptions utilized by <cit.> (summarized in Theorems <ref> and <ref>). In particular, we sharpen the dependence on the smoothness constant.

* We introduce a novel space-time discretization of the mean-field underdamped Langevin dynamics, which we call the N-particle underdamped Langevin algorithm (Algorithm <ref>). We show the global convergence of our discretization in terms of the total variation (TV) distance under additional assumptions. Importantly, our results improve on <cit.>'s analysis of the space-time discretization of the mean-field Langevin dynamics, and our additional assumptions are satisfied by the same examples introduced by <cit.>; these results provide theoretical characterizations of momentum methods in several real-world applications, including training neural networks and density estimation.

Organization The remainder of this work is organized as follows. Section <ref> presents the formal definitions and assumptions as well as related work. Section <ref> proposes our main methods and theoretical results. Section <ref> discusses the application of our methods to some classical problems. Section <ref> describes our numerical experiments verifying the effectiveness of our proposed methods.
§ PRELIMINARIES

We begin by introducing some general notation that will be used throughout this work.

§.§ Notation

Throughout, μ,π∈𝒫_2(ℝ^d), and we adopt the following standard notation: ‖μ-π‖_𝖳𝖵 ≜ sup|μ(A)-π(A)| denotes the TV distance between μ and π, where the sup is taken over Borel measurable sets A⊂ℝ^d; W_2(μ,π) is the L^2-Wasserstein distance between μ and π; 𝖪𝖫(μ‖π) ≜ ∫μlog(μ/π) is the KL divergence between μ and π; ‖·‖ denotes the Euclidean norm; 𝖥𝖨(μ‖π) ≜ 𝔼_μ[‖∇log(μ/π)‖^2] is the relative Fisher information between μ and π; and, more generally, 𝖥𝖨_S(μ‖π) ≜ 𝔼_μ[‖S^1/2∇log(μ/π)‖^2] for a positive definite symmetric matrix S. Throughout, B_t denotes a d-dimensional Brownian motion; δ_x denotes the Dirac measure at x∈ℝ^d; a∧b ≜ min{a,b}; and m_2^2 ≜ 𝔼_μ_*[‖·‖^2]. We write a=O(b) and a≲b if there exists C>0 such that a≤Cb; a=Θ(b) and a≍b if there exist c,C>0 such that cb≤a≤Cb; and a=Θ̃(b) if a=Θ(b) up to logarithmic factors. ∂_i denotes the partial derivative w.r.t. the i-th variable, and 𝖤𝗇𝗍_π(f^2) ≜ 𝔼_π[f^2 log(f^2/𝔼_π[f^2])]. Without loss of generality, we specify λ=1 and assume the objectives we introduce (e.g. (<ref>)) are non-negative (we can always add a constant). The functional derivative of F is denoted by δF(μ)/δμ:𝒫_2(ℝ^d)×ℝ^d→ℝ, where dF(μ+ϵ(π-μ))/dϵ|_ϵ=0=∫δF/δμ(μ)(π-μ), and D_μF(μ,x) ≜ ∇δF/δμ(μ,x):𝒫_2(ℝ^d)×ℝ^d→ℝ^d is the intrinsic derivative of F.

§.§ Background

The gradient flow of the EMO functional in the 2-Wasserstein metric is called the mean-field Langevin dynamics,

dx_t = -D_μF(μ_t,x_t)dt + √(2λ)dB_t.

<cit.> show that the minimizer

μ_*(x) ∝ exp(-1/λ·δF/δμ(μ_*,x))

corresponds to solutions of an EMO problem under mild conditions. We study the mean-field underdamped Langevin dynamics,

dx_t = v_t dt,
dv_t = -γ v_t dt - D_μF(μ_t^x,x_t)dt + √(2λγ)dB_t,

and show a new sharp mixing-time bound; here, γ>0 is the damping coefficient, μ_t(x,v)=Law(x_t,v_t), and μ_t^x ≜ Law(x_t)=∫μ_t(x,v)dv is the x-marginal of μ_t. The limiting distribution of these dynamics is the solution to

min_μ∈𝒫_2(ℝ^2d) F(μ^x)+λEnt(μ)+∫ 1/2‖v‖^2 μ(dxdv),

where a momentum term is added to the EMO problem. To obtain the solution of the EMO problem, it suffices to marginalize the minimizer of (<ref>), given by

μ_*(x,v) ∝ exp(-1/λ·δF/δμ(μ^x_*,x)-1/2λ‖v‖^2).

This work also sharpens the analysis of the space-discretization of these dynamics introduced by <cit.>, which we refer to as the N-particle underdamped Langevin dynamics,

dx_t^i = v_t^i dt,
dv_t^i = -γ v_t^i dt - D_μF(μ_x_t,x^i_t)dt + √(2γ)dB^i_t, i=1,...,N,

where μ_x ≜ 1/N∑_i=1^N δ_x^i, μ^i ≜ Law(x^i,v^i), and (B_t^i)_i=1^N are independent d-dimensional Brownian motions. While interesting, a time-discretization of these dynamics is necessary to run the method on a machine. We explore two time-discretization techniques in this paper.

To motivate our algorithm as a time-discretization of the mean-field underdamped dynamics, we review discretizations of the underdamped Langevin dynamics (ULD), which is the special case where F(μ)=∫V(x)μ(dx) is a linear functional of μ:

dx_t = v_t dt,
dv_t = -γ v_t dt - ∇V(x_t)dt + √(2γ)dB_t.

The ULD was first studied in <cit.> and <cit.>. Under functional inequalities such as Poincaré's inequality, convergence guarantees for the ULD were first shown by <cit.> and <cit.> using a hypocoercivity approach, but without capturing the acceleration phenomenon compared to the overdamped Langevin dynamics. <cit.> are the first to achieve acceleration of the ULD with convex V in χ^2-divergence. They prove that when 𝒞_𝖫𝖲𝖨≪1, the decaying rate of the ULD is O(√(𝒞_𝖫𝖲𝖨)), whereas the decaying rate of the overdamped Langevin dynamics is O(𝒞_𝖫𝖲𝖨).
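For intuition, both mean-field dynamics above admit naive particle implementations in which the empirical measure of the particles stands in for μ_t. The following is a minimal sketch of one Euler-Maruyama step of the (overdamped) mean-field Langevin dynamics; the callable grad_F and the array shapes are assumptions of this illustration.

\begin{verbatim}
import numpy as np

def mfld_step(x, grad_F, h, lam, rng):
    # x: (N, d) particles approximating mu_t; grad_F(x, i) should return
    # D_mu F evaluated at (mu_x, x^i), with mu_x the empirical measure.
    drift = np.stack([grad_F(x, i) for i in range(x.shape[0])])
    noise = rng.standard_normal(x.shape)
    return x - h * drift + np.sqrt(2.0 * lam * h) * noise
\end{verbatim}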
A discretization of the ULD is referred to as an underdamped Langevin Monte Carlo (ULMC) algorithm. There are various discretization schemes proposed for implementing ULMC. The Euler-Maruyama (EM) discretization of the ULD,

dx_t = v_k dt, dv_t = -γ v_k dt - ∇V(x_k)dt + √(2γ)dB_t,

which integrates to the update

x_k+1 = x_k + h v_k, v_k+1 = (1-γh)v_k - h∇V(x_k) + √(2γh)ξ_k,

for stepsize h, ξ_k∼𝒩(0,I_d) and t∈[kh,(k+1)h], has been well studied and shown to incur the largest error <cit.>. Recently, however, several works derive the convergence of ULMC in Wasserstein distance <cit.>, KL divergence <cit.> and Rényi divergence <cit.> using a more precise discretization scheme, often referred to as the left point method (LPM) <cit.> and given by

dx_t = v_t dt, dv_t = -γ v_t dt - ∇V(x_k)dt + √(2γ)dB_t,

for t∈[kh,(k+1)h], which is a system of linear stochastic differential equations (SDEs) that can be integrated exactly. Unlike the EM scheme, LPM only freezes the drift term in each small interval. <cit.> show that LPM incurs a weaker stepsize restriction when compared with EM. Other discretization schemes are proposed in <cit.>, whose convergence guarantees are obtained in Wasserstein distance without achieving better dependence on terms such as the smoothness and LSI constants. In this work, we show that LPM can be applied to discretize both <ref> and <ref> to produce an implementable algorithm that converges quickly.

§.§ Definitions and assumptions

For each method considered, we study its behavior in settings where the objective satisfies a Log-Sobolev Inequality (LSI).

A measure π∈𝒫_2(ℝ^d) satisfies a Log-Sobolev Inequality (LSI) with parameter 𝒞_𝖫𝖲𝖨>0 if for any μ∈𝒫_2(ℝ^d),

𝖪𝖫(μ‖π) ≤ 1/(2𝒞_𝖫𝖲𝖨) 𝔼_μ[‖∇log(μ/π)‖^2],

or, equivalently, if for all smooth f:ℝ^d→ℝ, 𝖤𝗇𝗍_π(f^2) ≤ 2𝔼_π[‖∇f‖^2]/𝒞_𝖫𝖲𝖨.

We introduce three essential assumptions on F, given by <cit.>, for establishing the non-asymptotic convergence of <ref> and <ref>.

[Convexity] The functional F is convex; that is,

F(tμ_1+(1-t)μ_2) ≤ tF(μ_1)+(1-t)F(μ_2)

for any μ_1,μ_2∈𝒫_2(ℝ^d) and t∈[0,1].

[ℒ-smoothness] The functional F is ℒ-smooth; that is, the intrinsic derivative exists and satisfies

‖D_μF(μ_1,x_1)-D_μF(μ_2,x_2)‖ ≤ ℒ(W_2(μ_1,μ_2)+‖x_1-x_2‖)

for any μ_1,μ_2∈𝒫_2(ℝ^d) and x_1,x_2∈ℝ^d. We assume 1≤ℒ<∞ throughout this paper.

[LSI] The proximal Gibbs distribution μ̂(x,v)=exp(-δF/δμ(μ^x,x)-1/2‖v‖^2)/Z(μ) satisfies LSI with constant 0<𝒞_𝖫𝖲𝖨≤1 uniformly in μ.

Note that if μ̂^x(x)∝exp(-δF/δμ(μ^x,x)) satisfies LSI with constant ρ>0, Assumption <ref> is satisfied with the choice 𝒞_𝖫𝖲𝖨=1/2ρ. We refer our readers to <cit.> and <cit.> for the verification of Assumptions <ref>-<ref> in a variety of settings, including the neural network setting and density estimation.

Beyond Assumptions <ref>-<ref>, we introduce four additional assumptions that will be required to show the global convergence of our space-time discretization.

[Bounded Gradient] The intrinsic derivative of F satisfies, for some ℒ>0,

‖D_μF(μ,x)‖ ≤ ℒ(1+‖x‖).

Notably, <cit.> assume that F can be decomposed as F(μ)=U(μ)+𝔼_x∼μ[r(x)], where ‖D_μU(μ,x)‖≤R for any μ∈𝒫(ℝ^d), x∈ℝ^d, and where r(x) is λ_2-smooth with ∇r(0)=0, in order to establish the convergence of their space-time discretization of <ref>. Thus, their assumption,

‖D_μF(μ,x)‖ ≤ ‖D_μU(μ,x)‖+‖∇r(x)‖ ≤ R+λ_2‖x‖,

implies Assumption <ref> with the choice ℒ≥max{R,λ_2}.
The next three assumptions are needed for bounding the second moments of the iterates (x_t,v_t) and (x_t^i,v_t^i) along <ref> and <ref>, which is crucial for establishing our discrete-time convergence.

For all μ∈𝒫_2(ℝ^d), the proximal Gibbs distribution (<ref>) satisfies 𝔼_μ̂[‖·‖^2]=O(d).

The functional F and the initialization measure μ_0 satisfy F(μ^x_0)=O(ℒd).

The functional F and the initialization μ_0^N satisfy 𝔼_x_0∼μ_0^N[F(μ_x_0)]=O(ℒd).

While Assumptions <ref>-<ref> are sufficient, they may not be necessary for the iterates to be bounded. Nevertheless, we argue these assumptions are not too restrictive by providing several examples satisfying them in Section <ref>. For problems (<ref>) and (<ref>), if we choose r(x)=‖x‖^2/2, there exists ℒ>0 such that the activation function satisfies |h(x;a)|≤√(ℒ) (also proposed in <cit.>); furthermore, if the convex loss function l is quadratic or satisfies |∂_1 l|≤√(ℒ) (also proposed in <cit.>), F will meet Assumption <ref> with λ'≲(2π)^d/(d-2)exp(-8ℒ/(d-2)). Finally, if in addition we assume l is √(ℒ)-Lipschitz and choose μ_0=𝒩(0,I_2d) and μ_0^N=𝒩^⊗N(0,I_2d), Assumptions <ref> and <ref> will be satisfied. We defer other examples and the related verification of these examples to Appendix <ref>.

§.§ Related work

Techniques for establishing the continuous-time convergence of mean-field underdamped systems and their space-discretizations (N-particle systems) are centered around coupling and hypocoercivity, the latter also known as the functional approach <cit.>. The coupling approach generally constructs a joint probability measure of the mean-field and N-particle systems to make an analytic comparison between them. Based on coupling approaches, <cit.> obtain the continuous-time convergence of underdamped systems with mean-field interaction and their space-discretizations. <cit.> study the ergodicity of <ref> without a quantitative rate. In the setting of small mean-field dependence, <cit.> obtain exponential contraction using the coupling techniques of <cit.>. However, small mean-field dependence is not satisfied in many applications. The functional approach (hypocoercivity) generally constructs appropriate Lyapunov functionals and studies how their values change along the dynamics. Based on hypocoercivity, <cit.> establish the exponential convergence of mean-field underdamped systems and their propagation of chaos by constructing a suitable Lyapunov functional. Nevertheless, most of the works above only consider specific settings of <ref>, such as singular interactions and two-body interactions, which restricts their application to real-world problems. Setting γ=1, <cit.> establish the exponential convergence of <ref> and <ref> using the hypocoercivity technique of <cit.>. Under Assumptions <ref>-<ref>, they derive convergence without restricting the size of the interactions, which subsumes many of the settings above. Notably, the techniques of our Theorems <ref> and <ref> are adopted from <cit.>, based on hypocoercivity, where we consider other choices of γ to improve the decaying rates of <ref> and <ref> established in <cit.>.

§ N-PARTICLE UNDERDAMPED LANGEVIN ALGORITHM

Our first step is to establish the global convergence of the mean-field underdamped Langevin algorithm (MULA),

dx_t = v_t dt, dv_t = -γ v_t dt - D_μF(μ^x_k,x_k)dt + √(2γ)dB_t,

for stepsize h, t∈[kh,(k+1)h] and k=1,...,K.
Note that MULA is a time-discretization of <ref>, where each step requires integrating from t=kh to t=(k+1)h for stepsize h. MULA is intractable to implement in most instances, given that we do not often have access to μ_k^x per iteration. This prompts us to consider the particle approximation, which uses 1/N∑_i=1^N δ_x_k^i to approximate μ_k^x:

dx_t^i = v^i_t dt, dv_t^i = -γ v^i_t dt - D_μF(μ_x_k,x^i_k)dt + √(2γ)dB^i_t,

for stepsize h, t∈[kh,(k+1)h], i=1,...,N, k=1,..., and μ_x_k ≜ 1/N∑_i=1^N δ_x^i_k. Integrating the particle system (<ref>) from t=kh to t=(k+1)h for stepsize h and i=1,...,N, we obtain our proposed Algorithm <ref>, which we refer to as the N-particle underdamped Langevin algorithm (NULA). The update parameters of Algorithm <ref>, φ_0, φ_1, φ_2 and (B_k^i)^x, (B_k^i)^v, are functions of γ and the stepsize h. Thus we need to specify the values of γ and h to compute the update parameters, and to initialize (x_0, v_0), before running the algorithm.

§.§ Convergence analysis

We begin by leveraging Theorem 2.1 and Theorem 2.3 from <cit.>, analyzing the continuous-time dynamics using entropic hypocoercivity. Throughout, let

S = ([1/ℒ, 1/√(ℒ); 1/√(ℒ), 2])⊗I_d.

Both proofs utilize a Lyapunov functional similar to the one introduced by <cit.>. In particular, Theorem <ref> is established by showing that the functional

ℰ(μ) ≜ ℱ(μ)+𝖥𝖨_S(μ‖μ̂), where ℱ(μ) ≜ F(μ^x)+∫ 1/2‖v‖^2 μ(dxdv)+Ent(μ),

is decaying along the trajectory of the dynamics. Our second Theorem <ref> establishes the convergence of the N-particle system <ref>. Using the notation x=(x^1,...,x^N), v=(v^1,...,v^N) and μ^N=⊗_i=1^N μ^i, we take μ_*^N to be the limiting distribution of <ref>, satisfying μ_*^N∝exp(-NF(μ_x)-1/2‖v‖^2). With these definitions, we obtain our guarantee by showing that the functional

ℰ^N(μ^N) ≜ ℱ^N(μ^N)+𝖥𝖨^N_S(μ^N‖μ_*^N), where 𝖥𝖨^N_S(μ^N‖μ_*^N) ≜ ∑_i=1^N 𝔼_μ^N[‖S^1/2∇_i log(μ^N/μ_*^N)‖^2] and ℱ^N(μ^N) ≜ ∫ NF(μ_x)+1/2‖v‖^2 μ^N(dxdv)+Ent(μ^N),

is decaying along the trajectory of <ref>, where ∇_i = (∇_x^i, ∇_v^i)^𝖳.

If Assumptions <ref>-<ref> hold and μ_0 has finite second moment, finite entropy and finite Fisher information, then the law μ_t of <ref> with γ=√(ℒ) and ℰ defined in (<ref>) satisfy

ℱ(μ_t)-ℱ(μ_*) ≤ (ℰ(μ_0)-ℰ(μ_*))exp(-𝒞_𝖫𝖲𝖨/(3√(ℒ)) t).

If Assumptions <ref>-<ref> hold, μ_0^N has finite second moment, finite entropy, finite Fisher information, and N≥(ℒ/𝒞_𝖫𝖲𝖨)(32+24ℒ/𝒞_𝖫𝖲𝖨), then the joint law μ_t^N of <ref> with γ=√(ℒ) and ℰ^N defined in (<ref>) satisfy

1/N ℱ^N(μ_t^N)-ℱ(μ_*) ≤ ℰ_0^N/N exp(-𝒞_𝖫𝖲𝖨/(6√(ℒ)) t)+ℬ/N,

where ℬ=60ℒd/𝒞_𝖫𝖲𝖨+36ℒ^2d/𝒞^2_𝖫𝖲𝖨 and ℰ_0^N ≜ ℰ^N(μ_0^N)-Nℰ(μ_*).

Theorem <ref> implies the non-uniform-in-N convergence of <ref>, which incorporates a bias term involving N due to the particle approximation. Our proof technique mirrors that of <cit.>; however, we demonstrate that faster convergence and a smaller bias in Theorem <ref> can be achieved by choosing γ=√(ℒ) instead of γ=1 (see Table <ref>).

Our main theorems analyze the convergence of the discrete-time processes and their mixing-time guarantees for generating an ϵ-approximate solution in TV distance with specific choices of the initialization, damping coefficient γ, and stepsize h.

In addition to the assumptions specified in Theorem <ref>, let Assumptions <ref>-<ref> hold. Denote by μ̅_K the law of (x_K,v_K) of MULA and let κ ≜ ℒ/𝒞_𝖫𝖲𝖨. Then in order to ensure ‖μ̅_K-μ_*‖_𝖳𝖵≤ϵ, it suffices to choose γ=√(ℒ), μ̅_0=𝒩(0,I_2d), and

h=Θ(𝒞_𝖫𝖲𝖨^3/2 ϵ/(ℒ^2 d^1/2)), K=Θ(ℒ^5/2 d^1/2/(𝒞_𝖫𝖲𝖨^5/2 ϵ)).

If we further suppose that Assumption <ref> holds, it suffices to choose

h=Θ(𝒞_𝖫𝖲𝖨 ϵ/(ℒ^3/2 d^1/2)), K=Θ(κ^2 d^1/2/ϵ).

A similar guarantee can be stated for the N-particle system (<ref>), with the additional requirement that the number of particles scales with the dimension of the problem and the problem parameters.

In addition to the assumptions specified in Theorem <ref>, let Assumptions <ref>, <ref> and <ref> hold.
Denote by μ̅^i_K the law of (x_K^i,v_K^i) of NULA for i=1,...,N and let κ ≜ ℒ/𝒞_𝖫𝖲𝖨. Then in order to ensure 1/N∑_i=1^N‖μ̅_K^i-μ_*‖_𝖳𝖵≤ϵ, it suffices to choose γ=√(ℒ), μ̅^N_0=𝒩(0,I_2Nd), h=Θ(𝒞_𝖫𝖲𝖨 ϵ/(ℒ^3/2 d^1/2)), K=Θ(κ^2 d^1/2/ϵ), and the number of particles N=Θ(κ^2 d/ϵ^2).

§.§ Proof sketches

For the continuous-time results, we outline the proof of Theorem <ref> (and analogously Theorem <ref>) in this section to provide intuition for how choosing γ=√(ℒ) can improve the decaying rate of <ref>. We begin with a review of some notation for hypocoercivity from <cit.>:

A_t=∇_v, C_t=∇_x, Y_t=(A_t u_t, A_t^2 u_t, C_t u_t, C_t A_t u_t)^𝖳,

where u_t=log(μ_t/μ̂_t). Inheriting the analysis of Theorem 2.1 in <cit.> and Lemma 32 in <cit.>, we show that for a general γ, the Lyapunov functional (<ref>) with S=[s_ij]⊗I_d∈ℝ^2d×2d is decreasing along <ref>, satisfying

d/dt ℰ(μ_t) ≤ -Y_t^𝖳𝒦Y_t,

where s_11=c, s_12=s_21=b, s_22=a and 𝒦 is an upper triangular matrix with diagonal elements (γ+2γa-4ℒb, 2γa, 2b, 2γc). To ensure S≻0 and the right-hand side of (<ref>) negative, the criteria for choosing positive constants a, b, c are ac>b^2 and 𝒦≻0. If we specify γ=1, we can choose a=c=2ℒ and b=1, satisfying the criteria. Then we obtain λ_min(𝒦)=1 and

d/dt ℰ(μ_t) ≤ -λ_min(𝒦)Y_t^𝖳Y_t ≤ -𝒞_𝖫𝖲𝖨(ℱ(μ_t)-ℱ(μ_*)) - 1/(2λ_max(S)) 𝖥𝖨_S(μ_t‖μ̂_t) ≤ -𝒞_𝖫𝖲𝖨/(6ℒ) (ℰ(μ_t)-ℰ(μ_*)).

Applying Grönwall's inequality leads to the decaying rate O(𝒞_𝖫𝖲𝖨/ℒ) of <ref> (γ=1) in <cit.>. If we specify γ=√(ℒ), we can choose b=1/√(ℒ), a=2, c=1/ℒ, satisfying the criteria. Then we obtain λ_min(𝒦)=2/√(ℒ) and

d/dt ℰ(μ_t) ≤ -λ_min(𝒦)Y_t^𝖳Y_t ≤ -2𝒞_𝖫𝖲𝖨/√(ℒ) (ℱ(μ_t)-ℱ(μ_*)) - 1/(λ_max(S)√(ℒ)) 𝖥𝖨_S(μ_t‖μ̂_t) ≤ -𝒞_𝖫𝖲𝖨/(3√(ℒ)) (ℰ(μ_t)-ℰ(μ_*)).

Applying Grönwall's inequality leads to the improved decaying rate O(𝒞_𝖫𝖲𝖨/√(ℒ)) of <ref> (γ=√(ℒ)) in our Theorem <ref>. We defer the whole proof to Appendix <ref>.

For the discretization errors, we outline the proof of Theorem <ref> (and analogously Theorem <ref>) in this section. Let (μ_t)_t≥0 and (μ̅_t/h)_t≥0 represent the laws of <ref> and <ref> initialized at μ_0. Let Q_kh and P_kh denote the probability measures of <ref> and <ref> on the space of paths C([0,kh],ℝ^2d). Invoking Girsanov's theorem <cit.> and Assumption <ref>, we can upper bound the pathwise divergence between <ref> and <ref> in KL divergence for stepsize h and k=1,...,K under Assumptions <ref> and <ref>:

𝖪𝖫(Q_Kh‖P_Kh) ≲ ℒ^4h^5/γ ∑_k=0^K-1 𝔼_Q_Kh[‖x_kh‖^2] + ℒ^2h^3/γ ∑_k=0^K-1 𝔼_Q_Kh[‖v_kh‖^2] + ℒ^4h^5K/γ + ℒ^2h^4Kd.

The derivation of (<ref>) is similar to that of <cit.>; they establish the discretization error of <ref> in q-th order Rényi divergence (q∈[1,2)), which has the KL divergence as a special case (q=1). Their smoothness assumption on the potential function V is (ℒ,s)-weak smoothness, which recovers ℒ-smoothness when s=1. We use many techniques for bounding the discretization error similar to those of <cit.>. Their Lemma 26 can be generalized to our Lemma <ref> in the mean-field setting, which describes an intermediate step in deriving (<ref>). Applying the data processing inequality, we can upper bound the KL divergence between the time marginal laws of the iterates by the KL divergence between the path measures:

𝖪𝖫(μ_T‖μ̅_K) ≤ 𝖪𝖫(Q_Kh‖P_Kh), where T=Kh.

To uniformly upper bound the right-hand side of (<ref>), we need to uniformly upper bound 𝔼_Q_Kh[‖x_kh‖^2] and 𝔼_Q_Kh[‖v_kh‖^2]. The techniques for uniformly upper bounding the iterates of <ref> in <cit.> rely on a χ^2-convergence guarantee of the underlying dynamics. However, χ^2-convergence has not been established for <ref> by previous works. We thus utilize different techniques to uniformly upper bound the iterates of <ref> and <ref>.
More specifically, we have

𝔼_Q_T[‖(x_t,v_t)‖^2] = W_2^2(μ_t,δ_0) ≲ W_2^2(μ_t,μ_*) + W_2^2(μ_*,δ_0), t∈[0,T],

where δ_0 is the Dirac measure at 0∈ℝ^2d, and the second term is the second moment of μ_*, denoted by m_2^2. Now we need to upper bound the first term. Under Assumption <ref>, μ_* satisfies LSI, implying Talagrand's inequality: W_2^2(μ_t,μ_*) ≲ 𝖪𝖫(μ_t‖μ_*)/𝒞_𝖫𝖲𝖨. Under Assumptions <ref> and <ref>, Lemma 4.2 in <cit.> establishes the following relation between the KL divergence and the energy gap:

𝖪𝖫(μ_t‖μ_*) ≤ ℱ(μ_t)-ℱ(μ_*).

Moreover, <cit.> demonstrate that ℱ(μ_t) is decreasing along <ref>. According to the two conclusions above, the first term can be bounded as

W_2^2(μ_t,μ_*) ≲ 𝖪𝖫(μ_t‖μ_*)/𝒞_𝖫𝖲𝖨 ≤ (ℱ(μ_t)-ℱ(μ_*))/𝒞_𝖫𝖲𝖨 ≤ (ℱ(μ_0)-ℱ(μ_*))/𝒞_𝖫𝖲𝖨 ≤ ℱ(μ_0)/𝒞_𝖫𝖲𝖨,

where the last inequality follows from the assumption that ℱ(μ_*)≥0. Therefore, under Assumption <ref> on m_2^2 and Assumption <ref> on F(μ_0), our Lemma <ref> establishes an upper bound on 𝔼_Q_T[‖(x_t,v_t)‖^2] in terms of ℒ, 𝒞_𝖫𝖲𝖨 and d, which implies a uniform upper bound on 𝖪𝖫(μ_T‖μ̅_K). Applying Pinsker's inequality, ‖μ̅_K-μ_T‖_𝖳𝖵 ≲ √(𝖪𝖫(μ_T‖μ̅_K)), we can convert the discretization error bound in KL divergence to one in TV distance. Combining Pinsker's inequality and relation (<ref>), we derive the continuous-time convergence of <ref> in Theorem <ref> in TV distance:

‖μ_T-μ_*‖_𝖳𝖵 ≲ √(𝖪𝖫(μ_T‖μ_*)) ≤ √(ℱ(μ_T)-ℱ(μ_*)).

Applying the triangle inequality to ‖μ̅_K-μ_*‖_𝖳𝖵, the TV distance between the law of <ref> at time Kh and the limiting distribution of <ref>, we obtain the global convergence of MULA:

‖μ̅_K-μ_*‖_𝖳𝖵 ≤ ‖μ̅_K-μ_T‖_𝖳𝖵 + ‖μ_T-μ_*‖_𝖳𝖵 ≜ ℬ + 𝒱,

where ℬ ≜ ‖μ̅_K-μ_T‖_𝖳𝖵 is a bias vanishing as h→0 and 𝒱 ≜ ‖μ_T-μ_*‖_𝖳𝖵 vanishes exponentially fast as T→∞. To ensure 𝒱+ℬ≤ϵ, it suffices to choose T=Θ(√(ℒ)/𝒞_𝖫𝖲𝖨) and specify h, K as in Theorem <ref>. The whole proof is deferred to Appendix <ref>.

§.§ Discussion of mixing time results

We summarize the convergence results of MULA, NULA and several existing methods, including the EM discretization of the mean-field Langevin dynamics <cit.> and its finite-particle system <cit.>, in Table <ref>. For the mixing time to generate an ϵ-approximate solution in TV distance, our proposed MULA and NULA achieve better dependence on ℒ, d and ϵ than the corresponding discretizations of the mean-field Langevin dynamics, and keep the same dependence on 𝒞_𝖫𝖲𝖨, which justifies that our methods are fast. For the number of particles, we improve the dependence on ℒ for <ref> (γ=√(ℒ)) when compared with (γ=1) in <cit.>, and for NULA when compared with its overdamped counterpart. In particular, our dependence on the smoothness constant in the number-of-particles guarantee of NULA is Θ(ℒ^2), whereas the counterpart is Θ(ℒ^4). However, our dependence on the LSI constant in the number-of-particles guarantee of NULA is Θ(𝒞^-2_𝖫𝖲𝖨), whereas the counterpart is Θ(𝒞^-1_𝖫𝖲𝖨). Note that <cit.> consider their algorithm in the neural network setting, where they specifically choose F to be the objective (<ref>) and propose assumptions on l, h and r. <cit.> consider <ref> in a setting where they specify that F(μ)=U(μ)+𝔼_μ[r(x)] and propose assumptions on U and r. Consequently, they use different notations for the smoothness constant and establish convergence rates in the energy gap ℱ(μ_K)-ℱ(μ_*) instead of the TV distance. To make a fair comparison, we equivalently translate those smoothness constants into ℒ and convert their convergence rates to rates in TV distance via relation (<ref>) and Pinsker's inequality (see Appendix <ref>).

§ APPLICATIONS OF ALGORITHM <REF>

In Section <ref>, we briefly mentioned three example applications of problem (<ref>). In this section, we will show how Algorithm <ref> can be applied to training mean-field two-layer neural networks and to density estimation via maximum mean discrepancy (MMD) minimization.
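Before specializing to these applications, we give a minimal sketch of one NULA iteration (Algorithm <ref>). The coefficients φ_0, φ_1, φ_2 below are the standard exact-integration choices for the linear SDE with the drift frozen at the left endpoint, and the Gaussian terms (B_k^i)^x, (B_k^i)^v are replaced by independent scaled noise as in our experiments of Section <ref>; both simplifications are assumptions of this sketch rather than a restatement of the algorithm.

\begin{verbatim}
import numpy as np

def nula_step(x, v, grad_F, gamma, h, eta, rng):
    # One iteration over N particles; x, v: (N, d) arrays.
    phi0 = np.exp(-gamma * h)
    phi1 = (1.0 - phi0) / gamma
    phi2 = (h - phi1) / gamma
    drift = np.stack([grad_F(x, i) for i in range(x.shape[0])])
    bx = eta * rng.standard_normal(x.shape)  # simplified noise terms
    bv = eta * rng.standard_normal(v.shape)
    x_new = x + phi1 * v - phi2 * drift + bx
    v_new = phi0 * v - phi1 * drift + bv
    return x_new, v_new
\end{verbatim}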
§.§ Training mean-field neural networksConsider a two-layer neural network in the mean-field regime (with infinite neurons), which can be parametrized as h(μ;a)𝔼_μ[h(x;a)], where h(x;a) represents a single neuron with trainable parameter x and input a (e.g. h(x;a)=σ(x^𝖳a) for activation function σ); μ is the probability distribution of the parameter x. Given dataset (a_i,b_i)_i=1^n, loss function l and regularization function r, we can choose F(μ) in (<ref>) to beF(μ)=𝔼_(a,b)∼𝒟[l(h(μ;a),b)]+λ'𝔼_x∼μ[r(x)],where λ'>0; 𝒟 is the data distribution; l(,):ℝ×ℝ→ℝ is a convex loss function in the first variable (e.g. squared loss and logistic loss) and r(x) is a regularization function (e.g. r(x)=x^2/2). For empirical risk minimization given dataset (a_i,b_i)_i=1^n, we can chooseF(μ)=1/n∑_i=1^nl(h(μ;a_i),b_i)+λ'𝔼_x∼μ[r(x)].Since l is convex and h(μ;a) is a linear functional of μ, which is also convex, their composition remains convex if h satisfies quadratic growth (Proposition 3.2 in <cit.>). The learning of two-layer neural networks is thus transformed into a convex optimization problem in the space of measures, for which the convexity of loss function can be utilized to show the global convergence of the gradient-based optimization methods <cit.>. The entropy regularization term in (<ref>) comes from the noisy gradient descent of F(μ). While the minimizers of F(μ)+λEnt(μ) and F(μ) are different, the global convergence analysis of F(μ)+λEnt(μ) requires less stringent assumptions <cit.> and we can control the value of λ to make the minimizers of F(μ)+λEnt(μ) close to the minimizers of F(μ). The objectives (<ref>) satisfy Assumptions <ref>-<ref> for specific common choices l, h and r described in several works  <cit.>. In order to make (<ref>) also satisfy Assumptions <ref>-<ref>, assumptions on l, h and r proposed in <cit.> suffice. We only need to additionally specify the initialization without incurring stronger assumptions on l, h and r.For instance, if we choose r(x)=x^2/2; there exists ℒ>0 such that the activation function satisfies |h(x;a)|≤√(ℒ) (also proposed in <cit.>); and the convex loss function l is quadratic or satisfies |∂_1l|≤√(ℒ) (also proposed in <cit.>), F will meet Assumption <ref> with λ'≲ (2π)^d/d-2exp(-8ℒ/d-2). Finally, if in addition we assume l is √(ℒ)-Lipschitz and choose μ_0=𝒩(0,I_2d) and μ_0^N=𝒩(0,I_2Nd), Assumptions <ref> and <ref> will be satisfied. We defer other examples and the related verification to Appendix <ref>.§.§ Density estimationInheriting the definition in <cit.>, the maximum mean discrepancy between two probability measures μ and π is defined as ℳ(μπ)=∬ k(x,x)-2k(x,y)+k(y,y)dμ(x)dν(y), given a positive definite kernel k. Similar to Example 2 in <cit.>, we consider the non-parametric density estimation using the Gaussian mixture model, which can be parametrized as p(μ;z)𝔼_x∼μ[p(x;z)], where p(x;z) is the Gaussian density function with mean x and a user-specified variance σ^2. Given a set of samples {z_i}_i=1^n from the target distribution p^*, our goal is to fit p^* by minimizing the empirical version of ℳ(p(μ;z)p^*), defined asℳ̂(μ)=∭ p(x;z)p(x';z')k(z,z')dzdz'd(μ×μ)(x,x')-2∫(1/n∑_i=1^n∫ p(x;z)k(z,z_i)dz)dμ(x).We can choose F(μ) in (<ref>) to beF(μ)=ℳ̂(μ)+λ'𝔼_x∼μ[r(x)],where λ'>0 and r is the regularization function. <cit.> show that objective (<ref>) satisfies Assumptions <ref>-<ref> by choosing smooth and light-tailed kernel k, such as Gaussian radial basis function (RBF) kernel defined as k(z,z')exp(-z-z'^2/2σ'^2) for σ'>0. 
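To make the empirical MMD objective concrete, the following sketch evaluates an N-particle approximation of ℳ̂(μ) plus the quadratic regularizer, using the closed-form Gaussian convolution identities verified in Appendix <ref> for the case σ'=σ. It is illustrative only: the variable names are ours rather than those of the released code, and the constants (1/3)^{d/2} and (1/2)^{d/2} underflow for large d, so this is meant for small toy dimensions.

import numpy as np

def mmd_objective(X, Z, sigma, lam):
    """X: (N, d) particles (Gaussian mixture means); Z: (n, d) target samples."""
    N, d = X.shape
    # First term: (1/3)^(d/2) * mean over particle pairs of exp(-|x - x'|^2 / (6 sigma^2)),
    # the closed form of the double Gaussian convolution with the RBF kernel.
    sq_xx = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    term1 = (3.0 ** (-d / 2)) * np.exp(-sq_xx / (6 * sigma**2)).mean()
    # Second term: -2 * (1/2)^(d/2) * mean over (particle, sample) pairs of
    # exp(-|x - z_i|^2 / (4 sigma^2)).
    sq_xz = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    term2 = -2 * (2.0 ** (-d / 2)) * np.exp(-sq_xz / (4 * sigma**2)).mean()
    # Regularizer lambda' * E_mu[ |x|^2 / 2 ] with r(x) = |x|^2 / 2.
    reg = lam * 0.5 * (X**2).sum(-1).mean()
    return term1 + term2 + reg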
With Gaussian RBF kernel k (σ'=σ), we demonstrate that objective (<ref>) also satisfies our Assumptions <ref>-<ref> if we choose λ'≲ 2π, r(x)=‖x‖^2/2 and μ_0=𝒩(0,I_2d). We defer the verification of our assumptions in the setting above to Appendix <ref>.

§ NUMERICAL EXPERIMENTS

In this section, we provide empirical support for our theoretical findings. Our experiment[Code for our experiments can be found at <https://github.com/QiangFu09/NULA>.] approximates a Gaussian function f(z)=exp(-‖z-m‖^2/2d) for z∈ℝ^d and unknown m∈ℝ^d by a mean-field two-layer neural network with activation. Consider the empirical risk minimization problem (<ref>) with quadratic loss function l, r(x)=‖x‖^2/2, d=10^3, λ'=10^-4 and n randomly generated data samples from f(z) (n=100), described by

F(μ)=1/2n∑_i=1^n(h(μ;a_i)-f(a_i))^2+λ'/2 𝔼_x∼μ[‖x‖^2].

F satisfies Assumptions <ref>-<ref> with this choice of l, h and r, and thus we apply Algorithm <ref> to minimize the objective above. Note that the number of neurons in the first hidden layer is equivalent to the number of particles in our particle system, and we choose N∈{256, 512, 1024, 2048}. The intrinsic derivative of F(μ) for the j-th particle in our method is given by

D_μF(μ_x,x^j)=1/n∑_i=1^n(1/N∑_s=1^N h(x^s;a_i)-f(a_i))∇ h(x^j;a_i)+λ' x^j.

Note that 1/N∑_s=1^N h(x^s;a) is in fact a two-layer neural network with N neurons. Instead of fine-tuning γ and the stepsize h in NULA, we directly fine-tune the values of φ_0, φ_1 and φ_2 in Algorithm <ref> by grid search. To simplify the computation, we approximate (B_k^i)^x and (B_k^i)^v by ηξ^x_k and ηξ^v_k, where ξ^x_k and ξ^v_k are independent standard Gaussians, and we then fine-tune the scaling scalar η. We compare our method (NULA) to the first-order scheme <ref> with stepsize h_1 and scaling scalar λ_1, whose update is given by

x^j_k+1=x^j_k - h_1 D_μF(μ_x_k,x^j_k) + √(2λ_1h_1) ξ^i_k

for i=1,...,N, k=1,...,K and ξ^i_k∼𝒩(0,I_d), and to EM-UNLA (the EM discretization of the <ref> with stepsize h_2 and scaling scalar λ_2), whose update is given by

x_k+1^j = x_k^j + h_2 v_k^j,
v_k+1^j = (1-γ h_2)v_k^j - h_2 D_μF(μ_x_k,x_k^j) + √(2λ_2h_2) ξ^i_k

for i=1,...,N, k=1,...,K and ξ^i_k∼𝒩(0,I_d) in the same task. We choose K=10^4 and also fine-tune h_1, λ_1 and h_2, λ_2 to make a fair comparison. We postpone our choice of hyperparameters to Appendix <ref>. For each algorithm in our experiment, we initialize x_0^j∼𝒩(0,10^-2I_d) and v_0^j∼𝒩(0,10^-2I_d) for j=1,...,N, average 5 runs over random seeds in {0,1,2,3,4}, and generate the error bars by filling between the largest and the smallest value per iteration.

<ref> illustrates the effectiveness of our method. For each N, it enjoys faster convergence than both baselines. Notably, there is an interesting phenomenon in our experiments. For N=256, convergence instability appears: the loss escapes the stable convergence regime and goes up slightly after many training epochs. However, our method outperforms both baselines without convergence instability for N=512, 1024, 2048, and its loss even continues to decrease while the losses of the baselines plateau for N=1024, 2048. This phenomenon matches our theory, since we do not reduce the number-of-particles requirement of our method relative to the first-order baseline (see Table <ref>). These observations suggest that our method performs better in the high particle-approximation regime. <ref> demonstrates this finding more transparently.
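For reference, the following is a minimal toy reimplementation of the three particle updates compared above. It is a sketch under our own naming, not the released code: the hyperparameters φ_0–φ_3, η, h_1, λ_1, h_2, λ_2 are placeholders rather than the tuned values from the appendix, the dimension is shrunk from d=10^3 to keep the demo cheap, and tanh is one bounded smooth activation consistent with the assumptions (the concrete activation is left unspecified here).

import numpy as np

rng = np.random.default_rng(0)
d, N, n, K, lam_prime = 32, 256, 100, 1000, 1e-4
m = rng.standard_normal(d)
A = rng.standard_normal((n, d))                   # inputs a_i
y = np.exp(-((A - m) ** 2).sum(-1) / (2 * d))     # labels f(a_i)

def D_mu_F(X):
    # Intrinsic derivative for quadratic loss, with h(x; a) = tanh(x^T a)
    # as an assumed bounded activation.
    H = np.tanh(X @ A.T)                          # (N, n): h(x^s; a_i)
    resid = H.mean(axis=0) - y                    # network output minus label
    return ((1 - H**2) * resid) @ A / n + lam_prime * X

def step_ours(X, V, phi=(0.1, 5e-3, 0.9, 5e-3), eta=0.05):
    # Algorithm <ref>, with (B_k^i)^x and (B_k^i)^v approximated by eta * xi.
    g = D_mu_F(X)
    return (X + phi[0] * V - phi[1] * g + eta * rng.standard_normal((N, d)),
            phi[2] * V - phi[3] * g + eta * rng.standard_normal((N, d)))

def step_first_order(X, h1=0.05, lam1=1e-3):
    # First-order baseline: x <- x - h1 * D_mu_F + sqrt(2 lam1 h1) xi.
    return X - h1 * D_mu_F(X) + np.sqrt(2 * lam1 * h1) * rng.standard_normal((N, d))

def step_em_unla(X, V, h2=0.05, gamma=1.0, lam2=1e-3):
    # Euler-Maruyama discretization of the underdamped dynamics.
    return (X + h2 * V,
            (1 - gamma * h2) * V - h2 * D_mu_F(X)
            + np.sqrt(2 * lam2 * h2) * rng.standard_normal((N, d)))

X = 0.1 * rng.standard_normal((N, d))             # x_0 ~ N(0, 1e-2 I_d)
V = 0.1 * rng.standard_normal((N, d))             # v_0 ~ N(0, 1e-2 I_d)
for _ in range(K):
    X, V = step_ours(X, V)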
The second row of <ref> also suggests that EM discretization incurs a larger bias than LPM.§ DISCUSSIONThis paper (1) improves the convergence guarantees in <cit.> with a refined Lyapunov analysis (Theorems <ref> and <ref>); (2) discretizes the <ref> and <ref> with a scheme which results in smaller bias than the EM scheme; and (3) presents a novel discretization analysis of  and . Our theoretical results suggest the mixing time of MULA and NULA might be superior to that ofand <ref>, which is further validated by our experiments. We now note several directions for future potential developments. First, it is intriguing to see whether we can improve the number of particle guarantees forto avoid the behavior of convergence instability in the low particle approximation regime. Second, we suspect convergence rates for  andcan be improved. Importantly, we obtain convergence rates for theandin TV distance, which do not share the same distance with the convergence rates of , ,and . We hope to establish our results in the energy gap or KL divergence in the future. What's more, our techniques on uniformly bounding the iterates of  and  combined with Assumptions <ref>-<ref> generates an additional 𝒞_ by Talagrand's inequality, which is the reason why we do not improve the dependence of 𝒞_ forand . We hope to explore whether it is possible to weaken those assumptions and refine the analysis of uniformly bounding the iterates to improve the dependence of 𝒞_ in the mixing time and number of particles ofand .§ HELPFUL LEMMAS The solution (x_t,v_t) to the discrete-time process (<ref>) for t∈[kh,(k+1)h] isx_t =x_k+1-e^-γ(t-kh)/γv_k-γ h-(1-e^-γ(t-kh))/γ^2D_μF(μ_k^x,x_k)+B_k^x,v_t =e^-γ(t-kh)v_k-1-e^-γ(t-kh)/γD_μF(μ_k^x,x_k)+B_k^v,where (B_k^x,B_k^v)∈ℝ^2d is independent of k and has the joint distribution[[ B_k^x; B_k^v ]]∼𝒩(0, [[ 2/γ(h-2(1-e^-γ(t-kh))/γ+1-e^-2γ(t-kh)/2γ)1/γ(1-2e^-γ(t-kh)+e^-2γ(t-kh)); 1-e^-2γ(t-kh) ]])The solution (x_t^i,v_t^i) to the discrete-time process (<ref>) for i=1,...,N and t∈[kh,(k+1)h] isx^i_t =x^i_k+1-e^-γ(t-kh)/γv^i_k-γ h-(1-e^-γ(t-kh))/γ^2D_μF(μ_x_k,x^i_k)+(B^i_k)^x,v^i_t =e^-γ(t-kh)v^i_k-1-e^-γ(t-kh)/γD_μF(μ_x_k,x^i_k)+(B^i_k)^v.where ((B^i_k)^x,(B^i_k)^v)∈ℝ^2d is independent of i, k and has the joint distribution[[ (B_k^i)^x; (B_k^i)^v ]]∼𝒩(0, [[ 2/γ(h-2(1-e^-γ(t-kh))/γ+1-e^-2γ(t-kh)/2γ)1/γ(1-2e^-γ(t-kh)+e^-2γ(t-kh)); 1-e^-2γ(t-kh) ]])The proof technique is similar to the proof of Lemmas 10 and 11 proposed in <cit.>. The solution (x_t,v_t) to the discrete-time process (<ref>) for t∈[kh,(k+1)h] isx_t =x_k+∫_kh^tv_sds,v_t =e^-γ (t-kh)v_k-∫_kh^te^-γ(t-s)D_μF(μ^x_k,x_k)ds+√(2γ)∫_kh^te^-γ(t-s)dB_s.It follows from the definition of Brownian motion that the distribution of (x_t,v_t) is a 2d-dimensional Gaussian. We first compute the conditional means of (x_t,v_t) as follows:𝔼[v_t] =e^-γ(t-kh)v_k-1-e^-γ(t-kh)/γD_μF(μ_k^x,x_k), 𝔼[x_t] =x_k+1-e^-γ(t-kh)/γv_k-γ h-(1-e^-γ(t-kh))/γ^2D_μF(μ_k^x,x_k).Note that we can ignore the zero-mean Brownian motion terms above. Then we compute the conditional covariance for v_t:𝔼[(v_t-𝔼[v_t])(v_t-𝔼[v_t])^𝖳] =2γ(∫_kh^te^-γ(t-s)dB_s)(∫_kh^te^-γ(t-s)dB_s)^𝖳=2γ(∫_kh^te^-2γ(t-s)ds)· I_d=1-e^-2γ(t-kh)The solution (x_t^i,v_t^i) to the discrete-time process (<ref>) for i=1,...,N and t∈[kh,(k+1)h] isx^i_t =x^i_k+∫_kh^tv^i_sds,v^i_t =v^i_ke^-γ (t-kh)-∫_kh^te^-γ(t-s)D_μF(μ_x_k,x^i_k)ds+√(2γ)∫_kh^te^-γ(t-s)dB^i_s. Choosing t=(k+1)h for (<ref>) generates the updates of Algorithm <ref>. Assume F satisfies Assumptions <ref>-<ref>. 
Then for every μ∈𝒫_2(ℝ^2d) we have(μμ_*)≤ℱ(μ)-ℱ(μ_*)≤(μμ̂)≤(1+ℒ/𝒞_+ℒ^2/2𝒞_^2)(μμ_*). Assume that F satisfies Assumption <ref> and there exists a measure μ_*∈𝒫(ℝ^2d) that admits the proximal Gibbs distribution μ_*(x,v)∝exp(-δ F/δμ(μ^x_*,x)-1/2v^2). Then for all μ^N∈𝒫(ℝ^2dN), we have(μ^Nμ_*^⊗ N)≤ℱ^N(μ^N)-Nℱ(μ_*). Let X_1,...,X_N be measurable spaces, μ be a probability on the product space X=X_1×...× X_N and ν=ν^1⊗...⊗ν^N be a σ-finite measure. Then∑_i=1^N(μ^iν^i)≤(μν). Let x:ℝ_+→ℝ^d, and c∈ℝ^d, A∈ℝ^d× d, where A has non-negative entries. Suppose that the following inequality is satisfied componentwise:x(t)≤ c+∫_0^t Ax(s)ds,for all t≥ 0.Then the following inequality holds where I_d∈ℝ^d× d is the d-dimensional identity matrix:x(t)≤(AA^†e^At-AA^†+I_d)c. Let (x_t,v_t)_t≥ 0 and (x_t^i,v_t^i)_t≥ 0 respectively denote the iterates of the <ref> and <ref>. Assume that h≲ℒ^-1/2∧γ^-1. Under Assumption <ref> and Assumption <ref>, for t∈[kh,(k+1)h], we havesup_t∈[kh,(k+1)h]x_t-x_kh≤ 2ℒh^2x_kh+4hv_kh+2ℒh^2+2√(2γ)hsup_t∈[kh,(k+1)h]B_t-B_kh sup_t∈[kh,(k+1)h]x^i_t-x^i_kh≤ 2ℒh^2x^i_kh+4hv^i_kh+2ℒh^2+2√(2γ)hsup_t∈[kh,(k+1)h]B^i_t-B^i_khfor i=1,...,N. We only prove the first relation, and the proof of the second relation is similar.x_t-x_kh =∫_kh^t v_τdτ≤ hv_kh+∫_kh^t v_τ-v_khdτ≤ hv_kh+∫_kh^t∫_0^τγ v_τ'dτ'dτ+∫_kh^t∫_kh^τD_μF(μ_τ'^x,x_τ')dτ'dτ+∫_kh^t∫_kh^τ√(2γ)dB_τ'dτ≤ hv_kh+γ h(hv_kh+∫_kh^tv_τ-v_khdτ)+∫_kh^t∫_kh^τD_μF(μ_τ'^x,x_τ') dτ'dτ +∫_kh^t∫_kh^τ√(2γ)dB_τ'dτ≤ hv_kh+γ h(hv_kh+∫_kh^tv_τ-v_khdτ)+ℒh∫_kh^tx_τ-x_khdτ+ℒh^2x_kh+ℒh^2+√(2γ)hsup_t∈[kh,(k+1)h]B_t-B_khwhere the last inequality follows from Assumptions <ref> and <ref>. Likewise for V:v_t-v_kh =∫_kh^tγ v_τdτ+∫_kh^tD_μF(μ_τ^x,x_τ)dτ+∫_kh^t√(2γ)dB_t≤γ(hv_kh+∫_kh^tv_τ-v_khdτ)+∫_kh^tD_μF(μ_τ^x,x_τ)dτ+√(2γ)sup_t∈[kh,(k+1)h]B_t-B_kh≤γ(hv_kh+∫_kh^tv_τ-v_khdτ)+ℒ∫_kh^tx_τ-x_khdτ+ℒh+ℒhx_kh+√(2γ)sup_t∈[kh,(k+1)h]B_t-B_khwhere the last inequality follows from Assumptions <ref> and <ref>. Before applying matrix form of Grönwall's inequality, let c=c_1+c_2 with c_2=[[ hv_kh; 0 ]],A=[[ℒh γ h; ℒ γ ]], c_1=[[ ℒh^2x_kh+γ h^2v_kh+ℒh^2+√(2γ)hsup_t∈[kh,(k+1)h]B_t-B_kh;ℒhx_kh+γ hv_kh+ℒh+√(2γ)sup_t∈[kh,(k+1)h]B_t-B_kh ]].c_1 lies in the image space of A, and exp(A_t)c_1 also lies in the image space of A. For the first component:sup_t∈[kh,(k+1)h]x_t -x_kh≤ hexp((ℒh+γ)h)(ℒhx_kh+γ hv_kh+ℒh+√(2γ)sup_t∈[kh,(k+1)h]B_t-B_kh)+ℒhexp((ℒh+γ)h)+γ/ℒh+γhv_kh≤ 2h(ℒhx_kh+2v_kh+ℒh+√(2γ)sup_t∈[kh,(k+1)h]B_t-B_kh)where the second inequality comes from choosing h≲1/ℒ^1/21/γ.((AA^†(exp(Ah)-I)+I)c_2)_(1)=ℒhexp((ℒh+γ)h)+γ/ℒh+γhv_kh≤ 2hv_khCombining relations above andLemma <ref> completes the proof.Let (x_t,v_t)_t≥ 0 denote the iterates of the <ref> with (x_0,v_0)∼μ_0=𝒩(0,I_2d). Under Assumption <ref> and Assumption <ref>, we have𝔼(x_t,v_t)^2≲ℒd/𝒞_ 𝔼(x_t,v_t)^2=W_2^2(μ_t,δ_0) ≤ 2 W_2^2(μ_t,μ_*)+2W_2^2(μ_*,δ_0)≤2/𝒞_(μ_tμ_*)+2m_2^2≤2/𝒞_(ℱ(μ_t)-ℱ(μ_*))+2m_2^2≤2/𝒞_(ℱ(μ_0)-ℱ(μ_*))+2m_2^2≤2/𝒞_ℱ(μ_0)+2m_2^2The second inequality follows from Talagrand's inequality which can be implied by Assumption <ref>.[Assumption <ref> states that the proximal Gibbs distribution satisfies the LSI. Note that μ_* also has the form of the proximal Gibbs distribution and thus satisfies LSI.] The third inequality follows from Lemma <ref>. The fourth inequality follows that d/dtℱ(μ_t)<0 along the <ref> (Proof of Theorem 2.1 in <cit.>) and the last inequality follows from the assumption that ℱ(μ_*)≥ 0. By the definition of ℱ(μ), we have ℱ(μ_0)=F(μ^x_0)+∫1/2v^2μ_0(dxdv)+Ent(μ_0). 
Since (x_0,v_0)∼𝒩(0,I_2d), we have ∫1/2v^2μ_0(dxdv)≲ d and|Ent(μ_0)|=|∫μ_0logμ_0|=d/2log(2π)+1/2𝔼_μ_0·^2≲ d.By Assumption <ref>, we have F(μ_0^x)≲ℒd. By Assumption <ref>, we have m_2^2≲ d. Thus we have𝔼(x_t,v_t)^2≤2/𝒞_ℱ(μ_0)+2m_2^2≲ℒd/𝒞_+d Let (x^i_t,v^i_t)_i=1^N denote the iterates of the <ref> with (x_0^i,v_0^i)∼μ_0^i=𝒩(0,I_2d) for i=1,...,N and t≥ 0. Under Assumption <ref> and Assumption <ref>, we have1/N∑_i=1^N𝔼(x^i_t,v^i_t)^2≲ℒd/𝒞_ 1/N∑_i=1^N𝔼(x^i_t,v^i_t)^2=1/N∑_i=1^NW_2^2(μ^i_t,δ_0) ≤2/N∑_i=1^NW_2^2(μ^i_t,μ_*)+2W_2^2(μ_*,δ_0)≤2/𝒞_1/N∑_i=1^N(μ^i_tμ_*)+2m_2^2≤2/𝒞_1/N(μ^N_tμ^⊗ N_*)+2m_2^2≤2/𝒞_(1/Nℱ^N(μ^N_t)-ℱ(μ_*))+2m_2^2≤2/N𝒞_ℱ^N(μ^N_0)+2m_2^2The second inequality follows from Talagrand's inequality which can be implied by Assumption <ref>. The third inequality follows from Lemma <ref>. The fourth inequality follows from Lemma <ref> and the last inequality follows that d/dtℱ^N(μ^N_t)<0 along the <ref> (Proof of Theorem 2.2 in <cit.>) and ℱ(μ_*)≥ 0. By the definition of ℱ^N(μ^N), we have ℱ^N(μ_0^N)=∫ (NF(μ_x)+1/2v^2)μ_0^N(dxdv)+Ent(μ_0^N). Similar to the proof of Lemma <ref>, since (x,v)∼𝒩^⊗ N(0,I_2d), we have ∫1/2v^2μ_0^N(dxdv)≲ Nd and |Ent(μ_0^N)|≲ Nd. By Assumption <ref> and Assumption <ref>, we also have ∫ NF(μ_x)μ_0^N(dxdv)≲ Nℒd and m_2^2≲ d. Thus we have1/N∑_i=1^N𝔼(x^i_t,v^i_t)^2 ≤2/N𝒞_ℱ^N(μ^N_0)+2m_2^2=2/N𝒞_(∫ (NF(μ_x)+1/2v^2)μ_0^N(dxdv)+Ent(μ_0^N))+2m_2^2≲1/N𝒞_(Nℒd+Nd)+d≲ℒd/𝒞_+d Consider stochastic processes (x_t)_t≥ 0, (b_t^P)_t≥ 0, (b_t^Q)_t≥ 0 adapted to the same filtration, and σ∈ℝ^d× d any constant matrix (possibly degenerate). Let P_T and Q be probability measures on the path space C([0,T];ℝ^d) such that (x_t)_t≥ 0 followsdx_t =b_t^Pdt+σdB^P_tunder P_T, dx_t =b_t^Qdt+σdB^Q_tunder Q_T,where B^P and B^Q are P_T-Brownian motion and Q_T-Brownian motion. Suppose there exists a process (y_t)_t≥ 0 such thatσ y_t=b_t^P-b_t^Q,and 𝔼_𝐐_Texp(1/2∫_0^Ty_t^2dt)< ∞.If we define σ^† as the Moore-Penrose pseudo-inverse of σ, then we haved𝐏_T/d𝐐_T=exp(∫_0^T⟨σ_t^†(b_t^P_T-b_t^Q_T),dB^Q_T_t⟩-1/2∫_0^Tσ_t^†(b_t^P_T-b_t^Q_T)^2dt)Besides, (B̃_t)_t∈[0,T] defined by dB_tdB_t+σ_t^†(b_t^Y-b_t^X) is a 𝐏_T-Brownian motion. § VERIFICATION OF ASSUMPTIONS§.§ Verification of Assumption <ref> Training mean-field neural networks Denote μ̂(x,v)=μ̂^x(x)⊗𝒩(0,I_d) where μ̂^x(x)∝exp(-δ F/δμ(μ^x,x)). Since the second moment of 𝒩(0,I_d) is O(d), it suffices to ensure 𝔼_x∼μ̂^x(x)x^2=O(d). Consider the empirical risk minimization problem in the mean-field neural networks setting with training data (a_i,b_i)_i=1^n:F(μ)=1/n∑_i=1^nl(h(μ;a_i),b_i)+λ'𝔼_x∼μ[r(x)].* We will prove that Assumption <ref> holds if r(x)=x^2/2, |h(x;a)|≤√(ℒ) (such activation functions includeand ) and |∂_1 l|≤√(ℒ) (such loss functions include logistic loss, Huber loss and log-cosh loss) or l is quadratic. Since r(x)=x^2/2, we obtainδ F/δμ(μ,x) =1/n∑_i=1^n[∂_1 l(h(μ;a_i),b_i)h(x;a_i)]+λ'/2x^2Consider the case where |∂_1 l|≤√(ℒ). Since|h(x;a)|≤√(ℒ), we have |∂_1 l(h(μ;a_i),b_i)h(x;a_i)|≤ℒ. Let μ̂^x(x)=exp(-δ F/δμ(μ,x))/Z where Z=∫exp(-δ F/δμ(μ,x)) dx, and we have𝔼_μ̂·^2 =1/Z∫x^2exp(-1/n∑_i=1^n[∂_1 l(h(μ;a_i),b_i)h(x;a_i)]-λ'/2x^2)dx≜Z'/ZNow we bound Z' and Z respectively.Z' ≤∫x^2exp(ℒ-λ'/2x^2)dx≲exp(ℒ)d/λ',Z ≥∫exp(-ℒ-λ'/2x^2)dx=exp(-ℒ)(2π/λ')^d/2Choose λ' satisfying λ'≲(2π)^d/d-2/exp(4ℒ/d-2), and we have 𝔼_μ̂·^2=Z'/Z≲exp(2ℒ)/λ'(2π/λ')^d/2d≲ d. Consider the case where l is quadratic. |h(μ;a_i)|=|∫ h(x;a_i)μ(dx)|≤∫ |h(x;a_i)|μ(dx)≤√(ℒ), thus we have |∂_1 l(h(μ;a_i),b_i)h(x;a_i)|=|(h(μ;a_i)-b_i)h(x;a_i)|≤ℒ+|b_i|√(ℒ). 
We can scale the training data to ensure max_i=1^n|b_i|≤√(ℒ), and we obtain |∂_1 l(h(μ;a_i),b_i)h(x;a_i)|≤ 2ℒ. The remaining proof keeps the same with λ'≲(2π)^d/d-2/exp(8ℒ/d-2). * We will prove that Assumption <ref> holds if r(x)=x^2/2, |h(x;a)|≤√(ℒ)(1+x) (such activation functions include , , Softplus, ) and |∂_1 l|≤√(ℒ). Under these conditions, we have |∂_1 l(h(μ;a_i),b_i)h(x;a_i)|≤ℒ(1+x). Then, based on (<ref>), we obtainZ'≤∫x^2exp(ℒ(1+x)-λ'/2x^2)dx ≤exp(ℒ)∫x^2exp(3ℒ^2/2λ'-λ'/3x^2)dx≲exp(ℒ+3ℒ^2/2λ')d/λ'.We also haveZ≥∫exp(-ℒ(1+x)-λ'/2x^2)dx ≳exp(ℒ)∫exp(-ℒ^2/λ'-3λ'/4x^2)dx=exp(ℒ-ℒ^2/λ')(4π/3λ')^d/2Combining the upper bound of Z' and the lower bound of Z, if d≳ℒ^2/λ', we obtain𝔼_μ̂·^2=Z'/Z≲exp(5ℒ^2/2λ')d/λ'(3λ'/4π)^d/2≤exp(5ℒ^2/2λ')(3/4π)^d/2d≲ d.Note that d≳ℒ^2/λ' is possible for large-scale problems. Density estimation We now prove that objective (<ref>) satisfies Assumption <ref> with Gaussian RBF kernel and r(x)=x^2/2. We choose σ' in Gaussian RBF kernel k to be σ for brevity. With the specific choice of r, we reformulate (<ref>) asF(μ)=ℳ̂(μ)+λ'/2𝔼_x∼μx^2.According to the definition of ℳ̂(μ) in Section <ref>, the functional derivative of ℳ̂(μ) isδℳ̂(μ)/δμ(x)=2∭ p(x;z)p(x';z')k(z,z')dzdz'dμ(x')_-2/n∑_i=1^n∫ p(x;z)k(z,z_i)dz_Next we bound each part of δℳ̂(μ)/δμ(x). For , we have1/2 =1/(2πσ^2)^d∭exp(-x-z^2/2σ^2-x'-z'^2/2σ^2-z-z'^2/2σ^2)dzdz'dμ(x')=(πσ^2)^d/2/(2πσ^2)^d∬exp(-x-x'^2/6σ^2-3z'-2/3x'-1/3x^2/4σ^2)dz'dμ(x')=(1/√(3))^d∫exp(-x-x'^2/6σ^2)dμ(x')≤(1/√(3))^dwhere the last inequality follows from the relation exp(-x-x'^2/6σ^2)≤ 1. For 𝖰, we have1/2𝖰 =1/(2πσ^2)^d/21/n∑_i=1^n∫exp(-x-z^2/2σ^2-z-z_i^2/2σ^2)dz=1/(2πσ^2)^d/21/n∑_i=1^nexp(-x^2+z_i^2/2σ^2+z_i+x^2/4σ^2)∫exp(-z-1/2z_i-1/2x^2/σ^2)dz=(1/√(2))^d1/n∑_i=1^nexp(-x^2+z_i^2/2σ^2+z_i+x^2/4σ^2)≤(1/√(2))^dwhere the last inequality follows from the relation z_i+x^2≤ 2z_i^2+2x^2. Note that 𝖯≥ 0 and ≥ 0. Combining the bound ofand , we obtain the bound of δℳ̂(μ)/δμ(x) as follows:-√(2)≤-2(1/√(2))^d≤δℳ̂(μ)/δμ(x)=-≤ 2(1/√(3))^d≤√(3)Let μ̂^x(x)=exp(-δ F/δμ(μ,x))/Z where Z=∫exp(-δ F/δμ(μ,x)) dx, and we have𝔼_μ̂·^2 =1/Z∫x^2exp(-δℳ̂(μ)/δμ(x)-λ'/2x^2)dx≜Z'/ZNow we bound Z' and Z respectively.Z' ≤∫x^2exp(√(2)-λ'/2x^2)dx≲exp(√(2))d/λ',Z ≥∫exp(-√(3)-λ'/2x^2)dx=exp(-√(3))(2π/λ')^d/2Thus we have 𝔼_μ̂·^2=Z'/Z≲exp(√(2)+√(3))λ'^d/2-1/(2π)^d/2d≲ d for λ'≲2π.§.§ Verification of Assumption <ref> Training mean-field neural networksConsider the same objective in (<ref>) with r(x)=x^2/2 and μ_0=𝒩(0,I_d):F(μ)=1/n∑_i=1^nl(h(μ;a_i),b_i)+λ'𝔼_x∼μ[r(x)].* If l is √(ℒ)-Lipschitz, we have |l(h(μ;a),b)|≤√(ℒ)|h(μ;a)-b|. If |h(x;a)|≤√(ℒ), we have |h(μ;a)|≤√(ℒ) and thus F(μ_0)≲√(ℒ)(√(ℒ)+max_i=1^n|b_i|)+d. We can normalize the data samples to ensure max_i=1^n|b_i|≲ d√(ℒ). Thus F(μ_0)=O(ℒ+d). * If |h(x;a)|≤√(ℒ)(1+x), we have |h(μ_0;a)|≤√(ℒ)∫(1+x)μ_0(dx)≲√(ℒ)d^1/2. If l is √(ℒ)-Lipschitz, we have |l(h(μ_0;a_i),b_i)|≤√(ℒ)|h(μ_0;a_i)-b_i|≲ℒd^1/2+√(ℒ)max_i=1^n|b_i|. We can normalize the data samples to ensure max_i=1^n|b_i|≲ d√(ℒ). 
Thus we have F(μ_0)=O(ℒd+d).Density estimation Consider the same objective in (<ref>) with Gaussian RBF kernel (σ'=σ), r(x)=x^2/2 and μ_0=𝒩(0,I_d):F(μ)=ℳ̂(μ)+λ'/2𝔼_x∼μx^2,whereℳ̂(μ) =∭ p(x;z)p(x';z')k(z,z')dzdz'd(μ×μ)(x,x')-2∫(1/n∑_i=1^n∫ p(x;z)k(z,z_i)dz)dμ(x)=1/3^d/2∫exp(-x-x'^2/6σ^2)d(μ×μ)(x,x')-2/2^d/21/n∑_i=1^n∫exp(-x-z_i^2/4σ^2)dμ(x)≤1/3^d/2∫exp(-x-x'^2/6σ^2)d(μ×μ)(x,x')≤1/3^d/2≤ℒThus F(μ_0)=ℳ̂(μ_0)+λ'/2𝔼_x∼μ_0x^2≲ℒ+d, which satisfies Assumption <ref>.§.§ Verification of Assumption <ref> Training mean-field neural networksSimilar to examples of training mean-field neural networks above, we initialize μ^N_0=𝒩^⊗ N(0,I_d) and choose r(x)=x^2/2 for the following objective.𝔼_x∼μ^NF(μ_x)𝔼_x∼μ^N1/n∑_i=1^n[l(1/N∑_s=1^Nh(x^s;a_i),b_i)]+λ'/2𝔼_x∼μ^N1/N∑_s=1^N[x^s^2],where x=(x^1,...,x^N), x^i∼μ^i for i=1,...,N and μ^N=⊗_i=1^Nμ^i=Law(x^1,...,x^N).* If |h(x;a)|≤√(ℒ) and l is √(ℒ)-Lipschitz,and 𝔼_x_0∼μ_0^N1/n∑_i=1^n[l(1/N∑_i=1^Nh(x_0^i;a_i),b_i)]≲√(ℒ)(√(ℒ)+max_i=1^n|b_i|) and thus 𝔼_μ_0^NF(μ_x_0)≲ℒ+√(ℒ)max_i=1^n|b_i|+d. We can normalize the data samples to ensure max_i=1^n|b_i|≲ d√(ℒ). Thus we have 𝔼_μ_0^NF(μ_x_0)=O(ℒ+d).* If |h(x;a)|≤√(ℒ)(1+x) and l is √(ℒ)-Lipschitz, 𝔼_x_0∼μ_0^N1/n∑_i=1^n[l(1/N∑_s=1^Nh(x_0^s;a_i),b_i)]≤√(ℒ)(√(ℒ)1/N∑_s=1^N(1+𝔼_x_0∼μ_0^Nx_0^s)+max_i=1^n|b_i|)≲ℒd^1/2+√(ℒ)max_i=1^n|b_i|. We can normalize the data samples to ensure max_i=1^n|b_i|≲ d√(ℒ). Thus we have 𝔼_μ_0^NF(μ_x_0)=O(ℒd+d) Density estimation Now we verify Assumption <ref> for the example of density estimation. We consider the N-particle approximation of the objective (<ref>) with r(x)=x^2/2 and initialization μ^N_0=𝒩^⊗ N(0,I_d).𝔼_μ^N ℳ̂(μ_x,y)𝔼_x,y∼μ^N[1/N^2∑_s=1^N∑_t=1^N∬ p(x^s;z)p(y^t;z')k(z,z')dzdz' -2/nN∑_i=1^n∑_s=1^N∫ p(x^s;z)k(z,z_i)dz]≤𝔼_x,y∼μ^N[1/N^2∑_s=1^N∑_t=1^N∬ p(x^s;z)p(y^t;z')k(z,z')dzdz']=(1/√(3))^d𝔼_x∼μ^N[1/N^2∑_s=1^N∑_t=1^Nexp(-x^s-y^t^2/6σ^2)]≤(1/√(3))^d≤ℒwhere x=(x^1,...,x^N) and y=(y^1,...,y^N). Thus we can upper bound 𝔼_x_0,y_0∼μ^N_0F(μ_x_0,y_0) as follows:𝔼_x_0,y_0∼μ^N_0F(μ_x_0,y_0)=𝔼_x_0,y_0∼μ^N_0ℳ̂(μ_x,y)+λ'/2𝔼_x_0∼μ_0^N1/N∑_s=1^N[x_0^s^2]≲ℒ+dwhich satisfies Assumption <ref>. § CONTINUOUS-TIME RESULTS In this section, we give the explicit rate of Theorem 2.1 and Theorem 2.2 proposed by <cit.> with a specific choice of parameters and then provide the detailed proof of Theorem <ref> and Theorem <ref> by reparameterizing γ.§.§ Proof of Theorem <ref>Our proof is directly adapted from Theorem 2.1 in <cit.> using hypocoercivity in <cit.>. <cit.> prove the Lyapunov functionalℰ(μ_t)=ℱ(μ_t)+𝖥𝖨_S(μ_tμ̂_t)is decaying along the <ref> with S=([ c b; b a ])⊗ I_d and γ=1. Let A_t=∇_v, B_t=v·∇_x-D_μF(μ_t^x,x)·∇_v, C_t=[A_t,B_t]=A_tB_t-B_tA_t=∇_x and Y_t=(A_tu_t,A_t^2u_t, C_tu_t, C_tA_tu_t)^𝖳 where u_t=logμ_t/μ̂_t. More specifically, <cit.> prove thatd/dtℰ(μ_t)≤ -Y_t^𝖳𝒦Y_t,where𝒦=([ 1+2a-4ℒb-2b-2a-2ℒc0;0 2a -2ℒc-4b;00 2b0;000 2c ]).The choice of a, b, c should satisfies ac>b^2 and K≻ 0. If we choose a=c=2ℒ and b=1, the smallest eigenvalue of 𝒦 is λ_(𝒦)=1, and thus we haved/dt(ℰ(μ_t)-ℰ(μ_*)) ≤-(A_tu_t^2+A_t^2u_t^2+C_tu_t^2+C_tA_tu_t^2)≤-(A_tu_t^2+C_tu_t^2)=-1/2𝖥𝖨(μ_tμ̂_t)-1/2𝖥𝖨(μ_tμ̂_t)≤-𝒞_(μ_tμ̂_t)-1/2λ_(S)𝖥𝖨_S(μ_tμ̂_̂t̂)≤-𝒞_(ℱ(μ_t)-ℱ(μ_*))-1/4ℒ+2𝖥𝖨_S(μ_tμ̂_̂t̂)≤-𝒞_/6ℒ(ℰ(μ_t)-ℰ(μ_*)) Applying Grönwall's inequality, we obtainℱ(μ_t)-ℱ(μ_*)≤ℰ(μ_t)-ℰ(μ_*)≤(ℰ(μ_0)-ℰ(μ_*))exp(-𝒞_/6ℒt).Note that the proof in <cit.> also considers the approximation technique to remove some restrictive assumptions they make, which we omit in our proof. Now we consider a more general γ in the proof above. 
Analogous to the proof of Lemma 32 in <cit.>, if we incorporate a general γ, the diagonal elements of upper triangular matrix 𝒦 will become (γ+2γ a-4ℒb, 2γ a, 2b, 2γ c). If we choose γ=√(ℒ), b=1/√(ℒ), a=2 and c=1/ℒ, the smallest eigenvalue of K will become λ_(𝒦)=2/√(ℒ). Similar to the previous proof, we haved/dt(ℰ(μ_t)-ℰ(μ_*)) ≤-2/√(ℒ)(A_tu_t^2+A_t^2u_t^2+C_tu_t^2+C_tA_tu_t^2)≤-2/√(ℒ)(A_tu_t^2+C_tu_t^2)=-1/√(ℒ)𝖥𝖨(μ_tμ̂_t)-1/√(ℒ)𝖥𝖨(μ_tμ̂_t)≤-2𝒞_/√(ℒ)(μ_tμ̂_t)-1/λ_(S)√(ℒ)𝖥𝖨_S(μ_tμ̂_t)≤-2𝒞_/√(ℒ)(ℱ(μ_t)-ℱ(μ_*))-1/3√(ℒ)𝖥𝖨_S(μ_tμ̂_t)≤-𝒞_/3√(ℒ)(ℰ(μ_t)-ℰ(μ_*))where the fourth inequality follows from λ_(S)=1/ℒ+2+√(1/ℒ^2+4)/2≤1/ℒ+2≤ 3. Applying Grönwall's inequality, we obtainℱ(μ_t)-ℱ(μ_*)≤ℰ(μ_t)-ℰ(μ_*)≤(ℰ(μ_0)-ℰ(μ_*))exp(-𝒞_/3√(ℒ)t),which completes the proof of Theorem <ref>. <ref> exhibits a faster rate than the rate of <ref>.§.§ Proof of Theorem <ref>Our proof is directly adapted from Theorem 2.2 in <cit.> using hypocoercivity in <cit.>.<cit.> prove that the Lyapunov functionalℰ^N(μ_t^N)=ℱ^N(μ_t^N)+^N_S(μ_t^Nμ_*^N)is decaying along the <ref> with S=([ c b; b a ])⊗ I_d and γ=1. Let u_t^N=logμ_t^N/μ^N_* andY_t^N=(∇_vu_t^N,∇_v^2u_t^N, ∇_xu_t^N,∇_x∇_vu_t^N)^𝖳.<cit.> prove that d/dtℰ^N(μ_t^N)≤-(Y_t^N)^𝖳𝒦Y_t^Nwhere𝒦=([ 1+2a-4ℒb-2b-2a0;0 2a -4ℒc-4b;00 2b0;000 2c ]).The choice of a, b, c should satisfies ac>b^2 and K≻ 0. If we choose a=c=2ℒ and b=1, the smallest eigenvalue of K is λ_(𝒦)=1, and thus we have d/dtℰ^N(μ_t^N) ≤-(∇_vu_t^N^2+∇_v^2u_t^N^2+∇_xu_t^N^2+∇_x∇_vu_t^N^2)≤-(∇_vu_t^N^2+∇_xu_t^N^2)=-(μ_t^Nμ_*^N)Since μ_*^N does not satisfy the uniform LSI, we can not utilize the same technique to upper bound -(μ_t^Nμ_*^N). <cit.> and <cit.> obtain the lower bound of the relative Fisher information (μ_t^Nμ_*^N) using other technique to circumvent the uniform LSI of μ_*^N. We will directly provide the conclusion instead of providing many details about that technique in this paper, and we refer our readers to <cit.> for the precise proof.<cit.> propose that(μ_t^Nμ_*^N) =1/2(μ_t^Nμ_*^N)+1/2(μ_t^Nμ_*^N)≥1/2[2(1-ε)𝒞_-ℒ/N(16+12(ε^-1-1)ℒ/𝒞_)](ℱ^N(μ_t^N)-Nℱ(μ_*))+1/2(μ_t^Nμ_*^N)-ℒd/𝒞_(5𝒞_+3(ε^-1-1)ℒ)for ε∈(0,1). If we choose ε=1/2 and N≥32ℒ/𝒞_+24ℒ^2/𝒞_^2, we have(μ_t^Nμ_*^N) ≥𝒞_/4(ℱ^N(μ_t^N)-Nℱ(μ_*))+1/2λ_(S)_S(μ_t^Nμ_*^N)-ℒd/𝒞_(5𝒞_+3ℒ)≥𝒞_/4(ℱ^N(μ_t^N)-Nℱ(μ_*))+1/6ℒ_S(μ_t^Nμ_*^N)-ℒd/𝒞_(5𝒞_+3ℒ)≥𝒞_/24ℒ(ℰ^N(μ^N_t)-Nℰ(μ_*))-ℒd/𝒞_(5𝒞_+3ℒ)Combining (<ref>) with the lower bound of Fisher information above, we obtaind/dt(ℰ^N(μ_t^N)-Nℰ(μ_*))≤-𝒞_/24ℒ(ℰ^N(μ^N_t)-Nℰ(μ_*))+ℒd/𝒞_(5𝒞_+3ℒ)Applying Grönwall's inequality, we obtainℱ^N(μ_t^N)-Nℱ(μ_*) ≤ℰ^N(μ_t^N)-Nℰ(μ_*)≤(ℰ^N(μ_0^N)-Nℰ(μ_*))exp(-𝒞_/24ℒt)+ℒdt/𝒞_(5𝒞_+3ℒ)exp(-𝒞_/24ℒt)≤ (ℰ^N(μ_0^N)-Nℰ(μ_*))exp(-𝒞_/24ℒt)+120ℒ^2d/𝒞_+72ℒ^3d/𝒞_^2where the last inequality follows from exp(-x)≤ (1+x)^-1 for x>-1. Now we consider a more general γ in the proof above. Analogous to the proof of Lemma 32 in <cit.>, if we incorporate γ, the diagonal elements of upper triangular matrix 𝒦 will become (γ+2γ a-4ℒb, 2γ a, 2b, 2γ c). If we choose γ=√(ℒ), b=1/√(ℒ), a=2 and c=1/ℒ, the smallest eigenvalue of K will become λ_(𝒦)=2/√(ℒ). 
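As a quick numerical sanity check of these eigenvalue claims (purely illustrative, with ℒ fixed to an arbitrary value ≥ 1), recall that the eigenvalues of the upper-triangular 𝒦 are its diagonal entries (γ+2γa-4ℒb, 2γa, 2b, 2γc), and that S≻0 reduces to ac>b^2:

import math

def lambda_min_K(L, gamma, a, b, c):
    # Diagonal of the upper-triangular K; its eigenvalues are these entries.
    diag = (gamma + 2 * gamma * a - 4 * L * b, 2 * gamma * a, 2 * b, 2 * gamma * c)
    assert a * c > b**2        # S = [[c, b], [b, a]] (x) I_d is positive definite
    assert min(diag) > 0       # K is positive definite
    return min(diag)

L = 16.0                       # arbitrary smoothness constant >= 1

# gamma = 1 with a = c = 2L, b = 1 gives lambda_min(K) = 1.
assert lambda_min_K(L, 1.0, 2 * L, 1.0, 2 * L) == 1.0

# gamma = sqrt(L) with a = 2, b = 1/sqrt(L), c = 1/L gives lambda_min(K) = 2/sqrt(L).
assert math.isclose(lambda_min_K(L, math.sqrt(L), 2.0, 1 / math.sqrt(L), 1 / L),
                    2 / math.sqrt(L))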
Similar to the previous proof, we have d/dt(ℰ^N(μ_t^N)-Nℰ(μ_*)) ≤-2/√(ℒ)(∇_vu_t^N^2+∇_v^2u_t^N^2+∇_xu_t^N^2+∇_x∇_vu_t^N^2)≤-2/√(ℒ)(∇_vu_t^N^2+∇_xu_t^N^2)=-2/√(ℒ)(μ_t^Nμ_*^N)≤-𝒞_/2√(ℒ)(ℱ^N(μ_t^N)-Nℱ(μ_*))-1/λ_(S)√(ℒ)_S(μ_t^Nμ_*^N)+2√(ℒ)d/𝒞_(5𝒞_+3ℒ)≤-𝒞_/2√(ℒ)(ℱ^N(μ_t^N)-Nℱ(μ_*))-1/3√(ℒ)_S(μ_t^Nμ_*^N)+2√(ℒ)d/𝒞_(5𝒞_+3ℒ)≤-𝒞_/6√(ℒ)(ℰ^N(μ^N_t)-Nℰ(μ_*))+2√(ℒ)d/𝒞_(5𝒞_+3ℒ)Applying Grönwall's inequality, we obtainℱ^N(μ_t^N)-Nℱ(μ_*)≤ℰ^N(μ_t^N)-Nℰ(μ_*)≤ℰ_0^Nexp(-𝒞_/6√(ℒ)t)+60ℒd/𝒞_+36ℒ^2d/𝒞_^2where ℰ_0^Nℰ^N(μ_0^N)-Nℰ(μ_*). This completes the proof of Theorem <ref>. The convergence rate exhibited in <ref> is faster and incurs a smaller bias than the rate exhibited in <ref>. § DISCRETIZATION ANALYSIS In this section, we provide the proof of Theorem <ref> and Theorem <ref> establishing the global convergence of the discrete-time-space processes. Our discretization analysis is unified for the MULA and NULA.§.§ Proof of Theorem <ref>Suppose Q_Nh is the joint law of the <ref> for t∈[0,Nh] and P_Nh is the joint law of the MULA for t∈[kh,(k+1)h] and k=0,1,...,K-1. Applying Girsanov's theorem (Lemma <ref>), we have(𝐐_Kh𝐏_Kh) =𝔼_𝐐_Khlogd𝐐_Kh/d𝐏_Kh=𝔼_𝐐_Kh∑_k=0^K-1(-1/√(2γ)∫_kh^(k+1)h⟨([ 0; D_μF(μ_t^x,x_t)-D_μF(μ_kh^x,x_kh) ]),dB_t⟩..+1/4γ∫_kh^(k+1)hD_μF(μ_t^x,x_t)-D_μF(μ_kh^x,x_kh)^2dt)=1/4γ∑_k=0^K-1∫_kh^(k+1)h𝔼_𝐐_KhD_μF(μ_t^x,x_t)-D_μF(μ_kh^x,x_kh)^2dtAnd we obtain(𝐐_Kh𝐏_Kh) =1/4γ∑_k=0^K-1∫_kh^(k+1)h𝔼_𝐐_KhD_μF(μ_t^x,x_t)-D_μF(μ_kh^x,x_kh)^2dt≤ℒ^2/2γ∑_k=0^K-1∫_kh^(k+1)h𝔼_𝐐_Khx_t-x_kh^2+W_2^2(μ_t^x,μ_kh^x)dt≤ℒ^2/2γ∑_k=0^K-1∫_kh^(k+1)h𝔼_𝐐_Khx_t-x_kh^2+𝔼_𝐐_Khx_t-x_kh^2dt=ℒ^2/γ∑_k=0^K-1∫_kh^(k+1)h𝔼_𝐐_Khx_t-x_kh^2dt≤16ℒ^4h^5/γ∑_k=0^K-1𝔼_𝐐_Khx_kh^2+64ℒ^2h^3/γ∑_k=0^K-1𝔼_𝐐_Khv_kh^2+16ℒ^4h^5K/γ+32ℒ^2h^4Kdwhere the first inequality follows from Assumption <ref> and the last inequality follows from Lemma <ref> and the inequality (1/n∑_i=1^nx_i)^2≤1/n∑_i=1^nx_i^2:𝔼_Q_Khx_t-x_kh^2≤ 16ℒ^2h^4𝔼_Q_Khx_kh^2+64h^2𝔼_Q_Khv_kh^2+16ℒ^2h^4+32γ h^3dCombined with Lemma <ref> and γ=√(ℒ), the discretization error is upper bounded as follows:(𝐐_Kh𝐩_Kh) ≤16ℒ^4h^5K/γmax_0≤ k≤ K𝔼_𝐐_Khx_kh^2+64ℒ^2h^3K/γmax_0≤ k≤ K𝔼_𝐐_Khv_kh^2+16ℒ^4h^5K/γ+32ℒ^2h^4Kd≲ℒ^9/2h^5Kd/𝒞_+ℒ^5/2h^3Kd/𝒞_+ℒ^7/2h^5K+ℒ^2h^4Kd=ℒ^9/2h^4Td/𝒞_+ℒ^5/2h^2Td/𝒞_+ℒ^7/2h^4T+ℒ^2h^3Tdwhere T=Kh. By Lemma <ref> and Theorem <ref>, we obtain(μ_tμ_*)≤ℱ(μ_t)-ℱ(μ_*)≤(ℰ(μ_0)-ℰ(μ_*))exp(-𝒞_/3√(ℒ)t)Combining with (<ref>), we upper bound the TV distance between μ̅_K, the probability measure ofat Kh and μ_*, the limiting distribution ofas follows:μ̅_K-μ_*_ ≤μ̅_K-μ_Kh_+μ_Kh-μ_*_=μ_Kh-μ̅_K_+μ_Kh-μ_*_≲√((μ_Khμ_K))+√((μ_Khμ_*))≲√((𝐐_Nh𝐩_Nh))+√((μ_Khμ_*))≲ℒ^9/4h^2T^1/2d^1/2/𝒞^1/2_+ℒ^5/4hT^1/2d^1/2/𝒞^1/2_+ℒ^7/4h^2T^1/2+ℒh^3/2T^1/2d^1/2+(ℰ(μ_0)-ℰ(μ_*))^1/2exp(-𝒞_T/6√(ℒ))where the first inequality follows from the triangle inequality of TV distance; the second inequality follows from Pinsker's inequality, and the fourth inequality follows from the data processing inequality. In order to ensure μ_Kh-μ_*_≤1/2ϵ, it suffices to choose T=Kh=Θ(√(ℒ)/𝒞_). In order to ensure μ̅_K-μ_Kh_≤1/2ϵ, it suffices to choose the stepsizeh=Θ(𝒞^1/2_ϵ/ℒ^5/4T^1/2d^1/2)=Θ(𝒞_ϵ/ℒ^3/2d^1/2),and the mixing timeK=T/h=Θ(ℒ^2d^1/2/𝒞_^2ϵ).The choice of T, h, K above ensures μ̅_K-μ_*_≤ϵ.§.§ Proof of Theorem <ref>Suppose Q_Nh^i is the joint law of the <ref> for the i-th particle and t∈[0,Kh]; P_Nh^i is the joint law of the NULA for the i-th particle. 
Applying Girsanov's theorem (Lemma <ref>), we have1/N∑_i=1^N(𝐐^i_Kh𝐏^i_Kh) =1/4γ∑_k=0^K-1∫_kh^(k+1)h1/N∑_i=1^N𝔼_𝐐^i_KhD_μF(μ_x_t,x^i_t)-D_μF(μ_x_kh,x^i_kh)^2dt≤ℒ^2/2γ∑_k=0^K-1∫_kh^(k+1)h1/N∑_i=1^N𝔼_𝐐^i_Khx^i_t-x^i_kh^2+W_2^2(μ_x_t,μ_x_kh)dt≤ℒ^2/γ∑_k=0^K-1∫_kh^(k+1)h1/N∑_i=1^N𝔼_𝐐^i_Khx^i_t-x^i_kh^2dt≤16ℒ^4h^5/γ1/N∑_i=1^N∑_k=1^K𝔼_𝐐^i_Khx^i_kh^2+64ℒ^2h^3/γ1/N∑_i=1^N∑_k=1^K𝔼_𝐐^i_Khv_kh^2+16ℒ^4h^5K/γ+32ℒ^2h^4Kdwhere the first inequality follows from Assumption <ref> and the last inequality follows from Lemma <ref> and the inequality (1/n∑_i=1^nx_i)^2≤1/n∑_i=1^nx_i^2:𝔼_Q^i_Khx^i_t-x^i_kh^2≤ 16ℒ^2h^4𝔼_Q^i_Khx^i_kh^2+64h^2𝔼_Q^i_Khv^i_kh^2+16ℒ^2h^4+32γ h^3dfor t∈[kh,(k+1)h] and k=0,1,...,K-1. Combining Lemma <ref> and γ=√(ℒ), the discretization error is upper bounded as follows:1/N∑_i=1^N(𝐐^i_Kh𝐏^i_Kh)≤16ℒ^4h^5K/γ1/N∑_i=1^Nmax_0≤ k≤ K𝔼_𝐐^i_Khx^i_kh^2+64ℒ^2h^3K/γ1/N∑_i=1^Nmax_0≤ k≤ K𝔼_𝐐^i_Khv_kh^2+16ℒ^4h^5K/γ+32ℒ^2h^4Kd≲ℒ^9/2h^5Kd/𝒞_+ℒ^5/2h^3Kd/𝒞_+ℒ^7/2h^5K+ℒ^2h^4Kd=ℒ^9/2h^4Td/𝒞_+ℒ^5/2h^2Td/𝒞_+ℒ^7/2h^4T+ℒ^2h^3Tdwhere T=Kh. By Lemma <ref> and Theorem <ref>, we obtain1/N(μ_T^Nμ_*^⊗ N)≤1/Nℱ^N(μ_T^N)-ℱ(μ_*)≤ℰ_0^N/Nexp(-𝒞_/6√(ℒ)T)+60ℒd/N𝒞_+36ℒ^2d/N𝒞_^2,where ℰ_0^Nℰ^N(μ_0^N)-Nℰ(μ_*). Combining with (<ref>), we upper bound the averaged TV distance between μ̅_K^i and μ_* over N particles as follows:1/N∑_i=1^Nμ̅_K^i-μ_*_ ≤1/N∑_i=1^Nμ̅_K^i-μ_Kh^i_+1/N∑_i=1^Nμ_Kh^i-μ_*_=1/N∑_i=1^Nμ_Kh^i-μ̅_K^i_+1/N∑_i=1^Nμ_Kh^i-μ_*_≲1/N∑_i=1^N√((μ_Kh^iμ̅_K^i))+1/N∑_i=1^N√((μ_Kh^iμ_*))≲√(1/N∑_i=1^N(μ_Kh^iμ̅_K^i))+√(1/N∑_i=1^N(μ_Kh^iμ_*))≤√(1/N∑_i=1^N(𝐐_Kh^i𝐏_Kh^i))+√(1/N(μ_Kh^Nμ^⊗ N_*))≤√(1/N∑_i=1^N(𝐐_Kh^i𝐏_Kh^i))+√(1/Nℱ^N(μ_Kh^N)-ℱ(μ_*))≲ℒ^9/4h^2T^1/2d^1/2/𝒞^1/2_+ℒ^5/4hT^1/2d^1/2/𝒞^1/2_+ℒ^7/4h^2T^1/2+ℒh^3/2T^1/2d^1/2+(1/Nℰ^N(μ_0^N)-ℰ(μ_*))^1/2exp(-𝒞_T/12√(ℒ))+ℒ^1/2d^1/2/N^1/2𝒞^1/2_+ℒd^1/2/N^1/2𝒞_where the first inequality follows from the triangle inequality of TV distance; the second inequality follows from Pinsker's inequality; the third inequality follows from Jensen's inequality; the fourth inequality follows from data processing inequality and the information inequality (Lemma <ref>) and the fifth inequality follows from Lemma <ref>. In order to ensure 1/N∑_i=1^Nμ^i_Kh-μ_*_≤1/2ϵ, it suffices to choose T=Kh=Θ(√(ℒ)/𝒞_). In order to ensure 1/N∑_i=1^Nμ̅^i_K-μ^i_Kh_≤1/2ϵ, it suffices to choose the stepsizeh=Θ(𝒞^1/2_ϵ/ℒ^5/4T^1/2d^1/2)=Θ(𝒞_ϵ/ℒ^3/2d^1/2),the mixing timeK=T/h=Θ(ℒ^2d^1/2/𝒞_^2ϵ),and the number of particlesN=Θ(ℒ^2d/𝒞_^2ϵ^2).The choice of T, h, K, N above ensures 1/N∑_i=1^Nμ̅^i_K-μ_*_≤ϵ. § EXPERIMENTAL SETTINGS In our experiment, we use a mean-field two-layer neural network to approximate the Gaussian function,f(z)=exp(-z-m^2/2d).We uniformly draw m∼𝒩(0,I_d) and 100 points {z_i}_i=1^100∼𝒩(0,I_d) with d=10^3 and calculate the corresponding labels {f(z_i)}_i=1^100. In this section, we give the actual updates of the methods involved in our experiment and provide the precise value of parameters in Table <ref>. The update of the UNLA is given byx^j_k+1 =x^j_k+φ_0 v^j_k-φ_1 D_μF(μ_x_k,x^j_k)+ηξ^x_k,v^j_k+1 =φ_2 v^j_k-φ_3 D_μF(μ_x_k,x^j_k)+ηξ^v_k.for j=1,...,N. The update of <ref> is given byx_k+1^j =x_k^j + h_2 v_k^j, v_k+1^j =(1-h_3)v_k^j - h_2 D_μF(μ_x_k,x^j_k) + √(2λ_2h_2)ξ_k.for j=1,...,N. The update of the <ref> is given byx^j_k+1=x^j_k - h_1 D_μF(μ_x_k,x^j_k) + √(2λ_1h_1)ξ_k.for j=1,...,N. § METHODS FOR COMPARISONS In this section, we review the convergence result ofin <cit.> and <ref> in <cit.>, which consider problem (1) in more specific settings. 
<cit.> suppose F(μ)=𝔼_(a,b)∼𝒟[l(h(μ;a),b)]+λ'/2𝔼_x∼μx^2 whereas <cit.> suppose F(μ)=U(μ)+𝔼_x∼μ[r(x)]. While our convergence results are established in TV distance, we consider more general settings compared with the previous two. Since the problem setting in <cit.> is only for training neural networks, we perform convergence analysis of thein <cit.>'s setting to make a comparison with our results. Define the free energyℒ(μ)=F(μ)+Ent(μ),where μ∈𝒫_2(ℝ^d). Let μ̅_k denotes the law of k-th iterate of theand μ_* denotes the minimizer of (<ref>), and <cit.> obtain the following results in Theorem 2:ℒ(μ̅_k)-ℒ(μ_*)≤exp(-𝒞_hk)(ℒ(μ̅_0)-ℒ(μ_*))+δ_h/2𝒞_,where δ_hk𝔼D_μF(μ̅_k+1,x_k+1)-D_μF(μ̅_k,x_k)^2 and 𝔼 is taken under the joint law of μ̅_k+1 and μ̅_k. Now we bound δ_hk uniformly in k with a different method from the one in <cit.>. We do not need to specify F to be the objective of training nerual networks. Since F is ℒ-smooth and satisfies Assumption <ref>, we obtain𝔼D_μF(μ̅_k+1,x_k+1)-D_μF(μ̅_k,x_k)^2 ≤2ℒ^2𝔼(x_k+1-x_k^2+W_2^2(μ̅_k+1,μ̅_k))≤ 4ℒ^2𝔼x_k+1-x_k^2=4ℒ^2𝔼-hD_μF(μ̅_k,x_k)+√(2h)ξ^2≤4ℒ^2h^2𝔼D_μF(μ̅_k,x_k)^2+8ℒ^2hd≤8ℒ^4h^2(1+𝔼x_k^2)+8ℒ^2hdWe refer to Lemma 1 in <cit.> to uniformly bound 𝔼x_k^2. Before applying Lemma 1, we translate some constants in <cit.> into our constants systems. <cit.> assumes that D_μU(μ,x)≤ R, λ_1I_d≼∇^2r(x)≼λ_2I_d. We let R=ℒ and λ_2=ℒ (since this specification matches our Assumption <ref>). We prove Lemma 1 proposed by <cit.> in the mean-field setting without particle approximation. But we also assume the decomposition F(μ)=U(μ)+𝔼_x∼μ[r(x)] with D_μU(μ,x)≤ℒ and λ_1I_d≼∇^2r≼ℒI_d. Given the update of the , if h≤λ_1/2ℒ^2, we have𝔼x_k+1^2 =𝔼x_k^2+h^2𝔼D_μF(μ_k,x_k)^2+2hd-2h𝔼⟨ x_k, D_μU(μ_k,x_k)+∇ r(x_k)⟩≤𝔼x_k^2+ℒ^2h^2(1+𝔼x_k^2)+2hd+2hℒ𝔼x_k-2hλ_1𝔼x_k^2≤(1-λ_1h)𝔼x_k^2+ℒ^2h^2+2hd+2ℒ^2h/λ_1Recursively, we obtain𝔼x_k^2≤ (1-λ_1h)^k𝔼x_0^2+ℒ^2h+2d/λ_1+2ℒ^2/λ_1^2≤𝔼x_0^2+ℒ^2h+2d/λ_1+2ℒ^2/λ_1^2.If x_0∼𝒩(0,I_d), 𝔼x_0^2=O(d). Thus (<ref>) implies 𝔼x_k^2=O(ℒ^2d). Plugging into the inequality above, we obtain𝔼D_μF(μ̅_k+1,x_k+1)-D_μF(μ̅_k,x_k)^2≲ℒ^6h^2d+ℒ^2hd.Applying Lemma <ref> and pinsker's inequality, we obtainμ_K-μ_*_≲√((μ_Kμ_*)) ≤√(ℒ(μ̅_K)-ℒ(μ_*))≲exp(-𝒞_hK/2)(ℒ(μ̅_0)-ℒ(μ_*))^1/2+ℒ^3hd^1/2/𝒞_^1/2+ℒh^1/2d^1/2/𝒞_^1/2In order to ensure μ_K-μ_*_≤ϵ, it suffices to chooseh=Θ(𝒞_ϵ^2/ℒ^3d), K=Θ(ℒ^3d/𝒞_^2ϵ^2).Now we translate the convergence results in <cit.>. Define the free energy of the particle system:ℒ^N(μ^N)=N𝔼_x∼μ^NF(μ_x)+Ent(μ^N),where μ_x=1/N∑_i=1^Nδ_x^i. Similar to the analysis above, Theorem 2 in <cit.> implies the TV-convergence of the <ref>, given by 1/N∑_i=1^Nμ̅_K^i-μ_*_≲√(1/N∑_i=1^N(μ̅_K^iμ_*)) ≤√(1/Nℒ^N(μ_K^N)-ℒ(μ_*))≲exp(-𝒞_hK/4)+h^1/2K^1/2(ℒ^3hd^1/2+ℒh^1/2d^1/2)+h^1/2K^1/2ℒ^2d^1/2/N^1/2 In order to ensure μ_K-μ_*_≤ϵ, it suffices to chooseh=Θ(𝒞_ϵ^2/ℒ^3d), K=Θ(ℒ^3d/𝒞_^2ϵ^2), N=Θ(ℒ^4d/𝒞_ϵ^2).
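To summarize the comparison numerically, the sketch below instantiates the Θ(·) prescriptions derived above, with all absolute constants hidden by Θ(·) set to 1; it is purely illustrative and C denotes the LSI constant 𝒞_𝖫𝖲𝖨.

import math

def ours(L, C, d, eps):
    # Underdamped particle scheme analyzed in this paper.
    h = C * eps / (L**1.5 * math.sqrt(d))
    K = L**2 * math.sqrt(d) / (C**2 * eps)
    N = L**2 * d / (C**2 * eps**2)
    return h, K, N

def em_baseline(L, C, d, eps):
    # EM-discretized first-order particle system, as translated above.
    h = C * eps**2 / (L**3 * d)
    K = L**3 * d / (C**2 * eps**2)
    N = L**4 * d / (C * eps**2)
    return h, K, N

L, C, d, eps = 8.0, 1.0, 100, 0.1
print("ours        (h, K, N):", ours(L, C, d, eps))
print("EM baseline (h, K, N):", em_baseline(L, C, d, eps))
# The mixing time improves from O(L^3 d / eps^2) to O(L^2 d^(1/2) / eps), and the
# particle count's L-dependence from O(L^4) to O(L^2), at the price of a C^-2
# (versus C^-1) dependence, as discussed in the mixing-time section.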
Segment Change Model (SCM) for Unsupervised Change Detection in VHR Remote Sensing Images: A Case Study of Buildings
=========================================

The field of Remote Sensing (RS) widely employs Change Detection (CD) on very-high-resolution (VHR) images. A majority of extant deep-learning-based methods hinge on annotated samples to complete the CD process. Recently, the emergence of Vision Foundation Models (VFMs) has enabled zero-shot predictions in particular vision tasks. In this work, we propose an unsupervised CD method named Segment Change Model (SCM), built upon the Segment Anything Model (SAM) and Contrastive Language-Image Pre-training (CLIP). Our method recalibrates features extracted at different scales and integrates them in a top-down manner to enhance discriminative change edges. We further design an innovative Piecewise Semantic Attention (PSA) scheme, which can offer semantic representation without training, thereby minimizing pseudo-change phenomena. Through experiments on two public datasets, the proposed SCM increases the mIoU from 46.09% to 53.67% on the LEVIR-CD dataset, and from 47.56% to 52.14% on the WHU-CD dataset. Our codes are available at: <https://github.com/StephenApX/UCD-SCM>.

Unsupervised Change Detection, Convolutional Neural Network, Remote Sensing, Vision Foundation Model

§ INTRODUCTION

Remote sensing (RS) change detection (CD) aims to analyze change regions from two or more corresponding images taken at different times. As a fundamental task in the RS community, change detection on very-high-resolution (VHR) images propels a variety of applications in disaster evaluation, land-use and land-cover (LULC) investigation, and urban planning <cit.>. Unsupervised change detection (UCD) methods are usually deployed when annotated samples are limited. Existing UCD methods can be divided into traditional approaches and deep-learning (DL) approaches. Traditional UCD methods attempt to acquire a difference map at various levels, such as image-level, feature-level and post-classification-level <cit.>. Some DL-based methods adopt pre-trained convolutional neural networks (CNNs) as feature extractors in order to better capture differences at different feature levels <cit.>. Others extract reliable information from images as supervisory signals and conduct training to distinguish changed from unchanged areas. For instance, Du et al. proposed Deep Slow Feature Analysis (DSFA) to find unchanged pixels as training samples and generate a change map after optimization <cit.>. GMCD relies on a pseudo-label generation mechanism based on metric learning to accomplish model training <cit.>. Despite the efficiency of DL in generating valid change maps, existing methods disregard the characteristics of and inner correlations among different scales. Besides, training-based methods rely on limited information, which may result in scarce semantic representation and pseudo-change phenomena.

Recently, vision foundation models (VFMs) and multimodal large language models (MLLMs) have shown great potential in computer vision and natural language processing. Under zero-shot conditions, VFMs such as the Segment Anything Model (SAM) <cit.> and FastSAM <cit.> are capable of recognizing visual objects and generating fine-grained masks. The Contrastive Language-Image Pre-training (CLIP) model <cit.> can process inputs from different modalities and generate comparable embeddings, establishing connections between text and image.
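To ground this discussion, the sketch below shows one common way to perform such zero-shot classification of an image patch with the public CLIP release; the checkpoint name and the prompt lists are illustrative placeholders, not the exact configuration used in our experiments, and the summed building probability corresponds to the quantity P_bld used by the PSA scheme in Sec.<ref>.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

building_prompts = ["a photo of a building", "a photo of a house roof"]
non_building_prompts = ["a photo of bare land", "a photo of vegetation",
                        "a photo of a road"]
prompts = building_prompts + non_building_prompts

patch = Image.open("patch.png")  # one segmented image patch (placeholder path)
inputs = processor(text=prompts, images=patch, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1).squeeze(0)

# Summed probability over the building-related prompts, i.e., P_bld.
p_bld = probs[: len(building_prompts)].sum().item()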
Combining SAM with CLIP, we can simultaneously obtain spatial understanding for segmentation and semantic understanding for classification. However, gaps still exist between open-vocabulary and task-specific categories, making them ill-suited to the UCD task.

In this work, we propose a Segment Change Model (SCM) for unsupervised change detection to address the aforementioned issues. Multi-scale features from bi-temporal images are first extracted by FastSAM, then fed into a Recalibrated Feature Fusion (RFF) module in order to better reveal both local and global changes and preserve distinct change edges. Cooperating with the SAM and CLIP models, we design a Piecewise Semantic Attention (PSA) scheme, which introduces semantic understanding through simple texts to filter out pseudo changes. Afterwards, the pixel-wise cosine distance is computed between concatenated features and then segmented into a binary change map through a global OTSU thresholding algorithm. In this way, our proposed method enhances the integration of multi-scale features from RS images, reduces pseudo-change occurrences and improves performance over existing methods.

§ METHODOLOGY

§.§ Overview

The overall SCM framework is shown in Fig.<ref>. The flowchart of the proposed SCM framework starts with feeding a pair of bi-temporal RS images 𝐗_1∈ℝ^H × W × C and 𝐗_2∈ℝ^H × W × C into a pre-trained FastSAM. The main architecture of FastSAM consists of a CNN-based encoder, a detect branch and a mask branch. The encoder can be divided into five stages, and we obtain the feature maps with different spatial and channel dimensions from the last three stages, denoted as {𝐅_3, 𝐅_4, 𝐅_5}. We upsample and fuse them into two concatenations {𝐂_1, 𝐂_2} with the same spatial size as the input image through the Recalibrated Feature Fusion (RFF) module presented in Sec.<ref>. A difference map between 𝐂_1 and 𝐂_2 is calculated with cosine similarity along the channel axis, which depicts the pixel-wise similarity between 𝐗_1 and 𝐗_2. This process can be formulated as:

diff_(i,j) = 1 - (C_1 · C_2)/(‖C_1‖_2 ‖C_2‖_2) = 1 - ∑_c=0^k C_1^(i,j,c) C_2^(i,j,c) / (√(∑_c=0^k (C_1^(i,j,c))^2) √(∑_c=0^k (C_2^(i,j,c))^2)), i ∈ [1,H], j ∈ [1,W],

where i and j respectively denote the pixel location along the image's height and width. In Sec.<ref>, we generate a Piecewise Semantic Attention (PSA) map and multiply it with the difference map to filter out pseudo changes. Afterwards, a global OTSU thresholding algorithm is conducted on all non-zero values to search for an optimal threshold discriminating change from non-change areas.

§.§ Recalibrated Feature Fusion Module

Although VFMs excel at extracting discriminating feature representations at various scales, simple concatenation of multi-scale feature maps may neglect their correlations and lead to imbalance when computing their difference map under the UCD circumstance. We construct a Recalibrated Feature Fusion (RFF) module in order to integrate low-level local features with high-level global features and restore the semantic correlations across different scales in a parameter-free manner. We first obtain a set of feature maps {𝐅_3, 𝐅_4, 𝐅_5} from the last three stages of the FastSAM encoder. At each scale, we recalibrate each feature map of size H × W × C by calculating the mean value of each channel and generating a weight sequence of size 1 × 1 × C. By multiplying the feature map with its weight sequence, we model inter-dependencies across different channels. Then, recalibrated features are integrated in a top-down manner.
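Before detailing the top-down integration, the recalibration and difference-map steps above can be summarized in a few lines of PyTorch-style code; this is a minimal sketch with helper names of our own choosing, and the PSA map is assumed to have been computed as in Sec.<ref>.

import torch
import torch.nn.functional as F
from skimage.filters import threshold_otsu

def recalibrate(feat):
    # feat: (C, H, W). The channel weights are the per-channel means,
    # i.e., a 1 x 1 x C weight sequence multiplied back onto the map.
    return feat * feat.mean(dim=(1, 2), keepdim=True)

def binary_change_map(c1, c2, psa):
    # c1, c2: (C, H, W) fused multi-scale features; psa: (H, W) attention map.
    diff = 1 - F.cosine_similarity(c1, c2, dim=0)   # pixel-wise cosine distance
    diff = diff * psa                               # suppress non-building change
    t = threshold_otsu(diff[diff > 0].numpy())      # global OTSU over non-zeros
    return diff > t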
Starting from the last stage, feature maps from the higher level are sampled equidistantly along the channel dimension and interpolated along the spatial dimension to align with feature maps from the lower level. The resampled feature map from the higher level is then merged with the corresponding lower-level map by element-wise addition, aiming to semantically enhance lower-level maps while maintaining their local activations. Afterwards, we spatially upsample the feature maps to the same spatial size as the input image and concatenate them into {𝐂_1, 𝐂_2}.

§.§ Piecewise Semantic Attention

In order to filter out pseudo changes, which are mainly caused by the lack of semantic understanding in the UCD task, we design a scheme based on SAM and CLIP to generate a Piecewise Semantic Attention (PSA) map. FastSAM is first utilized to acquire segmentation masks {𝐄_1,2} of all possible objects from an input image. In our scheme, we regard building change as the principal change target. In order to classify each segmented image patch into building and non-building classes, we construct two groups of texts which separately represent building-related and non-building-related objects. Afterwards, segmented image patches and the predetermined groups of texts are respectively sent into the image encoder and text encoder of CLIP to acquire comparable embeddings. For each image patch, the cosine similarities between the image embedding and the text embeddings are calculated and fed into the softmax function to generate a categorical distribution. We summarize it into a building (bld) class probability and a non-building class probability:

P_bld = ∑_c ∈ {building-related classes} p_c.

After iterating over each image patch and acquiring its building class probability, we initialize a zero score map of the same size as the input image and assign to the pixels of every segmented mask its corresponding probability. Moreover, in order to integrate bi-temporal information and mitigate possible misclassification by the CLIP classifier, we conduct an element-wise addition between the two score maps from {𝐄_1, 𝐄_2}, and generate a semantic attention map through a piecewise remapping function:

PSA = 1, when P_bld ≥ 0.5; P_bld × 2, when 0 < P_bld < 0.5; 0, background,

which is multiplied with the difference map to filter out non-building changes.

§ EXPERIMENTS

Datasets: We conducted external comparisons with other UCD methods and an ablation study on two public binary CD datasets: LEVIR-CD <cit.> and WHU-CD <cit.>. LEVIR-CD is a building CD dataset consisting of 637 samples, each with a resolution of 1024×1024×3. We directly experimented on the test set of LEVIR-CD, which contains 128 image pairs. For WHU-CD, we equidistantly cut the whole image pair, with a resolution of 32507 × 15354, in a sliding-window manner, generating 480 test samples of 1024 × 1024.

Evaluation Metrics: We adopted three metrics to evaluate model performance: the F1 score, mean Intersection over Union (mIoU) score, and overall accuracy (OA).

Compared Methods: We chose five UCD methods for comparison. The traditional UCD approach includes PCA-KM <cit.>, and DL-based approaches include CNN-CD <cit.>, DSFA <cit.>, DCVA <cit.> and GMCD <cit.>. We replicated the aforementioned methods through open-source code and tested them on both datasets.

Experimental Results: Table <ref> presents the performances of different UCD methods; our proposed SCM achieves the highest F1, mIoU and OA metrics on both datasets.
We visualize two sets of predictions from different methods in Fig.<ref>, where SCM achieves performance superior to the other methods. Our method can segment related building changes under both partial-change and thorough-change circumstances, with fewer pseudo changes and missed detections.

Ablation Study: We further performed an ablation study on both datasets to demonstrate the effectiveness of the proposed recalibrated feature fusion (RFF) module and piecewise semantic attention (PSA) scheme. As shown in Table <ref>, the results confirm that the two components of the SCM crucially improve the accuracy of the UCD task.

§ CONCLUSION

In this work, we introduced the Segment Change Model (SCM) for unsupervised change detection in VHR remote sensing images. A recalibrated feature fusion (RFF) module is proposed to integrate features and restore their semantic correlations across different scales. Cooperating with SAM and CLIP, we developed a piecewise semantic attention (PSA) scheme to further reduce pseudo-change phenomena. Experimental results affirm that our SCM outperforms conventional UCD methods. Nonetheless, the effect of varying semantic change targets on UCD performance requires further exploration.

§ ACKNOWLEDGEMENT

This research was funded by the National Natural Science Foundation of China (No.42101346), the China Postdoctoral Science Foundation (No.2020M680109), and the Wuhan East Lake High-tech Development Zone Program of Unveiling and Commanding (No.2023KJB212). The numerical calculations were performed on the supercomputing system in the Supercomputing Center of Wuhan University.
Flock: A Low-Cost Streaming Query Engine on FaaS Platforms
==========================================================

Daniel J. Abadi
January 14, 2024

University of Maryland, College Park, MD ([email protected])
University of Maryland, College Park, MD ([email protected])
University of Maryland, College Park, MD ([email protected])

In this paper, we present Flock, a cloud-native streaming query engine that leverages the on-demand elasticity of Function-as-a-Service (FaaS) platforms to perform real-time data analytics. Traditional server-centric deployments often suffer from resource under- or over-provisioning, leading to resource wastage or performance degradation. Flock addresses these issues by providing more fine-grained elasticity that can dynamically match demand on a per-query basis with continuous scaling, and its billing methods are more fine-grained, with millisecond granularity, making it a low-cost solution for stream processing. Our approach, payload invocation, eliminates the need for external storage services and removes the requirement for a query coordinator from the data architecture. Our evaluation shows that Flock significantly outperforms state-of-the-art systems in terms of cost, especially on ARM processors, making it a promising solution for real-time data analytics on FaaS platforms.

PVLDB Artifact Availability: The source code, data, and/or other artifacts have been made available at <https://github.com/flock-lab/flock>.

§ INTRODUCTION

Many high-volume data sources, such as sensor measurements, machine logs, user interactions on a website or mobile application, and the Internet of Things, operate in real time. Stream processing systems are critical to providing the freshest possible data and driving organizations to make faster and better automated decisions. To provide widespread access to streaming computation, an ideal stream processing system must be performance competitive, scalable, highly available, easy to use and low cost.

Streaming jobs typically comprise multiple stages of execution organized as directed acyclic graphs (DAGs) based on their data dependencies, and each stage comprises several parallel tasks. These jobs show high variability and unpredictability, up to an order of magnitude more than the average load <cit.>. This, along with the broad variety of user SLOs, makes statically configuring and tuning streaming systems extremely difficult.
Furthermore, traditional server-centric deployments use clusters provisioned with a fixed pool of storage and compute resources to execute these jobs; they can frequently suffer from resource under- or over-provisioning, leading to resource wastage or performance degradation, respectively.

The cloud's benefits have driven many recent efforts to port streaming analytics applications to fully managed streaming analytics services, e.g., Google DataFlow <cit.> and AWS Kinesis Data Analytics for Flink <cit.>. These Backend-as-a-Service (BaaS) serverless models are more elastic than on-premises alternatives and avoid their upfront costs. However, while these cloud services provide elastic features that allow compute nodes to be added or removed dynamically, this scaling can take minutes, making it impractical on a per-query basis. In contrast, serverless platforms <cit.> fulfill the promise of transparent resource elasticity in the cloud <cit.>. Under the Function-as-a-Service (FaaS) serverless model, developers and users decompose their applications into short-lived cloud functions. The ease of programming, fast elasticity, and fine-grained pricing of FaaS platforms allow for fine-grained scaling of resources to meet spiky demand, making them an appealing solution for stream processing.

Compared to BaaS, the FaaS model provides more fine-grained elasticity, with sub-second start-up times that can dynamically match demand on a per-query basis with continuous scaling. Further, its billing methods are more fine-grained, with millisecond granularity. For example, the Kinesis service <cit.> charges an hourly rate based on the number of Amazon Kinesis Processing Units (or KPUs) used to run the streaming application. On AWS Lambda <cit.>, however, customers are only charged for the execution time they consume, often at a granularity of 1 ms <cit.>. Therefore, a FaaS-based service is low cost to operate under low demand and can scale automatically to a high load at a proportional cost.

To explore the promise of function services for stream processing, we built Flock, a cloud-native streaming query engine that runs on FaaS platforms. Table <ref> shows the differences between Flock and other state-of-the-art data analytics systems on FaaS. Existing approaches <cit.> take advantage of the on-demand elasticity of cloud object storage services, such as Amazon S3 <cit.>, to shuffle data, which increases the performance cost and compromises the advantages of a serverless system. Instead, Flock passes data through the invocation's payload between cloud functions. This is a general solution that is supported on multi-cloud platforms <cit.>. For example, current AWS Lambda limits are set at 6 MB for synchronous invocations and 256 KB for asynchronous invocations <cit.>. The maximum HTTP request size for the 2nd generation of Google Cloud Functions is 32 MB <cit.>. The HTTP request length of Azure Functions is limited to 100 MB <cit.>. With payload invocations, Flock can store complete objects directly in the query workflow state. This removes the need to read and write data from an external storage service. Under the FaaS billing model, you do not pay for payload size but for each job's duration, which is proportional to the aggregated runtimes across its component tasks, i.e., cloud functions.
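To make payload invocation concrete, the sketch below (in Python with boto3 for brevity; Flock itself is written in Rust and uses the AWS SDK directly) shows how one query stage could forward a record batch to the next stage by embedding it in the invocation payload. The function name is a placeholder, not part of Flock's actual deployment.

import base64
import json
import boto3

lam = boto3.client("lambda")

def forward(batch_bytes: bytes, next_function: str) -> None:
    """Send an encoded record batch to the next query stage.

    Asynchronous invocation ('Event') keeps the caller from blocking, but its
    payload is capped at 256 KB; synchronous invocation allows up to 6 MB.
    """
    payload = {"batch": base64.b64encode(batch_bytes).decode("ascii")}
    lam.invoke(
        FunctionName=next_function,   # e.g., the stage-2 aggregation function
        InvocationType="Event",       # async, fire-and-forget
        Payload=json.dumps(payload).encode(),
    )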
Therefore, this functional programming paradigm has lower latency, via storing data directly in the workflow, and consequently lower execution cost. Payload invocation also eliminates the requirement for a query coordinator in the data architecture: since Flock does not leverage any external storage service as a communication medium between functions, there is no need for a coordinator to monitor query stage completion and initiate new stages once dependencies are met. Flock uses a unique way of passing many payloads/partitions to the same function instance, and shared data structures that ensure exactly-once aggregation of data on function services. When checkpointing is activated, query states are persisted upon checkpoints to guard against data loss and recover consistently.

We have implemented a prototype of Flock. We use this prototype to evaluate Flock by measuring its performance cost on the NEXMark <cit.> and Yahoo Streaming Benchmarks (YSB) <cit.>, which include windowing functions. We find that under realistic deployment scenarios, compared with traditional streaming systems like Flink <cit.> deployed on EC2 instances, Flock is able to reduce costs by more than an order of magnitude, with no observable effects on system throughput and query time. Since the FaaS platforms manage the allocation of compute resources across jobs, the goal of Flock is not to maximize resource utilization and enforce fairness but to reduce execution costs by increasing query performance and shortening function duration. Flock supports vectorized processing on ARM processors, which brings a 20% speedup and reduces costs by more than 30% relative to x86. Flock is thus the first streaming query engine, to the best of our knowledge, to support standardized abstractions, SQL and a DataFrame API, on cloud function services, allowing users and engineers to avoid the time-consuming process of manually translating SQL into cloud workflows on heterogeneous hardware.

§ BACKGROUND

§.§ AWS Lambda

AWS Lambda <cit.> is a compute service that lets users run code without provisioning or managing servers. After users upload application code as a ZIP file or container image, Lambda automatically and precisely allocates compute execution power on a high-availability compute infrastructure and runs the application code based on the incoming request or event, at any scale of traffic. When using Lambda, customers are responsible only for their code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources.

With AWS Lambda, users are charged based on the number of requests for their functions and the duration, i.e., the time it takes for the application code to execute. Lambda counts a request each time it starts executing in response to an event notification or invoke call. Duration is calculated from the time user code begins executing until it returns or otherwise terminates, rounded up to the nearest 1 ms <cit.>.

§.§ Apache Arrow and DataFusion

Apache Arrow <cit.> is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware with SIMD optimizations. Arrow was introduced in 2016 and has since become the standard for columnar in-memory analytics and a high-performance interface between heterogeneous systems. Apache Arrow DataFusion <cit.> is an extensible query execution framework for single machines, written in Rust, that uses Apache Arrow as its in-memory format.
DataFusion supports both a SQL and a DataFrame API for building logical query plans, as well as a query optimizer and an execution engine capable of parallel execution against partitioned data sources. The function executor in Flock is Arrow DataFusion, which has been extended to enable distributed query processing on cloud function services.

§.§ Streaming Query Processing

Almost any kind of data is produced as a stream of events, and it is most valuable at its time of arrival. Continuous queries are evaluated continuously as data streams arrive, always reflecting the stream data seen so far. Since data streams are potentially unbounded in size, evaluating the query over temporal windows of recent data is a common pattern. For example, in a tumbling window, events are grouped into a single window based on time or count; an event belongs to only one window. In a sliding window, events are grouped within a window that slides across the data stream according to a specified interval. Sliding windows can contain overlapping data: an event can belong to more than one sliding window. Let us illustrate the semantics of queries in Flock with the following example. Consider a hypothetical online auction system containing two tables:

CREATE TABLE Auction (id INT, item_name VARCHAR(128), description VARCHAR(255), initial_bid INT, reserve INT, date_time DATE, expires DATE, seller INT, category INT);

CREATE TABLE Bid (auction INT, bidder INT, price INT, date_time DATE);

The Auction table contains all items under auction, and the Bid table contains bids for items under auction. At some point the user executes a continuous query to determine the average winning bid price for all auctions in each category across a series of fixed-size, non-overlapping, 10-second contiguous time periods[We assume here that the auctions are very short-lived (with expiry time less than 10 s) and that each auction starts and ends in a single window.]. In Flock, this query is expressed with the following DML:

-- Flock Context: Window::Tumbling(Schedule::Seconds(10));
SELECT category, Avg(final)
FROM (SELECT Max(price) AS final, category
      FROM auction AS A INNER JOIN bid AS B ON A.id = B.auction
      WHERE B.date_time BETWEEN A.date_time AND A.expires
      GROUP BY A.id, A.category) AS Q
GROUP BY category;

When the user submits this query, it is continuously and transparently executed in a micro-batch mode on the cloud functions.

§ SYSTEM ARCHITECTURE

Flock is a cloud-native SQL query engine for event-driven analytics on cloud function services. Figure <ref> illustrates Flock's high-level architectural design. The cloud service provider packages and compiles the most recent query engine code into a single generic cloud function on a regular iteration cycle, then stores the resulting binary code in cloud object storage. When a user submits a SQL query, it is parsed, optimized, and planned as a series of low-level operators selected by the optimizer to execute the query most efficiently. Flock breaks the execution plan into stages, each consisting of a chain of operators with the same partitioning; each stage is serialized as a string and embedded in the cloud function context.
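A minimal sketch of what such a serialized stage context could contain is shown below; the field and type names are illustrative assumptions, not Flock's actual definitions:

use serde::{Deserialize, Serialize};

/// Illustrative per-stage cloud context. The real Flock context carries
/// a serialized physical sub-plan; we model it as an opaque string.
#[derive(Serialize, Deserialize)]
struct CloudContext {
    /// Encoded physical sub-plan for this query stage.
    plan: String,
    /// Name of the next function (or function group) in the DAG.
    next_function: String,
    /// Identifier of this stage within the query DAG.
    stage_id: u8,
}

impl CloudContext {
    /// Serialize the context to a JSON string suitable for an
    /// environment variable or an SDK parameter.
    fn marshal(&self) -> serde_json::Result<String> {
        serde_json::to_string(self)
    }

    /// Rebuild the context inside the function from the encoded string.
    fn unmarshal(s: &str) -> serde_json::Result<Self> {
        serde_json::from_str(s)
    }
}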
Flock creates cloud functions using the executable binary code from cloud storage and passes the encoded string (the cloud context) as a function argument through the cloud vendor's SDK. The cloud function is thus created almost instantaneously, and the query is processed in real time. Function arguments are deserialized into the cloud context during the initial instantiation of a function instance, so each function is customized with a specific set of parameters: it knows which sub-plan to carry out and where to send the result, allowing data to flow in the cloud without the intervention of a client coordinator.

§.§ SQL Interface

A query engine is a piece of code that can execute queries against data to produce answers to questions. Query engines provide a set of standard operations and transformations that the end user can combine in different ways through a simple query language or application programming interface, and they are tuned for good performance. For example, SQL query engines are included in the most widely used relational databases, such as MySQL, Postgres, Oracle, and SQL Server. In addition, all data warehouses and lakehouses <cit.> come with a distributed SQL execution engine, such as Spark/Photon <cit.>, Flink <cit.>, Presto/Trino <cit.>, F1 <cit.>, Impala <cit.>, and Hive <cit.>, for interactively querying massive datasets. Some exploratory research has been done on data analytics over cloud services <cit.>. However, there is as yet no SQL-on-FaaS engine for data analytics. The end user is compelled to split the physical plan for each query by hand when merging query stages into cloud functions as a dataflow execution paradigm in the cloud. Forcing customers to use vendor lock-in APIs to orchestrate query stages has the same effect as forcing users to create query execution plans directly in database systems: the hand-written plans may be suboptimal and cause significant performance loss, and such customized directed acyclic graphs (DAGs) are error-prone and rarely reused. Furthermore, some cloud customers have raised concerns about vendor lock-in, fearing reduced bargaining power when negotiating prices with cloud providers. The resulting switching costs benefit the largest and most established cloud providers and incentivize them to promote complex proprietary APIs that resist de facto standardization. Standardized and straightforward abstractions, such as the SQL and DataFrame APIs supported by Flock, would remove serverless adoption's most prominent remaining economic hurdle.

§.§ Distributed Planner

Flock uses rule-based optimizations to apply predicate and projection push-down to the logical plan before the physical plan is created (see Figure <ref>). The physical plan is broken into a DAG of query stages on the client side, where each stage consists of a chain of operators with the same partitioning. Each query stage is assigned to a cloud function using the template specialization approach described in the next section. A directed edge from one stage to another represents data flow between cloud functions. Figure <ref> shows the plan partitioning of the query example in Section <ref>. The physical plan of Arrow DataFusion is a nested in-memory layout in which the data sink, not the data source, is the root reference. Flock uses a top-down, breadth-first algorithm to split the physical plan into a DAG of query stages.
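A simplified sketch of this splitting pass over an abstract plan tree follows; the real implementation walks DataFusion's physical plan nodes, and the names here are illustrative. The stage is extended downward until a pipeline-breaking operator is reached, as detailed next:

use std::collections::VecDeque;

/// Illustrative plan node: either a pipeline-breaking operator
/// (aggregate, join, sort) or a streaming operator (filter, project).
struct PlanNode {
    name: String,
    is_pipeline_breaker: bool,
    children: Vec<PlanNode>,
}

/// A query stage: a chain of operators with the same partitioning.
struct Stage {
    operators: Vec<String>,
}

/// Top-down, breadth-first split of a physical plan (rooted at the data
/// sink) into stages, cutting below each pipeline breaker.
fn split_into_stages(root: PlanNode) -> Vec<Stage> {
    let mut stages = Vec::new();
    let mut queue = VecDeque::from([root]);
    while let Some(node) = queue.pop_front() {
        let mut ops = vec![node.name.clone()];
        let mut frontier = node.children;
        // Extend the stage through single, non-breaking children.
        while frontier.len() == 1 && !frontier[0].is_pipeline_breaker {
            let child = frontier.pop().unwrap();
            ops.push(child.name);
            frontier = child.children;
        }
        // Children below a breaker (or multiple join inputs) become
        // the roots of new stages.
        queue.extend(frontier);
        stages.push(Stage { operators: ops });
    }
    stages
}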
Flock separates the plan when it encounters aggregate[A partial aggregate ("mode=partial") is performed within individual cloud functions or workers and requires no plan partition.], join, and sort operations, so that data can be shuffled between query stages. Each stage or subplan deletes the old reference to its child plan and replaces it with an empty MemoryExec that has the same schema as the child plan; MemoryExec represents the execution plan for reading in-memory batches of data. When the current stage has received all the output of the previous stage, it feeds the data to its MemoryExec and completes the query execution. More details about the query subplan in each stage can be found in <cit.>. Furthermore, while splitting the physical plan, Flock creates a corresponding cloud context for each query stage in order to make the dataflow paradigm operate on cloud function services.

§.§ Microbatch Execution Mode

Flock runs in a micro-batch execution mode, similar to Apache Spark's Structured Streaming <cit.>, that processes data streams as a series of micro-batch tasks, achieving exactly-once fault-tolerance guarantees. In this mode, epochs are typically set to a few hundred milliseconds to a few seconds, and each epoch executes as a traditional analytical job composed of a DAG of functions. Compared to the continuous operator model <cit.>, micro-batching is a more natural fit for FaaS, for two main reasons: 1) cloud functions are billed by the number of invocations and their duration, so record-by-record processing is many orders of magnitude more expensive; 2) some cloud providers, e.g., AWS Lambda, only allow a function instance to execute one request at a time, so a huge number of record-by-record requests causes the function's latency to rise dramatically. During query planning, Flock automatically chains together sequences of functions, each of which corresponds to a query stage, and implicitly invokes the first cloud function at recurring times to trigger the execution workflow. Although all created functions have exactly the same binary code, when a function is instantiated in the cloud, its environment variable contains the specific cloud context carried at creation time; therefore, different function instances can be specialized through the context (see Section <ref>). Functions share state by passing arguments/payloads and return values to each other, which does not incur any additional cost. The main challenge is determining how to send the shuffled states to the same function instance without the need for an external communication medium. We accomplish this by setting the stateful function's concurrency to one and allocating global memory that allows the function to reuse a "static context" across multiple invocations of the same instance. More details on how to mitigate hotspots are given in Section <ref>.

§.§ Fault Tolerance

§.§.§ State Management

Flock achieves fault tolerance through a write-ahead log and a state store. Both run over an object storage system such as S3[Since Dec 2020, all S3 GET, PUT, and LIST operations are strongly consistent <cit.>.] to allow parallel access: 1) the log keeps track of which data has been processed from each input source and reliably written to the output sink; 2) the state store holds snapshots of operator states for aggregate functions. Similar to Spark Streaming <cit.>, states are written asynchronously and can lag behind the latest data written to the output sink.
In the event of a failure, the system automatically determines from its log the last state it updated and recomputes state from that point in the data. Input sources like Kafka <cit.> and Kinesis <cit.> are replayable, allowing recent data to be re-read using a stream offset. The function writes the start and end offsets of each epoch durably to the log. The stateful functions/operators regularly and asynchronously write the epoch ID along with their state checkpoint to the state store, using incremental checkpoints where possible. These checkpoints are not required to occur every epoch or to block processing. Upon recovery, the new function instance starts by reading the log to find the last epoch that has not been committed to the sink, including its start and end offsets. It then uses the offsets of earlier epochs to reconstruct in memory the states from the last epoch written to the state store. Finally, the system reruns the last epoch and then executes the micro-batches from the new epoch onward.

§.§.§ Invocation Failure

If a cloud function times out or is terminated, the computed end result is still accurate, with no data loss, because the new function resumes from the most recently stored checkpoint and states in S3. However, unlike on traditional nodes, function invocation errors can occur when the invocation request is rejected due to issues with request parameters or resource limits, or when the function's code or runtime returns an error. If an asynchronous invocation fails, Lambda retries the function; since the payload is part of the invocation, no data is lost. When an event fails all processing attempts or expires without being processed, it is put into a dead-letter queue (DLQ) <cit.>, which is part of a function's version-specific configuration, for further processing. If a synchronous invocation fails, Flock applies a linear backoff algorithm for automatic retries (see Section <ref>). For asynchronous invocation, the function may receive the same request/payload multiple times because Lambda's internal queue is eventually consistent <cit.>. The stateful function maintains a bitmap to avoid double-counting and to ensure that each payload is aggregated and processed exactly once (see Section <ref>).

§ FUNCTION TEMPLATES

§.§ Template Specialization

Valid cloud functions can only be scripts or compiled programs, so many systems <cit.> embed the physical plan into the function code during a code generation phase: they generate code for the individual tasks, compile it, and package it with the necessary dependencies. To execute a job, a scheduler launches tasks as serverless functions and monitors their progress. However, compiling cloud functions and their dependencies at query runtime can cause delays of seconds or even minutes, slowing query response time. For example, Flock is a Rust-based cloud-native query engine; when we build an optimized release version with the SIMD and mimalloc/snmalloc <cit.> features on an AWS EC2 instance (c5a.4xlarge), the build takes roughly 4 minutes even with incremental compilation. Worse, it takes 8m 33s to build from scratch, because Flock requires a lengthy dependency tree to be built <cit.>.
An alternative approach is that of Locus <cit.>, which is built on Pywren <cit.>, a pure Python implementation that omits the code-generation and compilation steps and directly takes task code and an execution plan as input, sacrificing performance and cost (longer charged duration). We propose function template specialization as a way to eliminate the compilation stage from the query execution pipeline entirely. Template specialization in programming languages allows alternative implementations to be selected based on specific properties of the parameterized type being instantiated, enabling certain kinds of optimization and reducing code bloat. Similarly, as shown in Figure <ref>, Flock's service provider creates, builds, and archives a generic cloud function as a ZIP file, which is primarily made up of four components: cloud context initialization, data collection and preparation, query execution, and next-function invocation (explored in more detail in Section <ref>). The service provider then uploads the archive to cloud object storage, such as Amazon S3 <cit.>, and makes a new public release available to users. Flock eliminates the need for the client or a central registry to spend time compiling SQL execution plans and new cloud functions into binary code, resulting in much lower end-to-end latency[For developers who want to write custom stream processing logic, Flock's stateful operators are UDFs with state that still require users to compile function code at query runtime. In this case, JIT code generation for each query over LLVM <cit.> or Cranelift <cit.> is a better solution to reduce branching overhead and the memory footprint.]; it creates functions right away using the S3 object of the generic function that the service provider has published in advance <cit.>. In addition, the cloud context, which includes the execution plan, is serialized and passed as a string through the environment parameter of the Lambda function-creation API in Figure <ref>. The cloud context is compressed with Zstd <cit.> after serialization by default, since Lambda environment variables have a 4 KB service quota that cannot be raised <cit.>[Flock stores the execution plan in S3 and keeps the S3 object key in the cloud context if the execution plan exceeds this limit. For example, when running in a single function, this plan (82 KB) <cit.> must be kept in S3.]. These environment variable settings are accessible from function code during execution in the cloud. This approach reduces the query launch time by a factor of 10,000, since launching a cloud function only requires creating a function, without compilation. Flock then invokes the newly created function by name to execute the query on the cloud function service via the Lambda invoke API <cit.>. Context initialization is performed once per function instance to prepare the cloud environment for invocations: it reads the encoded string from the environment variable and deserializes it into the cloud context, thereby achieving generic function template specialization. Even though all functions have identical code (the generic template), each function, when instantiated in the cloud, can identify via the cloud context the specific execution plan to run and the function to deliver its output to.
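The client-side encoding step implied above might look like the following sketch, which serializes the context, compresses it with Zstd, and spills to S3 when the result would not fit in the 4 KB environment-variable quota. The helper names, crate choices (serde_json, zstd, base64, anyhow), and the S3 stub are assumptions for illustration:

use serde::Serialize;

/// Lambda environment variables are capped at 4 KB in total.
const ENV_VAR_QUOTA: usize = 4 * 1024;

/// Where the encoded context ends up: inline in an environment variable,
/// or spilled to S3 with only the object key kept inline.
enum EncodedContext {
    Inline(String),
    S3Key(String),
}

/// Illustrative sketch: serialize, compress with Zstd, and spill to S3
/// if the encoded result would exceed the environment-variable quota.
fn encode_context<T: Serialize>(ctx: &T) -> anyhow::Result<EncodedContext> {
    let json = serde_json::to_vec(ctx)?;
    let compressed = zstd::encode_all(&json[..], 3)?; // compression level 3
    let encoded = base64::encode(&compressed);
    if encoded.len() <= ENV_VAR_QUOTA {
        Ok(EncodedContext::Inline(encoded))
    } else {
        // Hypothetical helper: upload the blob and return its object key.
        let key = upload_to_s3(&compressed)?;
        Ok(EncodedContext::S3Key(key))
    }
}

fn upload_to_s3(_blob: &[u8]) -> anyhow::Result<String> {
    // Placeholder for an S3 PUT via the cloud vendor's SDK.
    Ok("flock/plans/<query-code>".to_string())
}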
§.§ Generic Function

Flock is a new generation of cloud-native query engine that consists simply of generic functions and a client library. The generic function can work with any type, rather than a specific type only, allowing it to be designed, built, and delivered to the cloud platform ahead of time. To take advantage of the latest query engine capabilities, the cloud service provider only has to offer customers an updated version of the generic function on a regular basis, without disclosing the source code. The client library translates SQL queries into executable cloud functions. A generic function is function code whose behavior depends on the arguments supplied to it via the cloud context (see Figure <ref>). When a function is invoked, it deserializes the cloud context given by the client to discover the appropriate code regions, i.e., those whose specializers are compatible with the actual context. The pseudocode in Listing <ref> shows how the generic cloud function is implemented and operated. The function code can be broken down into four parts:

(1) Cloud Context Initialization. INIT (line 5) is a synchronization primitive for running a one-time global initialization. The given closure (lines 9-14) deserializes environment variables into the cloud context; it runs only the first time INIT.call_once (line 15) is reached, and the routine is not invoked again afterwards. Private data that is used only per invocation should be defined within the handler. Global variables such as CLOUD_CONTEXT retain their values between invocations in the same execution environment. As a result, the cloud context is initialized only once throughout the lifetime of the instance, and subsequent invocations reuse the resolved static context. The arena (lines 11 and 23) is a global resource that is created during initialization and stays in memory between invocations, allowing the handler to collect state across invocations. We explain it in more detail in Section <ref>.

(2) Data Preparation. The function essentially receives the payload in JSON format from the HTTP request's body, computes the result, and either returns it to the client or forwards it to the next functions as HTTP requests. When the runtime receives an event, it passes the event (line 22) to the function handler. Flock leverages Apache Arrow <cit.> to store streaming data (line 24) in an in-memory columnar format to maximize cache locality, pipelining, and SIMD utilization on modern CPUs. For a function associated with an aggregate operation, Flock uses the arena to collect all data partitions before they are handed to the query engine embedded in the current function. More details are given in Section <ref>.

(3) Query Execution. The function embeds Arrow DataFusion <cit.>, an in-memory query engine that provides both a DataFrame and a SQL API for querying CSV, Parquet, and in-memory data. DataFusion leverages the Arrow compute kernels for vectorized query processing. All rows with a particular grouping key reside in the same partition, as is the case with hash repartitioning on the group keys. Data partitions are processed in parallel in the cloud function (line 26).

(4) Next Function Invocations. After the query stage executes in the current function, the output is placed into the next function invocation's payload (see Figure <ref>), and finally a synchronous or asynchronous invocation (line 27) is made to realize the distributed dataflow. The implicit invocation chain is analogous to functional programming. More complex data shuffling is described in detail in Section <ref>.
[ language=Rust, numbers=left, caption=Generic Function Skeleton., captionpos=b, label=code:generic_func ]
use lambda_runtime::{service_fn, LambdaEvent};
use serde_json::Value;

/// Initialize the function instance once and only once.
static INIT: Once = Once::new();
static mut CLOUD_CONTEXT: CloudContext = CloudContext::Uninitialized;

macro_rules! init_cloud_context {
    () => {{
        let ctx_fn = || match std::env::var(&**CONTEXT_NAME) {
            Ok(s) => CLOUD_CONTEXT = CloudContext::Lambda((
                ExecutionContext::unmarshal(&s), Arena::new())),
            ...
        };
        INIT.call_once(ctx_fn);
        match &mut CLOUD_CONTEXT {
            CloudContext::Lambda((ctx, arena)) => (ctx, arena),
            CloudContext::Uninitialized => panic!("uninitialized!"),
        }
    }};
}
async fn handler(event: LambdaEvent<Payload>) -> Result<Value> {
    let (mut ctx, mut arena) = init_cloud_context!();
    let (input, status) = prepare_data(ctx, arena, event)?;
    if status == HashAggregateStatus::Ready {
        let output = collect(ctx, input).await?;
        invoke_next_functions(ctx, output, ...).await
    } else if status == HashAggregateStatus::NotReady {
        Ok(json!({"response": "data is not yet ready"}))
    } else {
        // status == HashAggregateStatus::Processed
        Ok(json!({"response": "data has been processed"}))
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    lambda_runtime::run(service_fn(handler)).await?;
    Ok(())
}

§.§.§ Heterogeneous Hardware

According to the AWS blog <cit.>, AWS Lambda functions running on Graviton2 <cit.>, an Arm-based processor architecture designed by AWS, deliver up to 34% better price performance than functions running on x86 processors for a range of serverless applications, including real-time data analytics. To give users better price performance, Flock provides function binaries for both x86 and Arm architectures; users may select different generic function binaries from the AWS S3 bucket to create Lambda functions that run on x86 and/or Arm processors. Currently, Flock provides four prebuilt binary versions on S3. For Lambda functions using Arm/Graviton2 processors, duration charges are 20% lower than the current pricing for x86. However, the performance difference (19%) between x86 and Arm reported by AWS may not include SIMD optimization; an open question is which architecture performs better on query operations when AVX2 and Arm Neon intrinsics are employed. The Graviton2 processor also supports the Armv8.2 instruction set, whose specification includes the large-system extensions (LSE) introduced in Armv8.1. LSE provides low-cost atomic operations and improves system throughput for CPU-to-CPU communication, locks, and mutexes. To measure the difference between the architectures, we compare their latency and duration cost in Section <ref>.

§ SERVERLESS ACTORS AND COMMUNICATION

The actor model is a highly popular computational pattern that simplifies the job of composing parallel and distributed executions by using a basic unit of computation: the actor. Flock follows this model: it provides isolated, independent units of compute and state with multi-threaded execution on cloud functions, yielding a serverless event-stream processing service with pay-for-use.

§.§ One-way Communication

The most important element of the actor model is that actors communicate via asynchronous messages. Previous work has proposed solutions for data exchange in the serverless context <cit.>. They rely on external storage to exchange large amounts of data, since cloud functions cannot accept incoming connections.
For example, Starling <cit.> uses Amazon S3 to pass intermediate data between function invocations. However, such solutions rely on additional services, which increase latency (I/O) and the billed expenses of function duration and S3 access, and therefore compromise the advantages of a serverless system. Flock differs from earlier systems in that it is built for real-time stream processing over gigabytes of data rather than OLAP workloads. AWS Lambda functions have a 6 MB payload size limit for synchronous invocations and a 256 KB limit for asynchronous invocations <cit.>. A function's concurrency is the number of instances that serve requests at a given time; the default regional concurrency limit starts at 1000 <cit.> and can easily be increased to 5000 by contacting Amazon. By combining these two AWS Lambda quotas with data encoding and compression, Flock can transfer GB-scale intermediate results between functions without using external storage. When a function is invoked (see Figure <ref>), Flock passes data in the payload; the payload is serialized to JSON bytes because AWS Lambda enforces the content type of the HTTP request body to be application/json. Data partitioning guarantees that each partition fits in a function's payload, and the function chain seamlessly passes the data to the next query stage. This removes the need to persist and load data through data stores such as DynamoDB <cit.> and S3 <cit.>. There is no additional cost associated with invoking Lambda functions with a payload, which reduces both the billed cost and the duration. Table <ref> shows the latency difference between AWS S3 and the function payload. By default, objects are compressed with Zstd <cit.>, which provides a 4x compression ratio on NYC Citi Bike trip data <cit.>; therefore, the real single-partition size we tested reached up to 60 MB, which is enough to handle streaming workloads. The latency in the table also includes the overheads of marshalling/unmarshalling and compression/decompression, because the serialization and deserialization phases happen in the Lambda Rust Runtime <cit.>. These parts are not bottlenecks, however, accounting for less than 7% of the total time. When the compressed partition is at most 1.5 MB, payload communication is an order of magnitude faster than S3. Since 15 MB exceeds the maximum size of the function payload, Flock invoked the same function instance three times synchronously or 60 times asynchronously, still yielding a 6x or 2x improvement, respectively. We explain how multiple payloads are routed to the same running instance, the critical part of data shuffling, in Section <ref>. The limitation of this approach is that AWS Lambda does not yet provide per-instance concurrency like GCP Functions <cit.>, which allow up to 1,000 concurrent requests on a single instance of an application, providing a far greater level of efficiency. In extreme cases, shuffling many data partitions for aggregation may perform poorly when constrained to a single-request model.

§.§ Sync and Async

When a function is invoked asynchronously, Lambda puts the event in a Lambda-owned queue and returns right away, rather than exposing Lambda's internal queues directly. A separate process reads events from the queue and executes the function. AWS Lambda is a multitenant system that implements fairness by setting per-customer rate-based limits, with some flexibility for bursting <cit.>.
Therefore, there may be an occasional invocation delay under heavy workloads. Figure <ref> shows the benefit of asynchronous invocation: when the current function invokes the next one, it can return immediately instead of waiting for the succeeding function to complete its execution, which significantly decreases the duration expense. Let $n$ be the total number of query stages and $f_i$ the Lambda function or function group corresponding to the $i$-th query stage. The total billed cost with asynchronous invocation is

$$\sum_{i=0}^{n} \lambda(f_i) + \sum_{i=0}^{n} d(f_i)$$

where $\lambda(f_i)$ is the cost of function invocations for the $i$-th query stage with a specific memory and processor configuration, and $d(f_i)$ is the billed duration cost of the $i$-th query stage. With synchronous invocation, the duration cost (including waiting time) of the $i$-th stage is $\sum_{j=i}^{n} d(f_j)$, so the total billed cost is

$$\sum_{i=0}^{n} \lambda(f_i) + \sum_{i=0}^{n} \sum_{j=i}^{n} d(f_j)$$

However, compared to asynchronous invocation, synchronous invocation is faster and more reliable, and it is not affected by internal queue throttling. Furthermore, if the query is executed by a single function or the stage chain is shallow, the billed duration of synchronous invocation is lower and less expensive: its payload maximum of 6 MB is 24 times larger than the asynchronous limit of 256 KB, so the synchronous approach requires 24 times fewer invocations, alleviating the single-request-model problem of AWS Lambda. Since each function call is charged for its duration, the asynchronous approach may then incur a higher cost in invocations and duration time.
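The two formulas can be evaluated directly; the sketch below mirrors them for a pipeline of per-stage costs (the numbers in main are illustrative):

/// Billed cost under the asynchronous model: one invocation charge and
/// one duration charge per stage, as in the first equation above.
fn async_cost(invocation: &[f64], duration: &[f64]) -> f64 {
    invocation.iter().sum::<f64>() + duration.iter().sum::<f64>()
}

/// Billed cost under the synchronous model: stage i is also billed
/// while it waits for all downstream stages j >= i (the nested sum).
fn sync_cost(invocation: &[f64], duration: &[f64]) -> f64 {
    let waiting: f64 = (0..duration.len())
        .map(|i| duration[i..].iter().sum::<f64>())
        .sum();
    invocation.iter().sum::<f64>() + waiting
}

fn main() {
    // Illustrative per-stage costs in dollars for a four-stage query.
    let invocation = [2e-7; 4];
    let duration = [0.01, 0.02, 0.05, 0.01];
    println!("async: ${:.6}", async_cost(&invocation, &duration));
    println!("sync:  ${:.6}", sync_cost(&invocation, &duration));
}

§.§ No Coordinator

Migrating streaming applications from a traditional serverful deployment to a serverless platform presents unique opportunities. Traditional serverful deployments rely on existing workflow management frameworks such as MapReduce <cit.>, Apache Spark <cit.>, Sparrow <cit.>, and Apache Flink <cit.> to provide a logically centralized scheduler for managing task assignments and resource allocation. The scheduler traditionally has various objectives, including load balancing, maximizing cluster utilization, ensuring task fairness, keeping track of distributed tasks, deciding when to schedule the next task (or set of tasks), and reacting to finished tasks or execution failures. Serverless computing does not require a traditional serverful scheduler: FaaS providers are responsible for managing the containers or MicroVMs <cit.>, and serverless platforms typically provide a nearly unbounded amount of ephemeral resources. However, existing data systems on FaaS platforms like Starling <cit.> and Lambada <cit.> still require a coordinator to monitor task completion and start new stages once dependencies are met, because they use S3 as the communication medium between functions; they would otherwise have no way of knowing whether the current query stage is complete. Flock completely eliminates the coordinator by putting the function name of the next stage in the current function's cloud context during the client-side query planning phase (see Figure <ref>). When the current function finishes its computation, it simply passes the result in the next function invocation's payload. 1) For asynchronous invocation, if the function terminates abnormally or throws invocation errors, AWS Lambda retries the function.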
Flock configures a dead-letter queue <cit.> on the function to capture, for further processing, events that were not successfully processed. 2) For synchronous invocation, Flock implements a truncated linear backoff algorithm that uses progressively longer waits between retries for rate-limit-exceeded errors. These retries are only required in synchronous invocation, when Flock passes multiple payloads to a single function whose concurrency equals 1; more details are given in Section <ref>. In this approach, the current function regularly re-invokes a failed function, increasing the waiting time between retries until the maximum backoff time is reached. The wait duration is

$$\min(50 \cdot \mathit{increase\_factor} + \mathit{random\_milliseconds},\ \mathit{max\_backoff})$$

The increase_factor starts at 1 and is reset to 1 when $50 \cdot \mathit{increase\_factor}$ exceeds the maximum backoff. random_milliseconds is bounded by 100 ms, which helps avoid cases in which many functions retry at once, sending requests in synchronized waves. In conclusion, removing the query engine's core coordinator makes coding, operation, and maintenance easier while potentially reducing query processing time.
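A minimal sketch of this truncated linear backoff follows, assuming the rand crate for jitter; the structure is ours and only mirrors the formula above:

use rand::Rng;

/// Truncated linear backoff mirroring the formula above:
/// wait = min(50 * increase_factor + random_ms, max_backoff).
struct Backoff {
    increase_factor: u64,
    max_backoff_ms: u64,
}

impl Backoff {
    fn new(max_backoff_ms: u64) -> Self {
        Self { increase_factor: 1, max_backoff_ms }
    }

    /// Returns the next wait in milliseconds. The factor resets to 1
    /// once the linear term alone exceeds the maximum backoff, and the
    /// jitter (bounded by 100 ms) breaks up synchronized retry waves.
    fn next_wait_ms(&mut self) -> u64 {
        let jitter = rand::thread_rng().gen_range(0..=100u64);
        let wait = (50 * self.increase_factor + jitter).min(self.max_backoff_ms);
        if 50 * self.increase_factor > self.max_backoff_ms {
            self.increase_factor = 1;
        } else {
            self.increase_factor += 1;
        }
        wait
    }
}

fn main() {
    let mut backoff = Backoff::new(1_000);
    // A retry loop would sleep for each of these waits between attempts.
    for attempt in 1..=5 {
        println!("attempt {}: wait {} ms", attempt, backoff.next_wait_ms());
    }
}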
§.§ Function Name

The name of a cloud function is made up of three parts:

Function Name: <Query Code>-<Query Stage ID>-<Group Member ID>

The query code is the hash digest of a query. The query stage id is a 2-digit number that represents the position of a stage in the DAG, and the group member id is the position of the function in the group it belongs to. The function name does not include a timestamp, so the created function can be reused by continuous queries without incurring a cold-start penalty. This naming convention guarantees that each cloud function is appropriately identified and categorized under a distinct query, allowing Flock to detect and resolve issues efficiently. The cloud function concurrency is the number of instances or execution environments that serve requests at a given time <cit.>. The first time a function is invoked, AWS Lambda creates a function instance and runs its handler method to process the event. After the invocation has ended, the execution environment is retained for a period of time; if another request arrives, the environment is reused to handle it. However, if requests arrive simultaneously, Lambda scales up the function instances to provide multiple execution environments, and the events are processed concurrently. Each instance has to be set up independently, so each experiences a full cold start. Flock uses reserved concurrency <cit.> to set the maximum concurrency for each function in the query DAG, ensuring that the function can scale on its own while preventing it from growing beyond that point.

§.§ Function Group

Flock sets the default concurrency of 1000 for stateless (non-aggregate, e.g., scan, filter, and projection) functions. Each of them is preferentially executed on a data partition containing the same keys to maximize data parallelism. If the concurrency of a stateful (aggregate, e.g., group-by, sort, and join) function is set to more than 1, Flock is unable to ensure the integrity of the query results: Lambda is likely to spawn multiple running instances to handle payloads from the non-aggregate functions, causing the partial results to diverge and ultimately fail to aggregate. Therefore, for an aggregate function, Flock sets the concurrency to 1, which forces AWS Lambda to create at most one running instance of the aggregation function at any given time. However, one of the key benefits of a serverless query engine is the ease with which it can scale to meet traffic demands, with little to no capacity planning, and setting the concurrency of aggregate functions to one goes against the essence of serverless computing: because there is only one function instance for the current query stage, and AWS Lambda does not yet provide per-instance concurrency <cit.> like GCP Functions to accept concurrent requests on a single running instance, hot spots arise while awaiting the completion of the preceding aggregate task. We propose the function group technique, which creates a set of cloud functions in a group for each query stage after physical plan partitioning, to reduce this hotspot effect. 1) For a non-aggregate function, whose concurrency is 1000 by default, Flock creates only one function member (name) in the group, and AWS Lambda governs its running instances and routes requests to them. 2) For an aggregate function, whose concurrency is set to 1, Flock creates a group consisting of multiple identical functions with different names. Figure <ref> shows an example of how the cloud function group works. Function 0 contains query stage 0 and has the default concurrency of 1000. Lambda starts four instances of function 0, each of which performs a local aggregation and hash-partitions the output into two distinct payloads, the green and blue boxes. Payloads of the same color in different function instances carry the same shuffle id (see Section <ref>), which is used to generate a deterministic random key for consistent hashing <cit.>. The cloud context of function 0 (see Figure <ref>) contains the next function's group name and group size (here, 8). Using the group information, Flock maps payloads associated with the same key on four distinct instances to the same function name, and hence the same instance, in the next stage. This approach distributes shuffling across multiple function instances and guarantees data integrity, since every function's concurrency in the group equals 1. To minimize serial aggregation on the same function instance caused by hash collisions, each function 0 does a single consistent-hash lookup to determine the ring's start point and then maps different data partitions to different function names in counterclockwise order; for example, if the green partitions are mapped to one group member, the blue partitions are mapped to the next member counterclockwise. With consistent hashing, moreover, the hash function is independent of the number of cloud functions in the group. This allows the extended optimizer to dynamically re-route data as functions are added or removed and, hence, to scale on demand to balance hotspots and cold starts[We have yet to implement this feature in the codebase.]. Furthermore, by reading statistics from the state store, function instances can agree on dynamically coalescing shuffle partitions for adaptive query execution <cit.>.

§ FLOCK DATAFLOW PARADIGM

For streaming workloads, data arrives continuously, often from different sources, and is processed incrementally. The processing function does not know when the data stream starts or ends, so this type of data is commonly processed in temporal windows. Flock has native support for tumbling, sliding, and session window functions, enabling users to launch complex stream processing jobs with minimal effort.
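To illustrate the window semantics rather than Flock's internal implementation, the sketch below assigns an event timestamp to its tumbling window and to all sliding windows that contain it:

/// Start of the tumbling window (length `size`, in seconds) containing
/// event timestamp `ts`; each event belongs to exactly one such window.
fn tumbling_window(ts: u64, size: u64) -> u64 {
    ts - ts % size
}

/// Starts of all sliding windows (length `size`, advancing every
/// `slide` seconds) that contain `ts`; an event may belong to several.
fn sliding_windows(ts: u64, size: u64, slide: u64) -> Vec<u64> {
    let latest = ts - ts % slide; // latest window start covering ts
    (0..)
        .map(|k: u64| latest.checked_sub(k * slide))
        .take_while(|w| matches!(w, Some(start) if start + size > ts))
        .flatten()
        .collect()
}

fn main() {
    // A 10 s tumbling window, and a 10 s window sliding every 5 s,
    // matching the N5 configuration used in the evaluation.
    assert_eq!(tumbling_window(1_234, 10), 1_230);
    assert_eq!(sliding_windows(1_234, 10, 5), vec![1_230, 1_225]);
    println!("window assignment checks passed");
}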
The first query stage consists of data-source functions, which keep fetching messages from the stream until a full batch is obtained or the time window expires. Flock's execution plan, unlike that of traditional distributed execution engines, is a dynamic directed acyclic graph that changes over time in the cloud rather than a static one on premises: each stage in the query DAG represents a cloud function group, and AWS Lambda automatically scales the running instances of each stage up and down based on the number of incoming events. However, because shuffled data is sent in pieces via the function invocation's payload, the aggregate function is likely to receive data partitions from many temporal windows. In this section, we seek to answer the following questions: 1) When data partitions of different shuffle operations, or even different queries, are delivered to the same function instance, how do we tell which data belongs to which aggregation? 2) How do we know that an aggregation is complete and it is time to move on?

§.§.§ Payload Structure

The payload has a data field that contains an "on-the-wire" representation of Arrow record batches, and a schema field that defines the tables, fields, relationships, and types of the data carried. The payload also contains metadata about the data: for example, the encoding field provides different compression options, such as Snappy <cit.>, LZ4 <cit.>, and Zstd <cit.>, for compressing the Arrow data; the default is Zstd.

Payload: UUID, EpochID, ShuffleID, Data, Schema, Encoding
UUID: QID, SEQ_NUM, SEQ_LEN
QID: <Query Code>-<Job ID>-<Query Timestamp>

To make shuffling and aggregation more deterministic, Flock marks each payload with a UUID. The QID copies the query code from the function name (see Section <ref>); unlike the function name, it also contains the query start timestamp and the job id, which allows payloads of different queries to be differentiated from one another. The EpochID indicates which micro-batch the current data partition comes from. The payload's UUID also includes SEQ_NUM and SEQ_LEN in addition to the QID: SEQ_NUM is a monotonically increasing number that identifies the payload uniquely within a given set of aggregated data, and SEQ_LEN represents the total number of payloads that need to be aggregated. These two fields let the aggregate function know whether all payloads have been collected for a given job. In the case of a partial aggregate inside a function, the function produces multiple payloads, each of which may be shuffled to a different function in the next function group (see Section <ref>). The Shuffle ID assigns an incremental number to each output payload in the function, and payloads (across function instances) in the same stage that belong to the same partition (range) are allocated the same shuffle id. This is mainly used to distinguish different aggregate tasks of the same query job, since they can all be mapped to the same next function. For example, in Figure <ref>, the green payloads' shuffle id is 1 and the blue payloads' is 2.

§.§.§ Global Arena

Static initialization happens before the query code starts running in the function. Listing <ref> (line 23) shows the code that runs when a new execution environment runs for the first time, and also whenever a function scales up and the Lambda service creates new environments for it. The initialization code is not run again if an invocation lands on a warm instance.
Static initialization is the best place to let a function reuse global resources in the same environment across multiple invocations. The cloud context and memory arena are deserialized or created in the initialization phase and loaded only once per environment, so they are not reloaded on every invocation. The arena is a global versioned hash map that aggregates data partitions outside of the function handler to ensure data integrity for further stream processing. The key is a tuple of the payload's QID and Shuffle ID, and the value holds the payloads received so far; the SEQ_LEN field in each payload indicates the total number of payloads the current session needs to collect. Because AWS Lambda does not yet provide per-instance concurrency <cit.>, Flock decompresses and deserializes data only after receiving all payloads, allowing for maximum parallelization. With asynchronous invocation, the function can receive the same payload several times because Lambda's internal queue is eventually consistent <cit.>. The bitmap field is provided for this reason: it guarantees that each payload is aggregated and processed only once. The bitmap is an index that represents each payload as a single bit to track the aggregation state, so the function can receive the same payload many times without incurring repeated duration expenses. Even if a function's output is empty, payloads carrying just the metadata must still be passed to the next function.

§.§.§ Multi-level Shuffling

Consider the query execution plan for the online auction system in Figure <ref>. Flock divides the query into four stages, and the whole execution flow on cloud functions is shown in Figure <ref>.

Stage 0: This stage reads the upstream streaming data until the time window is reached. Separate cloud functions could be used for the auction and bid data sources, but for simplicity both data sources (Auction and Bid) are read in the same running instance. The repartition operator uses a hash of an expression (the join key) and the number of partitions (here, M=4) to map N input partitions to M output partitions; the data is distributed such that identical key values end up in the same partition, or payload. To deliver the payloads to the next query stage, the function calls the next function 4 times.

Stage 1: Lambda starts four instances, one for each payload to process. Each input payload has a distinct SEQ_NUM ranging from 1 to 4. Each function then performs a local hash aggregation after the hash join and repartitions the result into two output payloads (the green and blue boxes). An output payload inherits the input payload's SEQ_NUM, and shuffle ids are assigned in increments of one based on the payload position. Each function uses the same deterministic seed to generate the same hash key, does a single lookup to establish a starting point in the hash ring, then picks the next function counterclockwise for each payload and calls them in parallel.

Stage 2: The current function, unlike in stage 1, collects multiple input payloads in the global arena (see Section <ref>). Shuffle ids are allocated in the same way for the current function's output payloads. The input's shuffle id, on the other hand, must be carried over as the output's SEQ_NUM so that the next function can determine whether payloads are duplicates that can be aggregated by the same aggregate job.
Stage 3: The third stage produces the output partitions; its next step is a data sink action, executed in the current function, that delivers the result to downstream services.

§ EVALUATION

In our evaluation we seek to answer the following questions in the corresponding sections:
* How do the x86_64 and arm64 architectures affect performance?
* How performant is Flock compared to alternatives?
* How does Flock's operational cost compare to alternatives as query workloads change?

§.§ Experimental Setup

To evaluate Flock's performance and cost, we run our experiments on the following two streaming benchmarks. The Yahoo Streaming Benchmark (YSB) <cit.> is a simple advertisement application whose job is to read various JSON events from Kafka and store a windowed count of relevant events per ad campaign into Redis. The NEXMark benchmark <cit.> is an evolution of the XMark benchmark for an online auction house. NEXMark presents a schema of three concrete tables and a set of queries to run in a streaming sense; it attempts to provide a benchmark that is both extensive in its use of operators and close to a real-world application by being grounded in a well-known problem. The original benchmark was adopted and extended by the Apache Foundation for use in Beam <cit.>, a system intended to provide a general API for a variety of streaming systems. To make things a bit more dynamic, they changed the size of the windows to merely ten seconds, rather than the minutes and hours the original specification sets. They also added more queries <cit.>: q1-q8 are from the original NEXMark queries, while q0 and q9-q13 are from Apache Beam[NEXMark Query 13 is BOUNDED_SIDE_INPUT_JOIN: it joins a stream to a bounded side input, modeling basic stream enrichment.]. We follow the Beam implementation, as it is the most widely adopted one.

§.§ x86 vs. Arm Architectures

This experiment uses AVX2 and Arm Neon intrinsics, relying on Rust SIMD auto-vectorization as well as handwritten Arrow kernels that explicitly employ SIMD intrinsics. We generated 500,000 NEXMark events, comprising 9,995 person events, 29,985 auction events, and 459,770 bid events; each subplot performs a different operation on the events. A function's CPU share is proportional to its allotted memory, which is why we tune the CPU by adjusting the total memory <cit.>. Figure <ref>(a) shows the performance of the Lambda function executing four query operators (filter, join, aggregate, and sort) under the x86 and Arm architectures while varying the function's memory size. Except for aggregate operations, where Arm is 5-10% slower than x86, the billed duration of every operation is lower on Arm: the filter's duration on Arm is 34-77% of the x86 time, the join's 76-91%, and the sort's 61-76%. Furthermore, Arm's duration charge is 20% cheaper per millisecond than x86's: for example, the 1 ms charge for Arm at 512 MB is $0.0000000067, 20% cheaper than the $0.0000000083 for x86 <cit.>. Compared to the traditional x86 architecture in the cloud, Flock on the AWS Graviton2 processor therefore saves more billing cost, owing to both the shorter duration and the lower charge. Figure <ref>(b) shows the difference between NEXMark N5 and N6 under the two architectures. N5 introduces the first use of windowing in NEXMark, requiring a sliding window that computes the hot items over the last 10 seconds, updated every 5 seconds. N6 is the only NEXMark query that makes use of a custom combiner.
Both queries executed for 20 seconds at 1M events per second. N5 runs 14% faster on Arm than on x86 and is 31% cheaper; N6 is likewise 28% cheaper. Compared with x86, Arm is indeed faster and less expensive, so the remaining experiments are run on Arm-based Graviton2 processors.

§.§ Performance Cost

Figure <ref> compares the throughput, query time, billed duration, and hourly cost of Flink and Flock under different configurations, executing 10 million events at 1 million events per second. Flink was deployed on EC2 instances (c4.2xlarge, c4.4xlarge, and c4.8xlarge, respectively) with different numbers of CPU cores and memory sizes. Since EC2 instances are long-running, we set Flink's duration equal to the query time. For Flock, however, the billed duration refers specifically to the execution part of the cloud functions and does not include data preparation and transmission. We configured three memory sizes for Flock's cloud functions: 512 MB, 2 GB, and 8 GB. Flink uses 8 workers, which equals the function concurrency. Flock updates state asynchronously to S3, whereas Flink updates state in local RocksDB <cit.>. To avoid compaction overhead on the EC2 instances, we compared Flock and Flink only with Flink's hashmap state backend enabled. We ran NEXMark queries 1, 2, 3, 5, 7, 10, 11, and 12 in the experiments. N1, N2, and N3 are elementwise queries that are fed a micro-batch of events per second. N5 is a sliding window query that schedules overlapping events from the last 10 seconds and updates every 5 seconds. N7 is a tumbling window query that aggregates events using distinct time-based windows that open and close at 10-second intervals. N10 logs all events to the file system: Flink saves output to the local file system, whereas Flock saves data to S3. N11 is a session window query that groups events of the same user occurring at similar times, while filtering out periods when no data is available. N12 is a tumbling window query with a 10-second interval based on processing time. The c4.2xlarge has 8 vCPUs and 15.0 GiB of memory, but its performance is still far inferior to Flock-512MB. This is because Flink is a Scala-based implementation, whereas Flock is a Rust-based high-performance query engine that includes SIMD and mimalloc <cit.> and is based on Arrow DataFusion <cit.>. When using the c4.4xlarge or c4.8xlarge, Flink generally obtains throughput and query time similar to Flock's. The duration of Flock-512MB on N10 is greater than the query time because events from distinct micro-batches or windows can be processed separately: Flock invokes new cloud functions to process any queued events, so pipeline parallelism hides part of the duration delay and shortens the query response time. The query time cannot drop below 20 seconds, since we produce 20 million events in total and ingest only 1 million events per second. When Flock-512MB runs N7, an out-of-memory error is thrown, because N7 needs 676 MB of RAM to collect, decompress, and deserialize the data. Even though the function does not complete correctly, it is still charged for the time it ran. As illustrated in the hourly cost subgraph, Flock reduces the hourly cost to one tenth while delivering performance similar to Flink's. When the streaming data rate is low, the volume is modest, or the data is queried rarely, Flock's cost performance is more than two orders of magnitude better than Flink's.
§.§ Invocation Payload

Table <ref> shows the latency difference between payload and S3 communication with the Lambda memory size set to 128 MB; it measures communication only, not end-to-end query processing, and excludes the coordinator overhead of state-of-the-art systems such as Starling <cit.>. Figure <ref> therefore compares invocation with payload against S3 communication in terms of latency, duration, and billed cost while varying the number of events on NEXMark Q3. The memory size of the Lambda function is set to 512 MB in this experiment, with Flock-S3's coordinator deployed on the client side. For Flock-S3, the latency minus the duration (about 3 seconds) indicates the overhead of the coordinator and function calls. Flock-S3 is an order of magnitude slower than Flock-Payload due to the round trips between the coordinator and the cloud functions. It also has a billed duration cost one order of magnitude higher than Flock-Payload's, because all S3 reads and writes happen during function execution, and the I/O latency is billed. In the case of Flock-Payload, by contrast, raising the number of events increases the payload size, which affects delivery time and execution time, and hence query latency, but delivery time has no effect on the billed duration. The S3 subgraph shows the cost of using S3 as an external communication medium for query processing. PUT, COPY, POST, and LIST requests are charged $0.005 per 1,000 requests, and GET, SELECT, and all other requests are charged $0.0004 per 1,000 requests <cit.>. For S3 reads and writes, this means we are billed $\sum_t (N_t / 1000) \cdot c_t$, where $N_t$ is the number of requests of type $t$ made during a monthly billing interval within one S3 region and $c_t$ is the corresponding rate per 1,000 requests. Here, because the total number of S3 requests is below 1,000, we are charged $0.0054 outright. Flock-Payload does not use S3 to transmit data between functions, so there is no extra cost. The total cost is shown in the last subgraph, with the integer component coming from S3 communication and the fractional part coming from the duration cost.

§.§ Distributed Query Processing

Compared to query execution on a single function, distributed query execution has a substantial overhead and should be applied only when there is a benefit in doing so <cit.>. Distributed execution splits the query plan into query stages, each handled by a Lambda function or function group. First, it can handle larger volumes of data: since each Lambda function can be allocated at most 10 GB of memory <cit.>, Flock partitions the input data across distinct function instances using hash shuffling, and each instance executes its part of the query independently. Furthermore, since AWS Lambda currently only supports the single-request paradigm, the aggregate function must obtain all shuffled data from the previous query stage serially; distributed execution can dramatically reduce latency by lowering the payload size and the number of aggregations per instance. Figure <ref> shows the latency and billed duration of NEXMark Query 4 and YSB under centralized and distributed execution. We produced 10 million events for NEXMark N4. In centralized mode, Flock invoked the same function instance 65 times to complete N4. In distributed mode, ordinary Lambda functions have the default concurrency of 1000, the aggregate function group has a size of 8, and each group member has a concurrency of 1. Distributed mode reduces N4's latency by 4 times, but its billed duration is indeed 10 times that of the centralized mode.
This is because N4 is divided into four query stages, each of which is invoked multiple times due to shuffling or aggregation, and every function execution results in a billable duration. YSB models a simple ad-accounting environment, where events describing ad views enter the system and those of a certain type are accounted to their associated campaign; every ten seconds, the system is expected to report the total ads per campaign within that window. YSB uses a 10-second tumbling window, and we generate 1 million events per second. Because the ad event attributes are all strings and a single ad event is much larger than a NEXMark event, we raised the Lambda function's memory to 8 GB; the centralized mode cannot process the query in a 2 GB memory environment. The centralized version of YSB has latency an order of magnitude higher than the distributed mode. This, too, is because the ad events are excessively large, requiring Flock to invoke the same function instance 540 times to collect 10 seconds of window data and run the query. On the other hand, the billed duration in distributed mode is comparable to that in centralized mode, implying that distributed query processing has clear benefits on YSB.

§.§ Cold Start

Figure <ref> shows the latency and billed duration while running NEXMark N3 multiple times with different numbers of events per second. A cold start is the first request that a new Lambda instance handles; it takes longer to process because the Lambda service needs to deploy our code and spin up a new MicroVM <cit.> before the request can begin. The first request handled by a Lambda instance also triggers a one-time routine that initializes the Lambda execution context from the cloud environment (see line 23 in Listing <ref>). When the number of events per second is 1K or 10K, both the first and second invocations of the Lambda function have a long delay. The second call's billed duration decreases dramatically, indicating that it does not run on a new instance, yet its overall time rises to 1.6 seconds; the third subgraph, on the other hand, shows only one cold start. We believe this is unexplained behavior of the AWS Lambda infrastructure. As expected, warm runs show a one- to two-order-of-magnitude decrease in latency. For stream processing, as long as the maximum idle time limit is not exceeded, Flock is not troubled by cold starts, because Lambda is almost guaranteed to be warm while the query executes continuously. For workloads that exceed the idle time limit, AWS Lambda supports provisioned concurrency <cit.>, at extra cost, which initializes a requested number of execution environments so they respond immediately to the function's invocations.

§ RELATED WORK

Serverless workflows. Major cloud providers have introduced serverless workflow services <cit.>, which ease the design and orchestration of serverless workflow applications. Netherite <cit.> and Kappa <cit.> are distributed execution engines that offer a high-level programming environment to execute Durable Functions efficiently. These frameworks are complete programming solutions that support advanced features (arbitrary composition, critical sections), but they are not well suited to large, complex analytics jobs, because they involve manually combining operators into a DAG using vendor lock-in APIs.
§ RELATED WORK

Serverless workflows. Major cloud providers have introduced serverless workflow services <cit.>, which provide easier design and orchestration for serverless workflow applications. Netherite <cit.> and Kappa <cit.> are distributed execution engines that offer a high-level programming environment to execute Durable Functions efficiently. These frameworks are complete programming solutions that support advanced features (arbitrary composition, critical sections), but they are not well suited for large, complex analytics jobs, because they require manually combining operators into a DAG using vendor lock-in APIs. For example, Netflix's Conductor <cit.>, Zeebe <cit.>, and AWS Step Functions <cit.> use a JSON schema for authoring workflows, while Fission Workflows <cit.>, Google Cloud Composer <cit.>, and Fn Flow <cit.> are somewhat more code-based, as the schema is constructed in code. Without a query optimizer, such hand-written jobs are error-prone and suboptimal, resulting in significant performance loss, and are seldom reused for streaming workloads. Instead, Flock supports DataFrame and SQL APIs to make streaming computation more accessible to users.

Data passing is a key challenge for chained cloud functions. Pocket <cit.>, Locus <cit.>, and Caerus <cit.> implement multi-tier remote storage solutions to improve the performance and cost-efficiency of ephemeral data sharing in serverless jobs. Cloudburst <cit.> proposes using a cache on each Lambda-hosting VM for fast retrieval of frequently accessed data in a remote key-value store, adding a modicum of statefulness to serverless workflows. Lambada <cit.> and Starling <cit.> use S3 as the exchange operator for shuffling large amounts of data. SONIC <cit.> uses a hybrid and dynamic approach to automatically choose the data passing method (VM- or remote-storage-based) between any two serverless functions. Compared to these state-of-the-practice systems, Flock is the first to build a streaming query engine on cloud function services that passes data through the payload of function invocations, a general solution aimed at major cloud vendors that avoids external communication mediums.

Serverless streaming analytics. Flock shares a similar vision with Orleans' virtual actor model <cit.>, in which an actor is automatically activated on demand, enabling a serverless event-stream processing service with pay-for-use. The work most similar to ours may be Apache Flink Stateful Functions <cit.>. Flink takes care of state and messaging, while the application runs as a stateless Kubernetes deployment or as FaaS functions. Flink's TaskManagers act as coordinators that manage the state, handle the messaging, invoke the stateful functions, and go through a service that routes the resulting messages to the next respective target functions, for example a Kubernetes (load-balancing) service or the AWS request gateway for Lambda. Stateful Functions, which are atomic units of isolation, distribution, and durability, are the building blocks of applications similar to Azure Durable Functions.

Flock is the first cloud-native stream processing engine to execute SQL on Function-as-a-Service (FaaS) across heterogeneous hardware platforms, including x86 and ARM. It enables data shuffling and aggregation without relying on a centralized coordinator or external storage such as S3. The technique also extends to other infrastructure components, such as metadata management in distributed file systems <cit.> and online schema evolution <cit.>, illustrating its potential to influence a broad spectrum of computing paradigms.

§ CONCLUSION

Flock is a step forward in the field of real-time data analytics on FaaS platforms. Its ability to leverage the on-demand elasticity of FaaS and its use of invocation payloads for data passing provide a new approach to stream processing that is cost-effective, low-latency, and scalable.
The elimination of external storage services makes Flock more efficient and easier to use than traditional systems. As FaaS platforms continue to evolve and gain more widespread adoption, we expect to see more organizations embracing Flock and similar systems to perform real-time data analytics.
Interactive Shape Sonification for Tumor Localization in Breast Cancer Surgery

Nassir Navab
Technical University of Munich & Stanford University
January 14, 2024
==============================================================================

About 20 percent of patients undergoing breast-conserving surgery require reoperation due to cancerous tissue remaining inside the breast. Breast cancer localization systems utilize auditory feedback to convey the distance between a localization probe and a small marker (seed) implanted into the breast tumor prior to surgery. However, no information on the location of the tumor margin is provided. To reduce the reoperation rate by improving the usability and accuracy of the surgical task, we developed an auditory display using shape sonification to assist with tumor margin localization. Accuracy and usability of the interactive shape sonification were determined on models of the female breast in three user studies with both breast surgeons and non-clinical participants. The comparative studies showed a significant increase in usability (p<0.05) and localization accuracy (p<0.001) of the shape sonification over the auditory feedback currently used in surgery.

Left to right: (a) Shape sonification concept for breast tumor localization shown on a drawing of the female breast. (b) Top view: yellow indicates the seed location; as the distance to the seed decreases, the frequency of the beating sound increases. Blue represents the tumor shape; a discrete synthesizer sound is triggered whenever the localization probe is above the tumor. (c) A breast surgeon using shape sonification to localize a tumor on an agar breast model during Study 3.

§ INTRODUCTION

Breast cancer continues to be a significant health challenge, with its incidence surpassing that of any other cancer worldwide. In 2018 alone, over two million new cases of breast cancer were diagnosed, accounting for 23% of all global cancer cases.
The most common treatment for women diagnosed with breast cancer is a lumpectomy, a breast-conserving procedure that aims to remove the tumor and a small margin of surrounding healthy tissue <cit.>. Male breast cancer is a rare condition, accounting for less than 1% of all diagnosed breast cancer cases <cit.>. Mastectomy has been the standard surgical approach for male breast cancer <cit.>. However, recent studies have suggested that lumpectomy can be an oncologically safe treatment option for male patients as well <cit.>. Despite its effectiveness, a lumpectomy has limitations, including the potential for tumor-positive margins, i.e., cancerous tissue left inside the breast after the initial procedure. In this case, reoperation is necessary <cit.>. Reported reoperation rates range from less than 10% to more than 70% <cit.>. Another common challenge during lumpectomy is overexcision, where surgeons remove more tissue than necessary, leading to poorer cosmetic outcomes. While surgeons aim to remove the entire tumor with a 1-cm margin, resection volumes have been reported to be 2.3 to 2.5 times larger than the optimal resection volume <cit.>.

Breast cancer localization systems have been developed to make excision more precise and thus decrease the rate of reoperations. Some localization systems, such as the Savi Scout® (Merit Medical Systems, Jordan, UT, USA), use sound feedback to convey the location of the tumor to the surgeon <cit.>. However, only the location of the seed, a small marker implanted into the tumor prior to surgery, is sonified. Although the tumor size and location can be visualized using medical imaging, these images are obtained prior to surgery. Due to the deformability of breast tissue, the location and shape of the tumor will have changed between the position of the breast during preoperative imaging and its position during surgery. This introduces a potential margin of error for the surgeon in determining the tumor's actual shape and margin location at the time of surgery, even with the use of current localization systems.

This work aims to evaluate whether sonification of both the seed location and the tumor shape is effective in improving breast cancer localization accuracy and overall system usability. Novel auditory displays encoding both seed and margin location were evaluated (Figure <ref>). Beyond breast cancer surgery, these sonification strategies can be extended to various surgical tasks or even find application in multimodal user interfaces. Our work provides the following main contributions:
* We introduce shape sonification into surgical guidance systems.
* We present an auditory display for breast cancer localization using multi-parameter sound mapping for simultaneous encoding of shape information (tumor margin) and point location (seed).
* We report results from three user studies with four breast surgeons and 33 non-clinical participants comparing shape sonification to the current clinical sound feedback.
* We provide evidence that shape sonification has the potential to improve the usability and accuracy of surgical localization tasks.

§ RELATED WORK

The work presented in this paper most closely relates to three areas of research, which the following section summarizes: auditory displays with a focus on shape sonification, recent experiments in sonification for surgical guidance, and breast cancer localization approaches.
§.§ Shape Sonification

Previous research in psychoacoustics and sensory substitution has shown that the visual sensory channel can be substituted by the auditory channel for certain tasks. A study by Auvray et al. <cit.> showed that objects can be recognized by their auditory representation and that auditory cues can be used for localization tasks. Meijer et al. <cit.> conducted an experiment on auditory image representation: they converted images into corresponding sound patterns, demonstrating that auditory representation can effectively retain visual image information. Gerino et al. <cit.>, who evaluated multiple two-dimensional (2D) sonification techniques for shape recognition on touchscreens, showed that sonification is effective at conveying shape information to users. They introduced a shape sonification task that asked the user to explore an invisible 2D shape (e.g., a triangle or a rectangle) by moving their index finger along a line on a touch screen. For each position of the finger on this one-dimensional line, a "cross-section" of the shape was sonified and played to the user. Another study on geometric shape recognition by van den Doel et al. <cit.> focused on sound feedback for blind people. Their system mapped images of basic geometric shapes to sound. During their study, the user explored the virtual image by moving a pointer over a graphics tablet (Wacom tablet) and was later asked to draw the perceived shape. A tablet-based system was also used in a showcase by Sanchez <cit.>, enabling sound-based curve and shape recognition through pen and finger movement. Tommasini et al. <cit.>, while also exploring invisible geometric shapes via sound feedback, were interested in quantifying the dynamic movements of a computer mouse during shape exploration. They showed differences in exploration strategies for the distinct geometric shapes. A study evaluating shape recognition via gestures asked users to discriminate between concave and convex curves in three-dimensional (3D) space using sound <cit.>. Our work introduces a new use case for shape sonification, namely breast cancer localization. The proposed system sonifies both the tumor shape and the location of a marker inside the tumor.

§.§ Surgical Sonification

Auditory displays are a promising means of guidance during medical procedures when visual focus is required on the surgical site and a visual overlay in Augmented Reality (AR) might distract from the surgical task. Among the works introducing sonification as an alternative or addition to visual surgical guidance systems are Matinfar et al. <cit.>, Roodaki et al. <cit.>, and Schütz et al. <cit.>. Matinfar et al. <cit.> presented a four-dimensional sonification for surgical instrument alignment during pedicle screw placement. Schütz et al. <cit.> proposed an audiovisual guidance system for coil placement during transcranial magnetic stimulation by means of position and angle sonification. Roodaki et al. <cit.> evaluated the influence of sound feedback on the performance of medical precision tasks, such as needle placement in eye surgery. The audio guidance encoded location and angle information about the needle to the user. Their study showed increased angle alignment accuracy over a visual medical guidance system. All three works showed that sonification is as effective as visualization in conveying tool location and angle information. Two other studies presenting an auditory display for needle placement are Black et al. <cit.> and Bork et al. <cit.>.
Black et al. <cit.> presented an auditory synthesis model using pitch and stereo panning parameter mapping for navigated needle placement, while Bork et al. <cit.> introduced an audiovisual AR system to improve the perception of occluded anatomy in 3D. The latter study showed enhanced needle placement accuracy when using auditory and visuotemporal guidance. Another study evaluating the benefit of sonification in a surgical setting mapped the spatial position of a surgical tool tip with respect to a target location by encoding the direction and distance to said target position in the auditory display <cit.>. Directionality was represented via sound pitch and distance to the target via beat frequency. Unlike the studies presented in this section, which applied sonification approaches to surgical tool location and orientation, our work focuses on shape sonification. We present a novel sonic interaction with the tumor: instead of sonifying the surgical tool's alignment with some pre-planned position, we sonify the anatomical target itself.

§.§ Breast Tumor Localization Techniques

Current intraoperative localization methods for non-palpable breast lesions rely on medical imaging such as mammography, ultrasound, or magnetic resonance imaging (MRI) <cit.>. Although the acquired imaging data is volumetric, scans are commonly presented to the surgeon as a series of 2D multiplanar images and thus demand experienced mental registration of these images to the patient to locate the tumor. Image-guided tumor localization is furthermore challenging because the patient's position during diagnostic imaging differs from their position during the surgical procedure. For instance, a breast MRI is typically acquired with the patient lying face down (prone), while a mammogram is taken with the patient standing and the breast compressed between two plates. In contrast, actual lumpectomy surgery is performed with the patient lying on their back. To ensure precise localization even after the position is changed, a trackable marker is embedded directly into the tumor, which can be localized regardless of breast tissue deformation. While wire-guided localization has been a common approach, its limitations prompted the development of a wireless alternative using radioactive seeds, which, however, requires substantial training for the safe handling of radioactive material <cit.>. Several radiation-free and wireless methods, including the use of radiofrequency identification tags, magnetic seeds, and infrared reflectors, have emerged <cit.>. These markers can be inserted at the time of pre-operative biopsy. An exemplary radiation-free localization system is the Savi Scout®. It involves a 12×1.6 mm electromagnetic wave reflector (seed) activated by infrared light impulses generated by the console probe, and two antennas that allow the reflection of the electromagnetic wave signal back to the probe <cit.>. The system provides real-time proximity information between the detection probe and the seed through auditory feedback <cit.>. A recent analysis of over 800 cases revealed a significant decrease in the reoperation rate to 12.9% when using the Savi Scout® <cit.>. Although marker-based localization systems can help determine the location of the seed within the tumor, they do not provide information about the tumor margin location or its overall shape. Inferring these details from preoperative medical images can be a cognitively demanding and error-prone task.
To address this challenge, several studies have explored the use of AR to assist with breast tumor localization <cit.>. Perkins et al. <cit.> demonstrated a Mixed Reality surgical planning system for breast tumor targeting in seven patients. The location of the tumor was marked on the skin of the breast by tracing the virtual tumor shape. In a preliminary 2D perceptual task, they evaluated the tracking accuracy of the localization system and reported the overlap of the drawn and the virtual shape using the Dice coefficient (ranging from 0.56 to 0.95). For the patient study, they reported a Dice coefficient of about 0.2 when comparing the shape drawn using a HoloLens to the ground truth shape <cit.>. Unlike Perkins et al. <cit.>, who introduced a planning system to be used before incision, Gouveia et al. <cit.> performed a live lumpectomy using AR guidance. Lan et al. <cit.> proposed the use of a fiber optoacoustic guide in combination with a tablet-based AR interface for lumpectomy. Their study showed a localization accuracy of 0.25 mm for the tip of the fiber optoacoustic guide; however, no information about the tumor shape was presented to the user. Another recent work in the field of breast tumor localization introduced an AR visualization system for ultrasound breast biopsy, which offers segmented lesion visualization <cit.>. Their system demonstrated improved accuracy, as evidenced by the distance from the needle tip to the lesion being reduced to 5.09 mm. Unlike the above studies presenting visual augmentation approaches, we propose to add to state-of-the-art practice in breast tumor localization by providing an auditory display for seed-based localization systems. We aim to enhance tumor excision precision by providing information about the tumor margin location in addition to the seed location sonified in current solutions.

§ EXPERIMENTS

We conducted three user studies to investigate whether shape sonification can increase the usability and accuracy of the tumor localization task in lumpectomy. Repeated testing helped us incrementally refine the shape sonification design. For the first study, the use context was abstracted to a 2D plane (Figure <ref>(a)). The second study evaluated a refined shape sonification design on a silicone model of the female breast (Figure <ref>(b)), and the third study tested the final sonification design on 16 agar breast models with breast surgeons (Figure <ref>(c)). All studies were approved by the university's Institutional Review Board (IRB) office. In particular, we sought to test the following hypotheses:
* Hypothesis 1 (H1): Shape sonification (Rhythm, Synth, Sine) significantly increases localization accuracy compared to the current clinical auditory feedback (Beep).
* Hypothesis 2 (H2): Shape sonification (Rhythm, Synth, Sine) significantly enhances the usability of the auditory display compared to the current clinical auditory feedback (Beep).
* Hypothesis 3 (H3): Shape sonification (Sine) significantly decreases the amount of excess healthy breast tissue resected compared to the current auditory feedback (Beep).

§ STUDY 1 - 2D PLANE

We conducted a repeated-measures within-subject study to compare two shape sonification designs to the current clinical auditory feedback.

§.§ Participants

Twelve volunteers took part in the study, five women and seven men.
The participants had an average age of 31 years, with a standard deviation of 9.74 years. The study included three postdoctoral researchers, one radiologist, three medical students, three bioengineering students, and two students in product design. None of the participants reported having any hearing impairments or prior experience in using interactive sonification for shape localization.

§.§ Apparatus

§.§.§ Hardware and Software Setup

To enable the sonification study, we set up a surgical localization system. The system consisted of a Savi Scout® radar localization probe, an electromagnetic (EM) tracking system (trakSTAR™, NDI, Waterloo, Ontario, Canada; a six degrees-of-freedom tracking solution) with a mid-range transmitter, and a GPU-accelerated laptop running the sonification and tracking software. The transmitter of the EM tracking system was placed on a table and faced the area where the user performed the instructed task (Figure <ref>(a)). The EM tracking system was connected to the laptop via USB, and the open-source library Plus Toolkit (https://plustoolkit.github.io/) was used for real-time streaming of the position tracking and sensor data to Unity (https://unity.com/; Long Term Support release 2020.3.21f1). In Unity, the distance between the radar localization probe and a virtual 3D model of a tumor was computed using the closestPoint() method, which returns the closest point on the surface of the tumor object to the probe tool tip as a point in 3D space. The distance between this closest point on the surface of the tumor and the tip of the probe, as well as the distance between the probe and the virtual seed, were streamed to Wekinator (http://www.wekinator.org/), an interactive machine learning system for music composition <cit.>. Wekinator used a Neural Network to map the distance measures from Unity to the output parameters used for sound synthesis in ChucK (https://chuck.stanford.edu/), an audio programming language for real-time sound synthesis <cit.>. Unity, Wekinator, and ChucK communicated via Open Sound Control (OSC), a network protocol for interactive computer music <cit.>.
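For illustration, the core of this pipeline can be sketched in a few lines of Python using the python-osc package (the actual system streamed from Unity). Wekinator's input port (6448) and OSC address (/wek/inputs) are its documented defaults; the point-cloud stand-in for Unity's closestPoint() method is our simplification.

```python
import time
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

# Wekinator listens for input features on port 6448 at /wek/inputs by default.
client = SimpleUDPClient("127.0.0.1", 6448)

def margin_distance(probe_tip: np.ndarray, surface_points: np.ndarray) -> float:
    """Distance from the probe tip to the nearest tumor-surface point
    (a point-cloud stand-in for the closest point on the mesh)."""
    return float(np.linalg.norm(surface_points - probe_tip, axis=1).min())

def stream(probe_poses, surface_points, seed_position, rate_hz=30):
    """Send the two distance features for each tracked probe position."""
    for probe_tip in probe_poses:  # e.g., positions from the EM tracker
        d_margin = margin_distance(probe_tip, surface_points)
        d_seed = float(np.linalg.norm(probe_tip - seed_position))
        client.send_message("/wek/inputs", [d_margin, d_seed])
        time.sleep(1.0 / rate_hz)
```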
§.§.§ Tracking and Registration

While the Savi Scout® system provides audible feedback on the proximity of the probe to the implanted seed, it does not provide the numerical information on position and orientation required for our study. Therefore, we needed a tracking method providing the real-time position and orientation of the probe to train the sonification model accurately. Clinical navigation systems often use external tracking hardware. Besides optical tracking solutions, EM tracking is a popular alternative. EM tracking detects the position and orientation of a wired sensor inside a magnetic field created by a field transmitter. While the accuracy of EM tracking is susceptible to the presence of ferrous metals and conductive materials that may distort the magnetic field, we chose EM tracking over optical tracking to avoid line-of-sight issues <cit.>. The Savi Scout® probe was tracked using a 0.9 mm diameter, six degrees-of-freedom sensor at the tip of a shielded flexible cable. We fixed the position of the EM sensor on the probe using a 3D-printed holder. We also 3D printed a custom board to hold a 15×15 cm white drawing paper in place. The OBJ files of the 3D prints were imported into Unity for alignment. We performed a rigid registration of the virtual board to the 3D-printed physical board. First, four fiducial markers were placed on the physical board by tapping the corners with the tracked probe. Then, four corresponding virtual fiducials were placed onto the corners of the virtual board in Unity to achieve the rigid registration. The physical board was fixed to a table, allowing the setup to stay steady during the experiment.

§.§ Sonification

The distance-to-sound mapping was done in Wekinator using a multilayer perceptron Neural Network with two hidden layers per input parameter. The model was trained on 80 to 100 recordings of the probe position for each sonification model, together with the manually set desired sound parameter values. For the purpose of this study, three sonification models were created: one mimicking the clinical status quo (Beep) and two newly proposed shape sonifications (Rhythm, Synth).

§.§.§ Beep (status quo)

The distance between the tip of the probe and the location of a virtual seed (point) inside the tumor object was used as input to the Neural Network. Two output parameters were mapped to the volume and frequency of a beeping sound (Figure <ref>(a)). The baseline beep sonification used a sine wave oscillator at 440 Hertz (Hz). As the distance between the probe and the seed decreased, the frequency of the beeps increased. This condition imitated the current sound feedback used by the Savi Scout® system. No information about the tumor shape or margin location was included in this condition.

§.§.§ Rhythm

The first of the two proposed shape sonification models used both the distance to the tumor margin and the distance to the seed as input parameters to convey both the shape and point information within the same beating sound (Figure <ref>(b)). Rhythm used a constant beat with a change in instrument to indicate the tumor margin and seed location to the user. The sound was synthesized using the ModalBar instrument in ChucK. The pitch was set to 330 Hz. When the probe was positioned above the tumor border, our system synthesized a distinct marimba beat. When the probe was above the seed location, a higher-pitched and clearer xylophone beat was played. For any location inside the tumor between the border and the seed, the two sounds of the marimba and xylophone beat were interpolated. Four output parameters from Wekinator were mapped to the volume and timbre of the two beat sounds.

§.§.§ Synth

The second shape sonification used the same input parameters as Rhythm to create two distinct sounds: a continuous sound to represent the shape and a beat to represent the seed location (Figure <ref>(c)). This sonification separated the areas of interest into discrete zones. The Synth sonification model played a continuous synthesizer sound (musical note C4) when the probe was above the tumor area and no sound when the probe was outside the area. For the seed location, a ticking beat sound (ChucK ModalBar at 660 Hz) was synthesized to indicate the point of interest. Three output parameters from the Neural Network were mapped to the volume and timbre of the ticking seed sound and the volume of the continuous tumor sound.
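As a rough illustration of the Synth mapping, the sketch below replaces the trained Wekinator network with a hand-tuned rule; the 30 mm audible range and the linear fade are our choices, not values from the study.

```python
def synth_parameters(d_seed_mm: float, inside_tumor: bool,
                     audible_range_mm: float = 30.0):
    """Map the probe state to Synth's three outputs (all in [0, 1]):
    tumor-sound volume, seed-beat volume, and seed-beat timbre."""
    # The continuous C4 sound plays only while the probe is above the tumor.
    tumor_volume = 1.0 if inside_tumor else 0.0
    # The seed beat fades in (and brightens) as the probe approaches the seed.
    closeness = max(0.0, 1.0 - d_seed_mm / audible_range_mm)
    return tumor_volume, closeness, closeness
```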
§.§ Procedure

The three sonifications (Beep, Rhythm, Synth) were presented to each participant in random order. We began each study session by administering a pre-task questionnaire, including questions on demographic background, medical expertise, and video game experience. After the pre-task questionnaire, the tutorial phase began. The localization task and the respective sonification concept were explained to the participant, followed by a trial run that was not time-constrained. The tutorial phase was followed by the testing phase. The volunteer was tasked with marking the tumor margin and seed location on a paper sheet based on the sound feedback. The paper was exchanged after each tumor. Since the Beep condition lacked information about the tumor shape, the participant received a piece of paper with a picture of the tumor and seed showing the seed location relative to the tumor. For each sonification, the participant was asked to localize six shapes randomly selected from a pool of 15. Once six tumors were located, a post-task questionnaire was filled out by the participant. The questionnaire consisted of a raw NASA-TLX (Task Load Index) questionnaire <cit.> to evaluate the participants' perceived task load, a System Usability Scale (SUS) <cit.> to assess the user-perceived usability of the sonification, and several study-specific questions aimed at collecting qualitative feedback regarding their subjective impressions of the sonifications. This process was then repeated twice more for the remaining sonifications.

§.§ Data Analysis

To evaluate the accuracy of the shape sonification, we captured images of the 15 different tumor shapes in Unity in 2D view. The shapes' size, orientation, and location with respect to the drawing paper were visible in the images. The images were resized to 15×15 cm to match the size of the paper. In the next step, we used Matlab to segment the images of the virtual tumor shapes to obtain a ground truth (Figure <ref>). By applying a threshold to the red color channel, we obtained the segmented shapes of the ground truth tumors. Similarly, the corresponding ground truth for the seed locations was extracted from the Unity images. The 15×15 cm sheets of paper used by the participants to draw the tumor shapes were scanned and later segmented in Matlab. Segmentation of the drawn tumors and corresponding seeds was achieved by finding connected components in the binary images using 8-connected neighborhoods. The tumor in each image was isolated as the largest circular connected component, while the corresponding seed was isolated as the largest connected component inside the drawn tumor area (Figure <ref>). To isolate the drawn tumor, circularity was calculated using the following equation:

Circularity = (4πA / P²)(1 − 0.5/r)², where r = P/(2π) + 0.5,

where A represents the area of the object, P represents the perimeter, and r is a radius parameter estimated from the perimeter.

The Sørensen–Dice coefficient, area ratio, and intercentroid distance were calculated to compare the drawn shape (DS) to the ground truth (GT) using the following equations <cit.>:

Sørensen–Dice = 2|DS ∩ GT| / (|DS| + |GT|)
Area Ratio = Area_DS / Area_GT
Intercentroid Distance = √((x_DS − x_GT)² + (y_DS − y_GT)²)
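The same measures are straightforward to compute from binary masks; the following NumPy sketch mirrors the Matlab analysis (array conventions and function names are ours):

```python
import numpy as np

def dice(ds: np.ndarray, gt: np.ndarray) -> float:
    """Sørensen–Dice coefficient of two boolean masks."""
    return 2.0 * np.logical_and(ds, gt).sum() / (ds.sum() + gt.sum())

def area_ratio(ds: np.ndarray, gt: np.ndarray) -> float:
    """Area of the drawn shape relative to the ground truth."""
    return ds.sum() / gt.sum()

def intercentroid_distance(ds: np.ndarray, gt: np.ndarray) -> float:
    """Euclidean distance between the two mask centroids (in pixels)."""
    c_ds = np.argwhere(ds).mean(axis=0)
    c_gt = np.argwhere(gt).mean(axis=0)
    return float(np.linalg.norm(c_ds - c_gt))

def circularity(area: float, perimeter: float) -> float:
    """Circularity with the perimeter correction used above."""
    r = perimeter / (2.0 * np.pi) + 0.5
    return (4.0 * np.pi * area / perimeter**2) * (1.0 - 0.5 / r) ** 2
```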
§.§ Results

§.§.§ Accuracy

The accuracy data measured using the Sørensen–Dice coefficient (1), area ratio (2), and intercentroid distance (3) were not normally distributed. Twelve outliers were removed using the interquartile range method. Since the study followed a within-subject design, a Friedman test was used to determine significance between conditions for the three measures of accuracy. The Friedman test showed a significant difference between conditions for the Dice coefficient (p<0.001) and the area ratio (p<0.001). No significant difference in intercentroid distance between sonifications was found (Figure <ref>(c)). A paired Wilcoxon test with Holm correction was chosen for comparisons between conditions. The pairwise tests showed a significantly increased Dice coefficient (p<0.001) for both proposed shape sonifications (Rhythm and Synth) over the point sonification (Beep) (Figure <ref>(a)). Likewise, a significantly smaller area ratio (p<0.001) was recorded for the novel shape sonifications over Beep (Figure <ref>(b)). The significant increase in Dice coefficient shows that the shape sonifications yield a larger overlap between the ground truth tumor shape and the drawn tumor shape, indicating increased localization accuracy when using shape sonification. Means and standard deviations of all accuracy measures are summarized in Table <ref>.

§.§.§ Usability

The responses to the NASA-TLX questionnaire on subjective workload were normally distributed. No values fell outside the mean ± 3 SD, so no outliers were excluded from the sample. A repeated-measures ANOVA showed no significant difference between conditions for overall task load. Among the NASA-TLX subscales, a significant difference was found for the performance scale: paired-samples t-tests showed a significant reduction (p<0.05) on the performance subscale for Rhythm and Synth compared to Beep. A Shapiro-Wilk test showed that the SUS questionnaire data were not normally distributed. The interquartile range method was used to identify and remove one outlier. A Friedman test indicated a significant difference between conditions (p<0.001). A post-hoc paired Wilcoxon test with Holm correction was used to compare sonifications. The pairwise Wilcoxon test showed that the Synth sonification's usability was perceived as significantly better than Beep (p<0.001) and Rhythm (p<0.05). A comparison of Rhythm and Beep did not yield significant differences in usability scores. The Beep condition received the worst usability rating. Table <ref> lists means and standard deviations of the NASA-TLX and SUS scores for all sonifications.

§.§.§ Qualitative Feedback

We asked the participants to rank the three sonifications. 75% of the participants ranked Synth first, and 58% ranked them in the order Synth, Rhythm, Beep. They elaborated that the continuous sound representing the tumor shape and the ticking sound representing the seed were more easily distinguishable in the Synth than in the Rhythm sonification. To quote three participants: "I really like the consistent [Synth] sound, it made the boundary easier to find.", "The [Rhythm] sounds were very pleasant, but I got confused about which tone indicated the seed.", "[Rhythm] was hard to use to trace the outline.". To improve the most preferred Synth sonification, three participants suggested adding a "distinct sound in the exact location of the seed" to increase localization accuracy. The Beep sonification was perceived as "fast and intuitive"; however, it provided no "tumor shape information nor a direction to seed from [the] current position".

§.§ Discussion

Our results show that the use of shape sonification leads to a significant increase in localization accuracy. The Sørensen–Dice coefficient indicates a greater overlap of the drawn tumor and the ground truth tumor for both novel shape sonifications compared to the clinical standard simulated by the Beep sonification. Besides measuring shape overlap, we also evaluated the area ratio.
We observed a substantially decreased area ratio of drawn shapes over ground truth shapes for both shape sonifications compared to the status quo sonification. Because breast surgeons describe their mental model of the tumor location at the time of initial localization as a 2D shape on the surface of the breast, the preliminary sonification models were first tested on a 2D plane. However, this experimental setup deviates from the real scenario, in which the system is used on the curved surface of the breast.

§ STUDY 2 - 3D SURFACE

A second study was conducted to evaluate the auditory displays on a model of the breast. By using a breast phantom, the experimental setup simulated the surgical scene, making the results more applicable to clinical research.

§.§ Participants

A total of 21 volunteers with a mean age of 26.5 ± 3.4 years participated in the study. Twelve identified as men and nine as women. The study included a radiologist, three postdoctoral researchers in Radiology, one medical student, nine graduate students in Bioengineering, four students studying Management, and three students specializing in Product Design. None of the participants reported having any hearing impairments. Four participants indicated prior experience in using the interactive shape sonification.

§.§ Apparatus

This study used the same hardware and software setup as Study 1, except for the use of a silicone breast model instead of a paper sheet for task execution (Figure <ref>(b)). For fabrication of the silicone breast phantom, we first poured a mold using casting silicone (Pixiss, Grand Rapids, MI, USA). The Breast Self Examination Model 1 (American 3B Scientific, Tucker, GA, USA) acted as the male part in the mold-making process. After the mold had cured for 12 hours, we used Ecoflex™ 00-20 (Smooth-On, Macungie, PA, USA), a "stretchy" silicone mixed with beige silicone pigment, to pour the breast phantom. Silicone was chosen to replicate the physical properties of breast tissue, namely deformability and flexibility. A digital 3D model of the breast was segmented from an MRI scan of the breast phantom. The OBJ file was imported into Unity for registration. In order to facilitate the marking of the tumor shape on the model's surface, we covered the phantom with cling wrap. After marking two to three tumors, we replaced the cling wrap to prevent overlapping markings. To secure the breast phantom in place, we used a custom 3D-printed board with a recess designed to match the breast model's shape. Both the OBJ files of the board and the breast were imported into Unity. The same rigid registration process as in Study 1 was performed to align the virtual and real experiment setups.

§.§ Sonification

The sonifications were implemented the same way as in Study 1. For this study, the status quo auditory feedback (Beep) was compared to one new shape sonification (Sine).

§.§.§ Beep (status quo)

Again, only the distance between probe and seed was sonified by Beep (Figure <ref>(a)). The sonification model from Study 1 was slightly changed to enhance its similarity to the current clinical sound feedback. Instead of the sine wave oscillator, the ChucK ModalBar instrument was used at 200 Hz. In addition, the frequency of the beat was increased.
Three output parameters from Wekinator were mapped to sound volume and beat frequency.

§.§.§ Sine

Once more, the distances from the probe to the tumor margin and to the seed were mapped to sound parameters to enable the Sine shape sonification (Figure <ref>(b)). As Synth had performed best in Study 1, we decided to further refine it for Study 2. Qualitative feedback from that study indicated that Synth had led to frustration during rough tumor localization, caused by the absence of sound feedback whenever the probe was outside the tumor area. This led us to include a beat sound in Sine to indicate the probe's distance to the seed even outside the tumor area. Similar to Beep, the beat frequency increased as the probe approached the seed. The well-received discrete sonification of the tumor shape via a continuous synthesizer sound was kept in the Sine sonification. Three output parameters from Wekinator were mapped to the volume and beat frequency of the seed sound as well as the volume of the tumor sound.

§.§ Procedure

The two conditions (Beep and Sine) were presented to each participant in random order. We again began the study session by administering the pre-task questionnaire, followed by the tutorial phase. During the testing phase, the volunteers were tasked with marking the tumor margin, including the seed location, on the silicone breast model. Once eight tumors were located, the participants filled out the post-task questionnaire. The same process was repeated for the second condition.

§.§ Data Analysis

The same measures as in Study 1 were collected. Due to the shift in experimental setup from a 2D to a 3D surface, the locations of the margin and seed were recorded by tracing the drawn markings with the tracked probe. These locations were stored in the Unity coordinate frame and later processed in Matlab. To obtain the ground truth (GT), we projected the virtual 3D tumors located inside the virtual breast bottom-up onto the breast model's surface in Unity. We stored this projection for each of the 15 different tumor objects. To compute the Sørensen–Dice coefficient, area ratio, and intercentroid distance in Matlab, the 3D GT and the 3D marking recording (Figure <ref>) were projected onto the 2D plane that best fits the 3D shape. The plane is calculated by minimizing the sum of the squared distances between the plane and the points constituting the 3D shape, and the fit is performed by computing the eigenvalues and eigenvectors of the point distribution.
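The best-fit plane described above is the principal-component plane of the point cloud; below is a short sketch of the fit and projection (our implementation, equivalent in spirit to the study's eigen-decomposition):

```python
import numpy as np

def project_to_best_fit_plane(points: np.ndarray) -> np.ndarray:
    """Project Nx3 points onto the plane minimizing the sum of squared
    point-to-plane distances, returning Nx2 in-plane coordinates.

    The plane passes through the centroid; its axes are the two leading
    principal directions of the point distribution."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Rows of vt are the principal directions; vt[2] is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```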
§.§ Results

§.§.§ Accuracy

A Shapiro-Wilk test showed a non-normal distribution for all accuracy measures in Study 2. Eight outliers were removed using the interquartile range method. A Wilcoxon signed-rank test was used to determine significance. The shape sonification (Sine) achieved a significantly better (p < 0.001) Dice coefficient (Figure <ref>(a)) and a significantly reduced (p < 0.001) intercentroid distance (Figure <ref>(b)) compared to Beep. The analysis showed no significant difference (p = 0.67) in area ratio between Beep and Sine (Figure <ref>(c)). Thus, the shape sonification (Sine) improved the accuracy of the localization task for both the tumor margin (Dice coefficient) and the seed location (intercentroid distance).

§.§.§ Usability

A Shapiro-Wilk test showed normality of the NASA-TLX data. Values outside the mean ± 3 SD were identified as outliers, and one sample was excluded from the data. A paired-samples t-test showed no significant difference between sonifications for the overall NASA-TLX score (p = 0.08). Among the NASA-TLX subscales, only 'Effort' showed a significant difference: Beep required significantly less (p < 0.05) effort than Sine. The SUS results were not normally distributed. Zero outliers were identified using the interquartile range method. A Wilcoxon signed-rank test did not show significance. The accuracy and usability measures' means and standard deviations are reported in Table <ref>.

§.§.§ Qualitative Feedback

The lower system usability rating of Beep was also reflected in the qualitative feedback from the post-study questionnaire. Participants enjoyed that Beep was "intuitive and easy to use", "much faster", and "not mentally challenging". However, they disliked that no information about the size and shape of the tumor was provided. One participant wrote that Beep "lacks precision and information" and further mentioned that this results in a "lack of confidence using it". Another participant said: "I was not confident about my drawings, especially the size and boundaries of the tumor.". Frustration was also expressed by another volunteer, who stated: "Discriminating between when I was near the seed vs when I was actually on it was rather taxing. It was also taxing to mentally translate the margin area through visuals alone.". When asked how they would improve the Beep sound feedback, a few people suggested distinguishing between being near the seed and being directly on top of it. 86% of the participants indicated that they preferred the shape sonification (Sine) over Beep. The participants positively mentioned the Sine sonification's pleasant sound, the accuracy gained through the information on the tumor margin, and the distinct nature of the two sounds. They reported liking the "clear separation along the border", the "continuous positive feedback if within the border", and that the "seed detection is easily distinguishable from the closeness sound". One participant also mentioned: "It's very fun to use and offers more accuracy in identifying the tumor boundaries and seed.". As a negative, the participants pointed out the system's latency, which caused a slight delay in the onset of the sound. They further missed directional information: "I did not like that I got no directional cues - am I bottom left? am I top right?". When asked about ways to improve Sine, two participants suggested making the tumor margin even more distinguishable: "The sound of the outline needs to be more obvious.", "I would [...] make the device more sensitive to the outline of the tumor.".

§.§ Discussion

To summarize, the Sine shape sonification resulted in significantly improved localization accuracy over Beep. The task load data showed that Beep required significantly less effort to use than Sine. Although Sine received a higher system usability rating, no significant difference in usability was found. However, given that shape sonification introduces another parameter and thus adds complexity to the Sine condition, an increase in effort and a slight reduction in usability are not surprising. Furthermore, shape sonification inevitably increases task time, as the outline must be carefully constructed from the auditory feedback. One should investigate whether the sonification design or the prolonged task time is the primary driver of reduced usability in this scenario.
Eliminating the slight latency present in the technical setup should additionally improve the system's usability. Although the breast model enabled a replication of the real use scenario, the silicone was not soft enough to provide realistic breast tissue properties. A study aiming to yield clinically applicable results should look into alternative breast models. While this study included volunteers with medical expertise, no breast surgeons were present. To gain insights into how the sonifications are perceived by the intended users of the system, we conducted a third and final study with breast surgeons.

§ STUDY 3 - 3D SURFACE WITH BREAST SURGEONS

To further inform the development of the new auditory display and to test the system's suitability for the surgical task, we evaluated the refined shape sonification with the system's intended users: breast surgeons. In contrast to our prior studies, we also determined the shape sonification's influence on the accuracy of tumor excision.

§.§ Participants

A total of four attending breast surgeons (mean age 44 ± 9 years), all women, participated in the study. None of them had a hearing impairment. All four surgeons indicated extensive experience using the Savi Scout® seed-based localization system during lumpectomy.

§.§ Apparatus

Except for a change in the breast model, the hardware and software setup was identical to the prior studies (Figure <ref>(c)). In this study, the breast phantoms were composed of agar, glycerol, and red food coloring. Agar and glycerol were selected for their replicability and compatibility with various imaging modalities such as ultrasound and MRI <cit.>. The fabrication of the 16 agar phantoms consisted of continuously stirring a mixture of 3% agar, 9.5% glycerol, and 87.5% distilled water on a hot plate for 8 minutes while ensuring that the mixture did not boil. Drops of red food coloring were added to make the tumors indistinguishable from the breast model mass. Sixteen tumors were poured from the same mixture as the breast model; however, a 1:200 diluted MRI contrast agent (Feraheme) was added to the mixture to make the tumor mass visible on MRI. The tumors were inserted into the breast pour after a curing time of about one hour. Diverse tumor shapes, sizes, and placements inside the breast were chosen for a realistic scenario. The phantoms were wrapped in cling wrap and stored in a refrigerator to prevent the agar from drying. Before and after the study, a General Electric MRI scanner was used to scan all 16 agar models. The pre-study scans were then used to segment and save the breast and tumor volumes as OBJ files. These virtual objects were subsequently imported into Unity to align the real and virtual experimental setups. This enabled us to determine the probe's position in relation to the breast and tumor.

§.§ Sonification

The breast surgeons were presented with the same sonifications as the participants in Study 2: Beep (Figure <ref>(a)), a replica of the clinical sound feedback, and the proposed shape sonification Sine (Figure <ref>(b)).

§.§ Procedure

Each surgeon filled out the pre-task questionnaire, followed by an explanation and an initial training step using the first sonification.
Then, the surgeon was asked to first mark the tumor margin and seed location on the surface of the agar breast model, transfer the marking from the cling wrap onto the agar using a small spatula, and lastly excise the tumor volume from the breast model using a scalpel and a small spoon-like tool. After they marked and removed the tumors from two breast models, they filled out the post-task questionnaire for the sonification they had just used. The same process was repeated for the second sonification on two more breast models.

§.§ Data Analysis

The pre- and post-study MRI scans were used to determine the calculated resection ratio (CRR), a measure used in clinical research to quantify the amount of excess healthy breast tissue resected during lumpectomy <cit.>. CRR is calculated by dividing the total resection volume (TRV) by the optimal resection volume (ORV), i.e., CRR = TRV/ORV. ORV was defined as the tumor volume plus a 2 mm safety margin of healthy tissue. TRV and ORV were obtained by segmenting the 3D tumor and excision volumes from the MRI scans using Horos (https://horosproject.org/), an open-source medical image viewer. All other quantitative measures (Dice coefficient, area ratio, intercentroid distance) and qualitative measures (pre- and post-study questionnaires) were recorded and analyzed as in Study 2.

§.§ Results

§.§.§ Accuracy

A Shapiro-Wilk test showed a non-normal distribution for all accuracy measures in Study 3. Zero outliers were identified using the interquartile range method. A Wilcoxon signed-rank test showed a significantly improved Dice coefficient (p < 0.001) of Sine over Beep. No significant difference in area ratio, intercentroid distance, or calculated resection ratio was found. Figure <ref> shows two exemplary post-study MRI scans for Beep and Sine.

§.§.§ Usability

A Shapiro-Wilk test showed a non-normal distribution of the NASA-TLX and SUS data. The interquartile range method was used to identify outliers, and one sample was removed from the data. Due to the small sample size of four surgeons, the Wilcoxon signed-rank test showed no significant difference between the two sonifications for either task load or system usability. However, the shape sonification Sine achieved a better usability rating than Beep. The accuracy and usability measures' means and standard deviations are reported in Table <ref>.

§.§.§ Qualitative Feedback

This improvement was also echoed by the qualitative feedback we received in the post-task questionnaire. The surgeons' feedback on the sonification they routinely use in surgery (Beep) was: "It only helps localize the seed, so I have to estimate and guess tumor size and shape.". Another surgeon wrote: "It was harder to distinguish the edge of the lesion with as much confidence as when the Sine was on.". When asked to rank the two sonifications by preference, all surgeons reported preferring Sine. They liked that the "[tumor] mass was integrated into the sounds" and that "the dual sounds" gave them a "second way to verify that the target is within the specimen." They further said about Sine: "It helps give shape and size approximation of the targeted tumor.".

§.§ Discussion

A comparison of the results from the surgeon study with the non-surgeon study (Study 2) shows differences in the usability results. While non-surgeons rated the task load of Sine (44.35±12.68) higher than that of Beep, the surgeons rated Sine (30.83±18.94) much lower, as almost equally demanding as Beep (29.72±16.98).
From the surgeons, Sine received a better usability rating (85.63±10.68) than Beep (65.63±11.43). Although the small number of participants does not allow for a generalized claim, the trend in the reported data shows the great potential of shape sonification for improving the usability of the breast cancer localization task. Overall, Study 3 was limited by the small sample size; a study with more breast surgeons should be performed to achieve significant results. A future study should also look into ways to improve the breast models' flexibility and thus increase the similarity to breast tissue. In addition, the wet surface of the agar model resulted in movement of the cling wrap, which, in turn, added imprecision to the accuracy results. An alternative method for surface markings should be considered.

§ DISCUSSION

Our study results verify H1: shape sonification increased the accuracy of the breast tumor localization task. All three studies reported a significantly improved Dice coefficient, indicating a greater overlap of the ground truth (GT) and drawn tumor shapes. Study 1 additionally reported a significantly reduced area ratio, while Study 2 showed a significantly reduced distance between the GT and the drawn seed location. The increased localization accuracy suggests shape sonification's potential to reduce the reoperation rate due to tumor-positive margins. A comparable experiment on breast tumor localization using visual Augmented Reality guidance reports an average Dice coefficient of 0.2 <cit.>. Our proposed shape sonifications achieved average Dice coefficients of 0.57, 0.72, and 0.74 in Studies 1, 2, and 3, respectively, demonstrating superior accuracy.

We were furthermore able to partially confirm H2. Study 1 demonstrated significantly increased usability of the Synth shape sonification over the current clinical sound feedback. The combination of two distinct sounds - a continuous and a beat sound - improved discrimination between margin and seed location. Studies 2 and 3 showed no significant differences in task load or usability between shape sonification and the status quo auditory feedback. This effect might be attributed to the added complexity and increased task time that come with reconstructing a shape from sound feedback. Interestingly, the breast surgeons in Study 3 reported improved usability of the shape sonification (85.63±10.68) over Beep, the status quo sonification (65.63±11.43). Moreover, shape sonification was ranked as the preferred option in all three studies. This gives hope that improvements to the sonification design can enhance the usability of the breast cancer localization task and make sound-based guidance systems more user-friendly than current alternatives.

We were not able to prove H3's claim of reduced over-excision ratios when using shape sonification. Due to the limited number of breast surgeons at our university hospital and the copious resources needed to fabricate and obtain MRI scans of the breast models, our sample size was too small to achieve statistical significance. Although not significant, our data show a slight reduction in the removal of excess healthy breast tissue (2.43±1.58) compared to the current auditory feedback (2.54±1.10). The results for both sonifications fall within the clinically reported average ratios of 2.3 to 2.5 for total resection volume over ideal resection volume <cit.>. This non-significant reduction of the resection ratio can be attributed to the lack of depth feedback during excision.
This might have led surgeons to remove more volume than necessary to ensure resection of the entire tumor mass.

§.§ Design Implications

The presented auditory display was designed with the intention of smooth integration into the surgical workflow of a lumpectomy. Instead of replacing current localization systems and forcing the surgical team to adapt to a new workflow, the proposed sonifications build on the current seed-based localization system, requiring only minor adjustments on the side of the breast surgeon. Due to the introduction of additional tumor margin sonification, the localization task will likely be prolonged. However, the subsequent surgical task of tumor excision might shorten, as incremental excision caused by guessing the tumor margin location can be eliminated. These benefits of the presented auditory display over the traditional method will hopefully allow for fast and easy adoption into clinical routine.

While the introduced medical application led us to design an interactive shape sonification that prioritizes precise localization and reconstruction of a shape's contours, previous shape sonification works in the field of Human-Computer Interaction (HCI) have focused on 2D shape recognition <cit.>, 3D object recognition <cit.>, image understanding <cit.>, or curvature perception <cit.>. However, our evaluation has revealed general findings on shape sonification that contribute to the investigation of auditory displays in the field of HCI:
* Shape sonification can provide sub-centimeter precision in localizing the contour of a 2D shape.
* Encoding proximity to two objects in two sounds is preferred over one sound, as it reduces the task load.
* The use of distinguishable concurrent sounds (e.g., a beat and a continuous sound) can lead to improved usability of an auditory display for visuospatial tasks.

These findings are not only applicable to surgical sonification but are also relevant to the design of sensory substitution devices. Like the works by Meijer <cit.>, Auvray et al. <cit.>, and Gerino et al. <cit.>, who present visual-to-auditory sensory substitution devices for the blind or visually impaired, our sonification technique could also provide visuospatial information to people with vision impairment. Sonified representations of a map, an image, or a user interface could be generated and explored using the presented interactive shape sonification. However, a user study with people with congenital blindness or visually impaired individuals who lost their eyesight later in life would be imperative to understand how blind people perceive and interact with the presented sonification techniques. Besides the design of accessible technologies, our technique could be used to enrich multi-sensory interactions in Mixed Reality by creating more engaging sensory experiences. Shape sonification could, for example, enhance the spatial understanding of virtual objects that are occluded or out of sight.

§.§ Limitations

Current breast cancer localization systems do not incorporate tumor margin information, as only the location of the seed is known at the time of surgery. In our studies, we were able to bypass this issue by using breast models that allowed only slight deformation and by simulating the use of a radiological image acquired in the same position as the patient is in during surgery (e.g., supine MRI). However, this setup does not correspond to the real circumstances. To bridge the gap between experimental and patient application, a sophisticated tracking system is required.
Advanced biomechanical modeling of breast tissue deformation enables such tracking systems for breast surgery. Modeling breast tissue deformation is a field of research in its own right, addressed, for example, by the works of Samani et al. [], Gavaghan et al. [], Eiben et al. [] and Alcañiz et al. []. Integrating the valuable research results from this field with our work will be the key to a user-friendly and robust solution for breast cancer localization in clinical practice.

A limiting factor of our hardware and software setup is the use of three networks that stream data between the different software applications. This setup caused a noticeable latency on the user's end and influenced the usability of the sonifications. Future studies might examine alternative technical setups for realizing the location-to-sound mapping while avoiding latency issues.

Another observed limitation of our study design is the user-dependent accuracy of the drawn markings. This accuracy was influenced by the individual strategies the participants adopted for marking. The need to remove the probe from the current position to mark the location in the center underneath the tip of the probe added to the imprecision of the drawn outlines. While this task design was consciously chosen as it represents the current clinical practice, it might be worth investigating how the marking of the tumor location can be improved. Methods to increase marking accuracy might include tracking the pen.

§.§ Future Work

Future work on an auditory display for breast cancer localization should, first of all, prioritize the integration of depth information into the design. Information on the tumor's exact position and depth within the located column of breast tissue is crucial for achieving the desired clinical outcomes. Including depth information might significantly impact the resection ratio and thus decrease the amount of excess healthy breast tissue unnecessarily removed today. Secondly, the future design of such sonification models could benefit from including directional cues. These cues might make the localization and marking process faster and reduce user frustration. Yet this addition might increase the sonification's complexity and lead to a higher cognitive load. Lastly, the issue of realistic simulation of breast tissue properties in an experimental setup should be tackled. This would include not only the use of breast models made from more flexible materials, but also a method for reliably registering and tracking the material's deformation over the course of the study.

§ CONCLUSION

The field of breast cancer localization is evolving, with surgical guidance systems showing great potential for improving patient outcomes. Compared to current localization systems that only provide sound feedback on the location of a marker implanted inside the tumor, our approach provides more comprehensive guidance through additional sonification of tumor margins. In three user studies, we demonstrated that the proposed shape sonification significantly enhances localization accuracy. We were furthermore able to show that breast surgeons utilizing shape sonification experienced improved usability compared to the current clinical auditory feedback. We showed evidence of the accuracy of auditory displays, reinforcing their potential as a substitution for or addition to conventional, visual user interfaces.
Our work presents an exemplary use case for shape sonification in a surgical precision task and will hopefully help to promote broader application of auditory displays and multimodal, e.g., audio-visual and audio-haptic, interfaces in surgical applications and beyond.
http://arxiv.org/abs/2312.16129v1
{ "authors": [ "Laura Schütz", "Trishia El Chemaly", "Emmanuelle Weber", "Anh Doan", "Jacqueline Tsai", "Bruce Daniel", "Christoph Leuze", "Nassir Navab" ], "categories": [ "cs.HC", "H.5.2; H.5.5; J.3" ], "primary_category": "cs.HC", "published": "20231226172415", "title": "Interactive Shape Sonification for Breast Cancer Localization" }
A comprehensive study on the accuracy and generalization of deep learning-generated chemical ODE integrators
Han Li^a,b,1, Ruixin Yang^a,b,1, Yangchen Xu^a, Min Zhang^a,b, Runze Mao^a,b, Zhi X. Chen^a,b,*
^a State Key Laboratory of Turbulence and Complex Systems, Aeronautics and Astronautics, College of Engineering, Peking University, Beijing, 100871, China
^b AI for Science Institute (AISI), Beijing, 100080, China

Speech emotion recognition (SER) systems aim to recognize the human emotional state during human-computer interaction. Most existing SER systems are trained on utterance-level labels. However, not all frames in an audio clip have affective states consistent with the utterance-level label, which makes it difficult for the model to distinguish the true emotion of the audio and causes it to perform poorly. To address this problem, we propose a frame-level emotional state alignment method for SER. First, we fine-tune a HuBERT model with the task-adaptive pretraining (TAPT) method to obtain a SER system, and extract embeddings from its transformer layers to form frame-level pseudo-emotion labels via clustering. Then, the pseudo labels are used to pretrain HuBERT, so that each frame output of HuBERT carries corresponding emotional information. Finally, we fine-tune the above pretrained HuBERT for SER by adding an attention layer on top of it, which can focus only on those frames whose emotion is more consistent with the utterance-level label. The experimental results on IEMOCAP indicate that our proposed method performs better than state-of-the-art (SOTA) methods. The code is available at our github repository[https://github.com/ASolitaryMan/HFLEA.git].

§ INTRODUCTION

In order to improve the experience of human-computer interaction, speech emotion recognition has become one of the research hotspots in recent years. Technologies in this field have advanced considerably over the past decade. The conventional methods for SER focus on using neural networks to mine emotional information from hand-crafted or spectral features <cit.>. Due to limited labeled data, these methods have shown only slight performance improvements. With the success of natural language processing pretraining models <cit.>, several self-supervised audio pretraining models have emerged, such as wav2vec <cit.>, wav2vec2.0 <cit.>, HuBERT <cit.>, and WavLM <cit.>. These models are obtained by self-supervised pretraining on large amounts of unlabelled data, and what they learn can be transferred to improve the performance of downstream tasks, such as automatic speech recognition <cit.>, SER <cit.>, speaker recognition <cit.>, etc.

There are mainly three approaches to implementing SER with pretrained audio models. The first is to extract the embeddings of the pretrained model as the input of the downstream task model <cit.>. The second is to fine-tune the model for SER <cit.>. The third involves redesigning the pretext task, pretraining the model based on this pretext task, and then fine-tuning it for SER <cit.>. Based on the second and third methods, Chen et al.
<cit.> utilize wav2vec2.0 to realize a pseudo-label task-adaptive pretraining approach (P-TAPT) for sentiment analysis, which aligns frame-level pseudo-emotion labels with frames to alleviate the inconsistency between the emotional states of some frames in the audio and its utterance-level label, and performs better than many SER systems that only utilize utterance-level labels. However, P-TAPT has two aspects that can be improved. The first is that it aims to achieve a frame-level emotional alignment similar to that of HuBERT; however, it only introduces frame-level pseudo-emotion labels and fine-tunes wav2vec2.0 directly to realize frame-level emotion state alignment, instead of pretraining HuBERT. HuBERT has been proven to be more suitable for this offline, discrete, frame-level self-supervised task <cit.>. The second is that the authors fine-tuned the aligned model directly, using average pooling to aggregate frame-level representations into utterance-level representations for the final utterance-level SER system. This method may not fully exploit the aligned frame-level emotional information.

Inspired by <cit.>, we propose a frame-level emotion alignment (FLEA) method for SER, which is an extension and improvement of P-TAPT. In the first step, we fine-tune the HuBERT model to implement a SER system and extract the embeddings of its i-th transformer layer for clustering to obtain frame-level pseudo-emotion labels. Then, we continue to pretrain the HuBERT model using the pseudo labels; the resulting model, referred to as CPT-HuBERT, automatically aligns frames with pseudo-emotion labels. Finally, we add an attention layer on top of CPT-HuBERT and fine-tune it for SER. The role of the attention is to align frame-level representations with utterance-level labels. In addition, we explore the effect of clustering the embeddings of different transformer layers, and of different numbers of clusters, on SER performance. We perform experiments on IEMOCAP <cit.> to validate the effectiveness of FLEA. The unweighted accuracy (UA) and weighted accuracy (WA) of FLEA are 75.7% and 74.7%, outperforming SOTA methods.

§ PROPOSED METHOD

In this section, we first review the HuBERT model, which is the key backbone of our proposed method. Then, we introduce our proposed method in detail; the system framework is shown in Fig.<ref>.

§.§ The Review of HuBERT

HuBERT is a large pretrained model obtained through self-supervised learning, which is used to learn general audio representations from unlabeled raw audio signals for various downstream speech tasks. First, HuBERT leverages an offline clustering algorithm to generate pseudo labels for masked language model (MLM) pretraining. Second, the raw audio signals are encoded into meaningful continuous latent representations by a feature extractor consisting of a stack of CNN layers. Finally, the model utilizes transformer layers to learn the structure of the spoken inputs by predicting the clusters of masked audio segments. The training of HuBERT includes two phases. The pseudo labels used in the first phase of pretraining are generated from mel frequency cepstrum coefficients (MFCCs), and the pseudo labels used in the second phase are generated from the embeddings of the model saved in the first phase. MLM pretraining means that the representations of the masked frames and the unmasked frames each require computing a cross-entropy loss against the pseudo labels. The two losses are denoted L_m and L_u, and the final predictive loss L of the model is calculated as follows:

L = α L_m + (1-α) L_u

The predictive loss forces the model to learn good high-level representations of unmasked frames to help infer the pseudo labels of masked frames correctly <cit.>.
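As a concrete illustration, the combined loss above could be sketched as follows in PyTorch, assuming per-frame logits over the cluster vocabulary, integer pseudo labels, and a boolean mask that is True for masked frames; this is a minimal sketch, not the official HuBERT implementation:

import torch
import torch.nn.functional as F

def predictive_loss(logits, pseudo_labels, masked, alpha=1.0):
    # logits: (T, num_clusters); pseudo_labels: (T,); masked: (T,) bool.
    loss_m = F.cross_entropy(logits[masked], pseudo_labels[masked])
    if alpha == 1.0:
        return loss_m  # only masked frames contribute
    loss_u = F.cross_entropy(logits[~masked], pseudo_labels[~masked])
    return alpha * loss_m + (1.0 - alpha) * loss_u

Setting α to 1, as done in our pretraining below, reduces the loss to the masked regions only.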
§.§ TAPT and Pretraining HuBERT

TAPT and cluster. Since all SER datasets only have utterance-level labels, in order to achieve frame-level affective alignment, we need to introduce frame-level pseudo-emotion labels. To this end, we generate pseudo labels following the method of <cit.>, which proves that the frame-level emotion state can be inferred by training with a segment-based classification objective. As shown in phase 1 in Fig. <ref>, we first fine-tune HuBERT for SER with TAPT <cit.>. TAPT is a form of fine-tuning that first continues pretraining the pretrained model on the target datasets, to bridge the gap between the pretraining and target domains, and then fine-tunes the above pretrained model for SER. Next, we extract the embeddings from the i-th transformer layer of HuBERT and generate frame-level pseudo-emotion labels with the k-means algorithm. Consistency in the k-means mapping from audio input to discrete targets is crucial, as it allows the model to focus on modeling the sequential emotional structure of the inputs during MLM pretraining.

Pretraining HuBERT. Following the observation that bad teachers (pseudo labels) make good students (learned representations) <cit.>, we continue to pretrain HuBERT with MLM using the frame-level pseudo-emotion labels, as illustrated in Fig. <ref>. During pretraining, we set the value of α in the predictive loss to 1, as in the official HuBERT. In other words, we only apply the loss function over the masked regions. After pretraining, the embedding of each frame in the last transformer layer of CPT-HuBERT is mapped to a discrete frame-level pseudo-emotion label, yielding frame-level fine-grained emotion-aligned embeddings.

§.§ Soft Attention

An advantage of the frame-level fine-grained emotion-aligned embeddings is that there are clear differences between the frame embeddings of different emotions, while the embeddings of frames representing the same emotion are typically adjacent to each other, as shown in the third phase of Fig.<ref>. By leveraging a simple attention, we can focus on frames that are strongly related to the utterance-level emotion label, while disregarding frames that are irrelevant to the emotion label. This approach effectively addresses the issue of interference from emotion-label-unrelated frames in SER. In this work, we use soft attention to align the frame-level fine-grained emotion-aligned embeddings with the utterance-level emotion labels. The attention is implemented as follows:

α_i = softmax(tanh(𝐖x_i))
Z = ∑^N_i=1 α_i x_i

where x_i is the frame-level fine-grained emotion-aligned embedding of the i-th frame and 𝐖 ∈ R^1×D is a trainable parameter that encodes the attention weights of the frames. The variables α_i, N, Z and D are the attention weight of the i-th frame, the number of frames, the utterance-level emotional representation, and the dimension of the embeddings, respectively.
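For concreteness, this pooling layer could be sketched as a small PyTorch module; class and variable names here are ours, not from the released code:

import torch
import torch.nn as nn

class SoftAttentionPooling(nn.Module):
    # Aggregates frame embeddings x (N, D) into one utterance-level vector
    # Z = sum_i alpha_i * x_i, with alpha = softmax(tanh(W x)).
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, 1, bias=False)  # the trainable W in R^{1 x D}

    def forward(self, x):                        # x: (N, D) frame embeddings
        scores = torch.tanh(self.w(x))           # (N, 1)
        alpha = torch.softmax(scores, dim=0)     # attention over frames
        return (alpha * x).sum(dim=0)            # (D,) utterance embedding

In practice, such a module would be applied per utterance (or in padded batches with masking) on top of the last-layer outputs of CPT-HuBERT, before the classification head.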
§ EXPERIMENTS

§.§ Experimental Setup

Dataset. IEMOCAP <cit.> is a well-known multi-modal emotion corpus, which includes audio, visual and lexical modalities. In this work, we only use the data of the audio modality. The corpus contains five recording sessions; each session has one male and one female speaker. In order to prevent leakage of speaker information and labels, whether pretraining HuBERT or fine-tuning the model for SER, we perform five-fold cross-validation using a leave-one-session-out strategy on the corpus. This means that the data of four sessions are used as training data; the data of one speaker from the remaining session is used as the validation set and the data of the other speaker forms the testing set. We conduct our experiments with the 5531 audio utterances of four emotion categories: happy (happy & excited), sad, neutral and angry. The UA and WA are used as evaluation metrics, in line with previous methods. The UA is the mean of the accuracies of the individual categories, and the WA is the accuracy over all samples.

Experimental Details. The pretrained HuBERT[https://huggingface.co/facebook/hubert-base-ls960] we use is the backbone of FLEA; it consists of 6 CNN layers and 12 transformer layers, with an embedding dimension of 768. When we fine-tune HuBERT for SER with TAPT, we use the official k-means model[https://github.com/facebookresearch/fairseq/tree/main/examples/hubert] to generate pseudo labels for continued pretraining of HuBERT on IEMOCAP. In the first and third phases shown in Fig.<ref>, whether for TAPT or for fine-tuning the model for SER, the batch size is 64, the learning rate is 1e-4, the loss function is the cross-entropy loss, the optimizer is AdamW, and the number of epochs is 40. To explore the impact of clustering on the final SER performance, we follow the official HuBERT <cit.> and extract the embeddings of the 6-th, 9-th, and additionally the 11-th transformer layers of HuBERT, performing clustering with 50, 100, and 150 clusters for each layer of embeddings. The reason we chose these numbers of clusters is the small sample size of the dataset. During pretraining of HuBERT, the mini-batches are generated by a bucket sampler. The learning rate is 5e-4, the number of training steps is 20,000, and the number of warm-up steps is 4,000. More details are given in our github project. P-TAPT <cit.> is used as the baseline.

§.§ Experiments and Analysis

A series of ablation experiments are conducted to evaluate the effects of pretraining HuBERT for frame-level emotion state alignment, of attention, and of the number of clusters produced from different layers of embeddings on the performance of the SER system. For comparison with the baseline, we list the results of FLEA with attention pooling, and the results of the same fine-tuning method as the baseline, which fine-tunes HuBERT for SER using average pooling to aggregate embeddings, as shown in Table <ref>.

§.§.§ The impact of embeddings and clusters

From Table <ref>, we observe that different layers of embeddings and different numbers of clusters have different effects on SER performance. No matter how many classes are clustered, pretraining HuBERT with the frame-level pseudo-emotion labels clustered from the embeddings of the 9-th layer performs better than using the 6-th or 11-th layers. This demonstrates that the embeddings of the 9-th layer are more suitable for generating pseudo labels to pretrain HuBERT, a finding consistent with the study <cit.>. In addition, regardless of which layer of embeddings is used for clustering, the performance of SER decreases as the number of clusters increases. This phenomenon may be related to the size of the dataset. A clustering number of 50 is appropriate on the IEMOCAP dataset for pretraining HuBERT; however, we believe this is due to the limitation of the dataset size. With more sentiment data, this cluster number may be larger and the model would be more robust, as in the case of the official HuBERT clustering number of 500.
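As a concrete illustration of the pseudo-label generation step evaluated here, clustering the layer-9 embeddings with 50 centroids could be sketched as follows (scikit-learn-based; function names are ours, not the fairseq pipeline):

import numpy as np
from sklearn.cluster import KMeans

def make_pseudo_labels(layer9_embeddings, n_clusters=50, seed=0):
    # layer9_embeddings: list of (T_i, 768) arrays, one per training utterance.
    stacked = np.concatenate(layer9_embeddings, axis=0)  # all frames
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    km.fit(stacked)
    # Map every frame of every utterance to its cluster id.
    return [km.predict(e) for e in layer9_embeddings], km

The fitted k-means model must be reused unchanged when labelling held-out data, so that the mapping from audio input to discrete targets stays consistent.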
§.§.§ The role of pretraining HuBERT and attention pooling

As shown in Table <ref>, for the 9-th layer, fine-tuning HuBERT with average pooling for SER performs better than the baseline regardless of the number of clusters. This indicates that using the MLM method to pretrain HuBERT to realize frame-level emotion state alignment yields better performance than directly fine-tuning wav2vec2.0 for the same purpose. Furthermore, at a cluster number of 50, the UA and WA of FLEA are improved by 0.8% and 1.6%, respectively, compared to those of fine-tuned CPT-HuBERT with average pooling for SER, and by 1.9% over the UA of the baseline. This indicates that introducing attention to align frame-level embeddings with utterance-level labels is effective. Compared to average pooling, attention pooling can attend to those frames strongly related to the utterance-level label, which makes better use of the aligned frame-level emotion information. Although the performance of average pooling gradually approaches or even exceeds that of attention pooling as the number of clusters increases, the performance of the final SER system also decreases.

§.§ Performance Comparison with previous Methods

The effectiveness of our proposed method is highlighted by comparison with the current key results on the IEMOCAP corpus (Table <ref>). The best UA (75.7%) and WA (74.7%) are achieved by our proposed method. Moreover, our system outperforms the baseline[https://github.com/b04901014/FT-w2v2-ser] even though the baseline leaks speaker information while fine-tuning wav2vec for clustering to generate frame-level pseudo labels. In our tests, FLEA performs even better if we use the baseline's clustering approach, which leaks speaker information. In addition, as shown in Table <ref>, our method clearly outperforms recent SER methods such as ShiftCNN, SUPERB, and SMW-CAT. Meanwhile, the performance of FLEA is close to that of some multi-modal methods, which are based on both the audio and lexical modalities.

§ CONCLUSIONS

In this work, we propose a novel method called FLEA for SER, which achieves SOTA performance on the IEMOCAP corpus. We show that frame-level emotion state alignment can be achieved by pretraining HuBERT with the MLM method using frame-level pseudo-emotion labels. On top of the aligned model, performing attention pooling to aggregate frame-level embeddings into utterance-level embeddings yields better SER performance. Furthermore, we find that the model pretrained using the pseudo labels generated by clustering the embeddings of the 9-th transformer layer of HuBERT has the best performance and is the most robust. A cluster number of 50 is best suited to IEMOCAP, but it may not be robust for other corpora due to differences in dataset size. In future work, we will explore the relationship between dataset size and the number of clusters.

§ ACKNOWLEDGEMENTS

The work was supported by the National Natural Science Foundation of China (No. 62271083), the Special Fund for Military Healthcare Committee (No. 22BJZ28), the Fundamental Research Funds for the Central Universities (No. 2023RC13), the open research fund of The State Key Laboratory of Multimodal Artificial Intelligence Systems (No. 202200042, No. 202200012) and the BUPT Excellent Ph.D. Students Foundation (No. 2023116)
http://arxiv.org/abs/2312.16383v1
{ "authors": [ "Qifei Li", "Yingming Gao", "Cong Wang", "Yayue Deng", "Jinlong Xue", "Yichen Han", "Ya Li" ], "categories": [ "cs.SD", "cs.AI", "eess.AS" ], "primary_category": "cs.SD", "published": "20231227030752", "title": "Frame-level emotional state alignment method for speech emotion recognition" }
These authors contributed equally to the work.
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
These authors contributed equally to the work.
Department of Applied Physics, Nanjing University of Science and Technology, Nanjing 210094, China
These authors contributed equally to the work.
CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
These authors contributed equally to the work.
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
Institute of Applied Physics, Hubei Normal University, Huangshi 435002, China
School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
Center of Materials Science and Optoelectronics Engineering, College of Materials Science and Optoelectronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
Australian Nuclear Science and Technology Organisation, Lucas Heights, New South Wales 2234, Australia
J-PARC Center, Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195, Japan
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
Spallation Neutron Source Science Center, Dongguan 523803, China
Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
Spallation Neutron Source Science Center, Dongguan 523803, China
Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
Center of Materials Science and Optoelectronics Engineering, College of Materials Science and Optoelectronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China
[email protected]
CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
[email protected]
[email protected]
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China

Fractional magnetisation plateaus, in which the magnetisation is pinned at a fraction of its saturated value within a range of external magnetic field, are spectacular macroscopic manifestations of collective quantum behaviours. One prominent example of the plateau phase is found in spin-1/2 triangular-lattice antiferromagnets featuring strong geometrical frustration, and is often interpreted as a quantum-fluctuation-stabilised state in magnetic field via the "order-by-disorder" mechanism. Here, we observe an unprecedented 1/3 magnetisation plateau between 5.2 and 7.4 T at 2 K in a spin-1 antiferromagnet with a honeycomb lattice, where conventionally no geometrical frustration is anticipated. By carrying out elastic neutron scattering measurements, we propose the spin structure of the plateau phase to be an unusual partial spin-flop ferrimagnetic order, transitioning from the zigzag antiferromagnetic order in zero field.
Our theoretical calculations show that the plateau phase is stabilised by the bond-anisotropic Kitaev interaction. These results provide a new paradigm for the exploration of rich quantum phases in frustrated magnets and exotic Kitaev physics in high-spin systems.

Observation of a 1/3 Magnetisation Plateau Phase as Evidence for the Kitaev Interaction in a Honeycomb-Lattice Antiferromagnet
Jinsheng Wen
January 14, 2024

Main

Frustration, which describes the situation in which competing magnetic exchange interactions cannot be satisfied simultaneously, plays an essential role in quantum magnets<cit.>. The frustration-induced quantum fluctuations can prevent the formation of ordered magnetic ground states and lead to magnetically disordered phases such as quantum spin liquids<cit.>. On the other hand, the quantum fluctuations, represented by the zero-point oscillation energy in spin-wave theory, can lift the degeneracy of the ground state and select a specific spin state within a finite range of external magnetic field; this gives rise to an exotic magnetisation plateau phase in which the magnetisation is a fraction of its saturation value, understood through the "order-by-disorder" mechanism<cit.>. Such a quantization of a macroscopic physical quantity over a range of magnetic field is a spectacular demonstration of macroscopic quantum phenomena.

One prominent example is the 1/3 magnetisation plateau, which has been theoretically predicted and experimentally observed in spin-1/2 equilateral-triangular-lattice antiferromagnets, where the spin is small and the frustration due to the geometrical configuration is strong<cit.>. The discussions have been extended to other triangular-lattice systems with higher spins<cit.>, and to other two-dimensional systems such as kagome<cit.> and square-lattice antiferromagnets<cit.>, with geometrical and exchange frustrations, respectively. There have also been some attempts in honeycomb-like antiferromagnets<cit.>, but only traces have been observed in polycrystalline samples under extremely high fields<cit.>. Since frustration is a prerequisite for the fractional magnetisation plateau phase, whether it can occur in a genuine honeycomb lattice, where geometrical frustration of the kind found in triangular and kagome lattices is absent, and how to understand it if it does occur, remain outstanding questions.

Here, we report comprehensive thermodynamic and neutron scattering measurements on high-quality single crystals of the spin-1 honeycomb-lattice antiferromagnet Na_3Ni_2BiO_6 (Ref. <cit.>). We show that the magnetisation curve has a definite plateau at 1/3 of the saturation magnetisation between 5.2 and 7.4 T at 2 K. From our neutron scattering measurements, we obtain complete contour maps for the magnetic Bragg peaks in the (H, K, 0) plane in zero and 6.6-T field, the latter of which keeps the system in the plateau phase. By comparing experimental results with calculated magnetic structure factors for all the possible spin states within a reasonably large number of lattice sites (up to 24), we propose the microscopic magnetic configuration of the 1/3 magnetisation plateau phase to be a zero-up-zero-down-up-up (∘↑∘↓↑↑) ferrimagnetic state.
Such a state derives from the zigzag ordered ground state via a partial spin-flop process, in which two of the six spins in an enlarged magnetic unit flop onto the honeycomb plane and exhibit zero magnetic moment along the out-of-plane direction. Based on these results, we establish a magnetic phase diagram including a salient 1/3 plateau phase for Na_3Ni_2BiO_6. Our density-functional-theory (DFT) and tensor-network calculations show that a minimal model with Heisenberg exchange couplings J, a bond-dependent anisotropic Kitaev interaction K, and a single-ion anisotropy term D can well explain the experimental observations. In particular, the Kitaev interaction, which was proposed earlier for this and other materials such as α-RuCl_3 and the iridates<cit.>, leads to exchange frustration and stabilises the 1/3 plateau phase. These results suggest Na_3Ni_2BiO_6 to be a fertile ground for investigating the quantum physics of frustrated magnets on a honeycomb lattice.

Anisotropic antiferromagnetic order

As shown in Fig. <ref>a,b, Na_3Ni_2BiO_6 has a quasi-two-dimensional structure in which the spin-1 Ni^2+ ions form a honeycomb lattice in the a-b plane<cit.>. The long-range antiferromagnetic order has an onset at the Néel temperature T_N∼10 K (Fig. <ref>c). Upon cooling below T_N, we find that the susceptibility drops rapidly to zero for H ⊥ a-b plane. On the other hand, the susceptibility for H ∥ a-b plane drops only slightly. These results indicate that Na_3Ni_2BiO_6 is a collinear antiferromagnet with strong anisotropy. In the inset of Fig. <ref>c, we show the inverse susceptibilities and Curie-Weiss fits, from which we obtain the effective magnetic moments, μ_eff=2.92(3) and 3.05(6) μ_B/Ni^2+ for field applied perpendicular and parallel to the a-b plane, respectively. These values are close to the spin-only effective moment μ_eff=2.828 μ_B/Ni^2+ with S = 1. The Curie-Weiss temperatures Θ_CW are 18.87(7) and 33.55(6) K for fields applied in and out of the plane, respectively.

In Fig. <ref>e,f, we plot the temperature dependence of the magnetic susceptibility under various magnetic fields applied perpendicular and parallel to the a-b plane, respectively. In both cases, T_N is reduced with increasing field, but the trend is more moderate for H∥ a-b plane. For H ⊥ a-b plane, the transition disappears at μ_0H=9 T, while for a parallel field of 9 T, the system still has a T_N of 8 K, and the transition persists up to 14 T, indicating strong anisotropy along the out-of-plane direction. Notably, in Fig. <ref>e, for μ_0H_⊥⩽4 T, the susceptibilities eventually drop to zero as the temperature decreases, but increase in two successive large steps at 4-5 T and 7-8 T at low temperatures. This can be attributed to the presence of a 1/3 magnetisation plateau phase under perpendicular magnetic fields, as discussed in detail below. The susceptibility increases progressively with field for H∥ a-b plane below 13 T, unlike the case for H⊥ a-b plane. In addition, Fig. <ref>d shows the temperature dependence of the specific heat, measured with H ⊥ a-b plane. In zero field, we obtain a T_N of ∼10.1 K, similar to that in Ref. <cit.>. The T_N progressively decreases with increasing magnetic field and eventually vanishes at 9 T, consistent with the magnetic susceptibility data.
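The Curie-Weiss analysis above follows the standard linear fit of the inverse susceptibility; a minimal sketch (assuming the molar susceptibility χ in CGS units, emu/mol per Ni, in the paramagnetic regime; variable names are illustrative) is:

import numpy as np

def curie_weiss_fit(T, chi):
    # Fit 1/chi = (T - Theta_CW)/C; return (mu_eff in mu_B, Theta_CW in K).
    slope, intercept = np.polyfit(T, 1.0 / chi, 1)
    C = 1.0 / slope                # Curie constant (emu K / mol)
    theta_cw = -intercept * C      # Curie-Weiss temperature
    mu_eff = np.sqrt(8.0 * C)      # effective moment, mu_eff = sqrt(8C) mu_B
    return mu_eff, theta_cw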
1/3 magnetisation plateau

Figure <ref>a shows the magnetisation as a function of field at different temperatures for H⊥ a-b plane. There are three prominent plateaus in the magnetisation curves measured at low temperatures. Taking 2 K as an example: first, a zero-magnetisation plateau persists up to ∼4.6 T, which corresponds to a spin gap and is usually expected in systems with Ising-like anisotropy<cit.>. Second, the magnetisation is pinned at ∼0.72(5) μ_B/Ni^2+ for fields ranging from 5.2 to 7.4 T, giving rise to a quantized magnetisation plateau. Finally, the magnetisation saturates completely at 8.3 T with a saturation ferromagnetic moment of ∼2.16(5) μ_B/Ni^2+, which is within the reasonable range for the estimated saturated moment of M_s=gμ_BS=2 μ_B/Ni^2+. Evidently, the observed fractional magnetisation plateau is close to 1/3 of the saturated ferromagnetic moment. Upon warming, the fractional plateau gradually diminishes, indicating that it is quantum fluctuations rather than thermal fluctuations that stabilise the plateau<cit.>. With increasing temperature, the saturation field increases, while the saturation moment decreases and the saturation plateau shrinks.

To explicitly elucidate the magnetisation process for H ⊥ a-b plane, we plot the differential of the magnetisation (dM/dH) versus magnetic field at different temperatures in Fig. <ref>b. At 2 K, the curve displays two sharp peaks at 4.7 and 8.2 T, which represent the lower- and upper-bound fields of the 1/3 magnetisation plateau state. As the temperature increases, while the lower-bound field remains almost unchanged, the upper-bound field is reduced quickly, so that the plateau shrinks. The robustness of the lower-bound field is likely a nontrivial signature of the quantum nature of the underlying transition. At 8 K and above, the two peaks merge into one. The single peak corresponds to the onset field of the spin-flop transition, which can also be visualized from the magnetisation curves shown in Fig. <ref>a. Eventually, all peaks disappear at 10.5 K. This temperature agrees well with the T_N obtained from the specific heat measurements.

Combining the data in Fig. <ref>a,b, we identify a prominent 1/3 magnetisation plateau located between 5.2 and 7.4 T at 2 K, which shrinks with increasing temperature and survives up to ∼8 K. Note that this is narrower than the range determined by the upper- and lower-bound fields, owing to the finite transition width. In contrast, as illustrated in Extended Data Fig. 1a,b, no similar 1/3 magnetisation plateau is observed for field applied within the a-b plane up to 14 T, due to the easy-axis anisotropy. However, there is a step-like transition at high fields and low temperatures, which corresponds to the spin-flop transition induced by the transverse magnetic field<cit.>.
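The plateau boundaries quoted here can be extracted programmatically from the measured curves; a minimal sketch (NumPy/SciPy-based, with illustrative names and an illustrative prominence threshold) is:

import numpy as np
from scipy.signal import find_peaks

def plateau_bounds(H, M):
    # H, M: 1D arrays of field (T) and magnetisation; returns the fields of
    # the two sharpest dM/dH peaks, i.e. the lower/upper plateau bounds.
    dMdH = np.gradient(M, H)
    peaks, _ = find_peaks(dMdH, prominence=0.1 * dMdH.max())
    top_two = peaks[np.argsort(dMdH[peaks])[-2:]]
    return np.sort(H[top_two])

At 2 K, such a procedure would pick out the peaks at 4.7 and 8.2 T; the quoted plateau range of 5.2-7.4 T is then set by the finite widths of these transitions.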
Partial spin-flop structure of the 1/3 plateau phase

To shed light on the microscopic mechanism responsible for the above 1/3 magnetisation plateau phase, we next explore the magnetic structures of the ground state in zero field, and especially of the 1/3 magnetisation plateau phase under intermediate fields, using elastic neutron scattering on the spectrometer PELICAN with high-quality single-crystalline samples. The top panels of Fig. <ref>a,c display contour maps of the elastic scattering in the (H, K, 0) plane measured with μ_0H=0 and 7 T, respectively, with field applied along the c axis. In both panels, the scattering data are plotted by subtracting the 80-K data (well above T_N) from the 1.5-K data (below T_N), so that we obtain pure magnetic signals with the lattice contributions eliminated (see Extended Data Fig. 2 for the raw data at 1.5 and 80 K).

In zero field, we find various magnetic Bragg peaks at (±0.5, ±0.5, 0), (±1.5, ±0.5, 0), (0, ±1, 0), (±1, ±2, 0), and (±0.5, ±2.5, 0). From Fig. <ref>a and Extended Data Fig. 2c, we do not observe any extra magnetic or structural Bragg peaks due to inter-occupancy of the Ni and Bi atoms<cit.>, so we believe that the honeycomb layer formed by the Ni atoms remains intact. The appearance of magnetic Bragg peaks at the Brillouin zone boundary M points is consistent with the zigzag antiferromagnetic order suggested by a preliminary neutron powder diffraction study<cit.>. In fact, our first-principles calculations also show that the zigzag ordered state has the lowest energy (Extended Data Table 2). To gain further insights into the magnetic structure, we have carried out additional neutron diffraction measurements on the single crystals on another spectrometer, AMATERAS, which has a much larger Q coverage, owing to the multiple-incident-energy option, and finer resolution. The results allow us to perform refinements from which we obtain the moment direction to be 19.7±2.6^∘ off the c^* axis, close to the c axis (see Methods and Extended Data Fig. 3a). This angle is further confirmed by our DFT calculations shown in Extended Data Fig. 3b. The zigzag magnetic order is depicted in Fig. <ref>b, where the spins align ferromagnetically along the c axis within a zigzag chain, and antiferromagnetically between the chains, exhibiting an up-up-down-down (↑↑↓↓) order. These results are consistent with those in Ref. <cit.>. We calculated the magnetic structure factors for this structure, and the results are shown in the bottom panel of Fig. <ref>a. The calculations agree well with the experimental data. From these results, we are able to pin down the magnetic structure to be the zigzag antiferromagnetic order shown in Fig. <ref>b.

At μ_0H=7 T, corresponding to a component μ_0H_⊥ = 6.6 T along c^*, which keeps the system in the 1/3 magnetisation plateau phase, the scattering patterns become more complicated. In the first Brillouin zone, there are actually seven magnetic Bragg peaks, with one at the zone centre (which cannot be observed experimentally due to the angle restriction), and another six around the centres of the triangles formed by the Γ and K points, with equivalent intensities. In the second Brillouin zone, we can observe seven peaks at positions identical to those in the first Brillouin zone. Intriguingly, however, the six peaks around the triangle centres can be divided into two groups, rotated by 180^∘ with respect to each other, with dramatically different intensities. Furthermore, the stronger peaks in the second Brillouin zone appear to be more intense than those in the first Brillouin zone. Based on the zigzag ground state in zero field, we have exhausted all the possible magnetic structures within reasonably large magnetic units (up to 24 lattice sites) for the 1/3 plateau phase in Extended Data Fig. 4, and calculated the corresponding magnetic structure factors in Extended Data Fig. 5. In the calculations, we tuned the angle of the out-of-plane moments and found that the results are hardly affected. Therefore, we set the out-of-plane moments in field to be aligned along c^*, perpendicular to the a-b plane, in accordance with the magnetisation and specific heat measurement setup, where the field is applied along c^*. By analysing the scattering patterns obtained from experiment and from theoretical simulations, both qualitatively and quantitatively, we believe that the exotic zero-up-zero-down-up-up order (∘↑∘↓↑↑) shown in Fig.
<ref>d is most likely to be the magnetic structure in the plateau phase (see details in Extended Data Figs. 4 and 5 and Extended Data Table 1). The corresponding calculated magnetic structure factors are shown in the bottom of Fig. <ref>c. This structure derives from the zigzag ordered ground state, whose magnetic unit contains four spins, by partially flopping two of the six spins in an enlarged magnetic unit onto the honeycomb plane, so that their components along the out-of-plane direction become zero.

Such a structure not only produces structure factors that closely resemble the experimental pattern, as shown in the bottom of Fig. <ref>c, but also naturally explains the unexpected 1/3 magnetisation plateau in Fig. <ref>a. First, a magnetic unit consisting of six magnetic spins satisfies the necessary condition n(S-m) = integer for the 1/3 magnetisation plateau, where n is the number of spins in a magnetic unit and m is the magnetisation per spin on the plateau<cit.>; here n=6, S=1 and m=S/3=1/3, so n(S-m)=4, an integer. Second, within one unit, the total spin S=2 is exactly 1/3 of the value S=6 in the fully polarised state. In fact, we have also attempted to examine the direction of the flopped spins within the plane. However, either a parallel or an antiparallel arrangement of the two flopped spins would lead to the six peaks around the triangle centres in the second Brillouin zone exhibiting comparable intensities, in conflict with the dramatically different intensities of the two groups of peaks rotated by 180^∘ in the experimental results. We note that there are some slight discrepancies between the calculated and experimental patterns, as shown in Fig. <ref>c. We think there may be some disorder along the enlarged six-site chain, so that which two of the six sites carry zero out-of-plane moment is subject to some uncertainty. Taking this into account may partially resolve the discrepancies. Furthermore, the fluctuations of the in-plane spins at the "∘" sites should also be considered (see detailed discussions in Extended Data Figs. 4 and 5, and Extended Data Table 1).
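The structure-factor comparison used to discriminate among the candidate states can be sketched in a few lines, assuming the elastic Bragg intensity at wavevector Q is proportional to |Σ_j m_j exp(iQ·r_j)|^2 over one magnetic unit cell, and neglecting the Ni^2+ magnetic form factor and the neutron polarisation factor (both of which a quantitative analysis would include):

import numpy as np

def magnetic_structure_factor(q_points, positions, moments):
    # q_points: (M, 3) wavevectors; positions: (n, 3) spin positions;
    # moments: (n, 3) moment vectors of one candidate configuration.
    phases = np.exp(1j * q_points @ positions.T)   # (M, n)
    f_m = phases @ moments.astype(complex)         # (M, 3) vector amplitude
    return np.sum(np.abs(f_m) ** 2, axis=-1)       # |F_M(Q)|^2

Evaluating this for each candidate configuration in Extended Data Fig. 4 at the observed peak positions allows a direct intensity comparison with the measured maps.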
Magnetic phase diagram

By summarizing the aforementioned thermodynamic and neutron scattering results, we obtain a magnetic phase diagram for Na_3Ni_2BiO_6 with H⊥ a-b plane, as shown in Fig. <ref>. In the upper left corner, there is a fully-polarised state, whose phase boundary is determined from the M-H curves in Fig. <ref>a. For the high-temperature data, where the saturation is less obvious, we obtain the values of the fully-polarised field μ_0H_p by taking the intersections of linear fits to the low- and high-field curves at different temperatures. The phase boundary between the long-range order and the paramagnetic state is determined by extracting the peaks from the specific heat (Fig. <ref>d) and from the derivative of the susceptibility, dχT/dT (Extended Data Fig. 1d)<cit.>, which are mutually consistent. The transition temperature is gradually suppressed with increasing magnetic field. More importantly, there is a 1/3 magnetisation plateau phase at low temperatures. Both the lower and upper boundaries of this phase can be determined by extracting the peaks from the dM/dH vs. μ_0H_⊥ curves shown in Fig. <ref>b. The upper phase boundary set by dM/dH overlaps with those set by dχT/dT and the specific heat. The lower phase boundary remains almost flat until 8 K, where it merges into the outer boundary. It is worth mentioning that the upper-bound field at low temperatures and the transition field at 8 K and above coincide with the phase boundary between the long-range order and the paramagnetic state. At low temperatures, such as 2 K, the system undergoes two successive phase transitions, from the zigzag phase to the plateau phase, and then to the fully-polarised state. As shown in Fig. <ref>, the transitions under field appear to be of second order with a finite transition width, making the plateau phase slightly narrower than the phase boundary determined by dM/dH.

Crucial role of the Kitaev interaction

Such an observation of a partial-spin-flop-induced 1/3 magnetisation plateau phase in a honeycomb-lattice antiferromagnet is quite unusual. To stabilise such a phase, a key ingredient is frustration<cit.>. Na_3Ni_2BiO_6, with its honeycomb lattice, does not have the geometrical frustration that triangular and kagome lattices do. Although magnetic exchange frustration could arise from competing first- (J_1) and second-neighbour (J_2) Heisenberg exchange couplings<cit.>, and the J_1-J_2-J_3 model (with J_3 the third-neighbour coupling) can even give rise to the zigzag order<cit.>, such a model can hardly explain the highly anisotropic responses under in- and out-of-plane fields, and thus the spin-flop transition, observed in Na_3Ni_2BiO_6. Intriguingly, taking into account the edge-shared octahedral structure, similar to that of Kitaev materials such as α-RuCl_3 and the iridates<cit.>, the strong spin-orbit coupling (SOC) in the vicinity of the heavy Bi atoms, and the strong Hund's coupling in Ni^2+, a strong Kitaev interaction, which is a natural source of exchange frustration due to its bond-dependent anisotropy, may be realized in Na_3Ni_2BiO_6 (Ref. <cit.>). To examine the role of the Kitaev interaction in the exotic magnetic behaviours observed above, we have performed first-principles and many-body calculations (see Methods and Extended Data Fig. 6) for the compound, and provide evidence that the observed 1/3-plateau phase in Na_3Ni_2BiO_6 is selected by the Kitaev interaction via an intriguing mechanism clarified below.

Our DFT calculations using the four-state method show that the third-nearest-neighbour exchange coupling J_3 is comparable to the nearest-neighbour coupling J_1, while the second-nearest-neighbour coupling J_2 is negligible. Considering the easy-axis magnetic anisotropy of the compound, it is natural to add a single-ion anisotropy term D to the spin Hamiltonian. After a thorough scan of the possible models (see Extended Data Table 3), we find that the Kitaev (K) term must be included to reproduce the 1/3 plateau. Therefore, we take the J_1-J_3-K-D model as a minimal model for the compound Na_3Ni_2BiO_6, which reads

Ĥ = J_1∑_⟨i,j⟩ 𝐒_i·𝐒_j + J_3∑_⟨⟨⟨i,j⟩⟩⟩ 𝐒_i·𝐒_j + K∑_i,α S_i^α S_i+α^α + D∑_i (S_i^c)^2 - H∑_i S_i^c^*,

where 𝐒_i is an S = 1 quantum spin operator at site i, ⟨i,j⟩ and ⟨⟨⟨i,j⟩⟩⟩ denote first- and third-neighbour pairs, and the Kitaev term couples the α components of the spins on each α-type bond connecting site i to its neighbour i+α. Due to the slight lattice distortion, we set the single-ion anisotropy axis, i.e., the quantization axis of S_i^c, to be tilted about 10^∘ from the perpendicular c^* axis. The last term is the Zeeman term coupled to the c^*-axis (i.e., out-of-plane) component of the quantum spin. Taking J_1 as the energy scale, we find that an effective J_1-J_3-K-D model with the parameter set (J_1=-1, J_3=1, K=0.6, D=-3) explains the experimental observations well.

In Fig. <ref> we show the magnetisation curve calculated from the spin-1 J_1-J_3-K-D model, which indeed exhibits a 1/3-plateau state under intermediate fields. To understand the role of the Kitaev interaction, in the insets of Fig. <ref> we show the variational energies of the zigzag, 1/3-plateau, and polarised states. The introduction of the K term increases the exchange frustration, and thus the quantum fluctuations of the system, and lowers the variational energy of the 1/3-plateau state while raising that of the polarised state, allowing the 1/3 plateau to appear in the intermediate-field regime in Fig. <ref>.
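To make this energetics comparison concrete, the classical (product-state) energy of a candidate configuration under this model can be evaluated with a short script; this is a minimal sketch for building intuition (it ignores quantum fluctuations and the ~10^∘ tilt of the anisotropy axis, and the bond lists for the honeycomb cluster must be supplied by the user), not the tensor-network calculation itself:

import numpy as np

def classical_energy(spins, j1_bonds, j3_bonds, kitaev_bonds,
                     J1=-1.0, J3=1.0, K=0.6, D=-3.0, H=0.0):
    # spins: (n, 3) classical unit vectors (S = 1), axes ~ (a, b, c*).
    # kitaev_bonds: list of (i, j, axis) with axis in {0, 1, 2} per bond type.
    spins = np.asarray(spins, dtype=float)
    e = sum(J1 * spins[i] @ spins[j] for i, j in j1_bonds)
    e += sum(J3 * spins[i] @ spins[j] for i, j in j3_bonds)
    e += sum(K * spins[i, a] * spins[j, a] for i, j, a in kitaev_bonds)
    e += D * np.sum(spins[:, 2] ** 2)   # single-ion anisotropy
    e -= H * np.sum(spins[:, 2])        # Zeeman coupling along c*
    return e / len(spins)               # energy per site

Comparing this energy for the zigzag, 1/3-plateau, and polarised configurations as functions of H and K gives a first, classical-level picture of the competition shown in the insets of Fig. <ref>.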
The introduction of the K term increases the exchange frustration and thus quantum fluctuations of the system, and lowers the variational energy of 1/3-plateau state while raising that of the polarised states, allowing the 1/3-plateau to appear in the intermediate field regime in Fig. <ref>. It is worth mentioning that while the coupling parameters in the minimal J_1-J_3-K-D model have not been fine tuned to quantitatively fit the experimental results, we already find the ratio of the two critical fields h_c1/h_c2∼1.4 matches that of the experiment. Therefore, the tensor-network calculations support that it is the Kitaev term that selects and stabilises the 1/3 plateau in . Furthermore, with this model we have also calculated the magnetisation with field parallel to the a-b plane, and the results also match the experimental measurements  (see Extended Data Fig. 1c). DiscussionsFrom the results above, we have demonstrated that there is a 1/3 magnetisation plateau phase in a honeycomb-lattice antiferromagnet , proposed an exotic magnetic structure symbolised as ∘↑∘↓↑↑ for the plateau phase, and revealed that the Kitaev interaction is indispensable to this phase. Nevertheless, the magnetic structure calculated from the J_1-J_3-K-D model, however, is slightly different from the zero-up-zero-down-up-up (∘↑∘↓↑↑) configuration in Fig. <ref> d. It has the down-down-up-up-up-up (↓↓↑↑↑↑) as depicted in Extended Data Fig. 4e instead. We believe this discrepancy between theory and experiment is because our J_1-J_3-K-D model may still be too condensed, without taking some other anisotropic terms such as the off-diagonal ones into account. In this sense, future inelastic neutron scattering studies on single crystals to parameterize these terms should be very interesting. Despite this small deficit, this effective minimal model captures all of the essence of the experimental measurements, including the zigzag order at zero field, spin-flop transitions (Extended Data Table 3), and magnetisations under fields, in particular the 1/3 magnetisation plateau. With these, this work not only extends the study of the fractional magnetisation plateau phase to honeycomb-lattice compounds which conventionally do not exhibit geometrical frustrations, but also expands the territory of quantum magnets that host Kitaev physics from S=1/2 to higher-spin systems. 44 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Balents(2010)]nature464_199 author author Leon Balents, title title Spin liquids in frustrated magnets, 10.1038/nature08917 journal journal Nature volume 464, pages 199–208 (year 2010)NoStop [Wen et al.(2019)Wen, Yu, Li, Yu, and Li]npjqm4_12 author author Jinsheng Wen, author Shun-Li Yu, author Shiyan Li, author Weiqiang Yu,and author Jian-Xin Li, title title Experimental identification of quantum spin liquids, @noopjournal journal npj Quant. Mater. volume 4, pages 12 (year 2019)NoStop [Broholm et al.(2020)Broholm, Cava, Kivelson, Nocera, Norman, and Senthil]Broholmeaay0668 author author C. Broholm, author R. J. Cava, author S. A. Kivelson, author D. G. Nocera, author M. R. Norman,and author T. 
Senthil, title title Quantum spin liquids, 10.1126/science.aay0668 journal journal Science volume 367, pages eaay0668 (year 2020)NoStop [Chubukov and Golosov(1991)]Chubukov_1991 author author A V Chubukov and author D I Golosov, title title Quantum theory of an antiferromagnet on a triangular lattice in a magnetic field, 10.1088/0953-8984/3/1/005 journal journal J. Phys.: Condens. Matter volume 3, pages 69–82 (year 1991)NoStop [Honecker(1999)]Honecker_1999 author author A Honecker, title title A comparative study of the magnetization process of two-dimensional antiferromagnets, 10.1088/0953-8984/11/24/311 journal journal J. Phys.: Condens. Matter volume 11, pages 4697–4713 (year 1999)NoStop [Starykh(2015)]0034-4885-78-5-052502 author author Oleg A Starykh, title title Unusual ordered phases of highly frustrated magnets: a review, http://stacks.iop.org/0034-4885/78/i=5/a=052502 journal journal Rep. Pro. Phys. volume 78, pages 052502 (year 2015)NoStop [Zhitomirsky et al.(2000)Zhitomirsky, Honecker, and Petrenko]PhysRevLett.85.3269 author author M. E. Zhitomirsky, author A. Honecker,and author O. A. Petrenko, title title Field Induced Ordering in Highly Frustrated Antiferromagnets, 10.1103/PhysRevLett.85.3269 journal journal Phys. Rev. Lett. volume 85, pages 3269–3272 (year 2000)NoStop [Kawamura and Miyashita(1985)]kawamura1985phase author author Hikaru Kawamura and author Seiji Miyashita, title title Phase transition of the Heisenberg antiferromagnet on the triangular lattice in a magnetic field, 10.1143/JPSJ.54.4530 journal journal J. Phys. Soc. Jpn. volume 54, pages 4530–4538 (year 1985)NoStop [Henley(1989)]PhysRevLett.62.2056 author author Christopher L. Henley, title title Ordering due to disorder in a frustrated vector antiferromagnet, 10.1103/PhysRevLett.62.2056 journal journal Phys. Rev. Lett. volume 62, pages 2056–2059 (year 1989)NoStop [Alicea et al.(2009)Alicea, Chubukov, and Starykh]PhysRevLett.102.137201 author author Jason Alicea, author Andrey V. Chubukov,and author Oleg A. Starykh, title title Quantum Stabilization of the 1/3-Magnetization Plateau in Cs_2CuBr_4, 10.1103/PhysRevLett.102.137201 journal journal Phys. Rev. Lett. volume 102, pages 137201 (year 2009)NoStop [Coletta et al.(2013)Coletta, Zhitomirsky, and Mila]PhysRevB.87.060407 author author Tommaso Coletta, author M. E. Zhitomirsky,and author Frédéric Mila, title title Quantum stabilization of classically unstable plateau structures, 10.1103/PhysRevB.87.060407 journal journal Phys. Rev. B volume 87, pages 060407 (year 2013)NoStop [Yamamoto et al.(2014)Yamamoto, Marmorini, and Danshita]PhysRevLett.112.127203 author author Daisuke Yamamoto, author Giacomo Marmorini,and author Ippei Danshita, title title Quantum Phase Diagram of the Triangular-Lattice XXZ Model in a Magnetic Field, 10.1103/PhysRevLett.112.127203 journal journal Phys. Rev. Lett. volume 112, pages 127203 (year 2014)NoStop [Schotte et al.(1994)Schotte, Stusser, Schotte, Weinfurter, Mayer, and Winkelmann]Schotte_1994 author author U Schotte, author N Stusser, author K D Schotte, author H Weinfurter, author H M Mayer,and author M Winkelmann, title title On the field-dependent magnetic structures of CsCuCl_3, 10.1088/0953-8984/6/46/026 journal journal J. Phys. Condens. Matter volume 6, pages 10105–10119 (year 1994)NoStop [Ono et al.(2003)Ono, Tanaka, Aruga Katori, Ishikawa, Mitamura, and Goto]PhysRevB.67.104431 author author T. Ono, author H. Tanaka, author H. Aruga Katori, author F. Ishikawa, author H. Mitamura,and author T. 
Goto, title title Magnetization plateau in the frustrated quantum spin system Cs_2CuBr_4, 10.1103/PhysRevB.67.104431 journal journal Phys. Rev. B volume 67, pages 104431 (year 2003)NoStop [Tsujii et al.(2007)Tsujii, Rotundu, Ono, Tanaka, Andraka, Ingersent, and Takano]PhysRevB.76.060406 author author H. Tsujii, author C. R. Rotundu, author T. Ono, author H. Tanaka, author B. Andraka, author K. Ingersent,and author Y. Takano, title title Thermodynamics of the up-up-down phase of the S=1/2 triangular-lattice antiferromagnet Cs_2CuBr_4, 10.1103/PhysRevB.76.060406 journal journal Phys. Rev. B volume 76, pages 060406 (year 2007)NoStop [Fortune et al.(2009)Fortune, Hannahs, Yoshida, Sherline, Ono, Tanaka, and Takano]PhysRevLett.102.257201 author author N. A. Fortune, author S. T. Hannahs, author Y. Yoshida, author T. E. Sherline, author T. Ono, author H. Tanaka,and author Y. Takano, title title Cascade of Magnetic-Field-Induced Quantum Phase Transitions in a Spin-1/2 Triangular-Lattice Antiferromagnet, 10.1103/PhysRevLett.102.257201 journal journal Phys. Rev. Lett. volume 102, pages 257201 (year 2009)NoStop [Shirata et al.(2012)Shirata, Tanaka, Matsuo, and Kindo]prl108_057205 author author Yutaka Shirata, author Hidekazu Tanaka, author Akira Matsuo, and author Koichi Kindo, title title Experimental Realization of a Spin-1/2 Triangular-Lattice Heisenberg Antiferromagnet, https://link.aps.org/doi/10.1103/PhysRevLett.108.057205 journal journal Phys. Rev. Lett. volume 108, pages 057205 (year 2012)NoStop [Zhou et al.(2012)Zhou, Xu, Hallas, Silverstein, Wiebe, Umegaki, Yan, Murphy, Park, Qiu, Copley, Gardner, and Takano]prl109_267206 author author H. D. Zhou, author Cenke Xu, author A. M. Hallas, author H. J. Silverstein, author C. R. Wiebe, author I. Umegaki, author J. Q. Yan, author T. P. Murphy, author J. H. Park, author Y. Qiu, author J. R. D. Copley, author J. S. Gardner,and author Y. Takano, title title Successive Phase Transitions and Extended Spin-Excitation Continuum in the S=1/2 Triangular-Lattice Antiferromagnet Ba_3CoSb_2𝐎_9, https://link.aps.org/doi/10.1103/PhysRevLett.109.267206 journal journal Phys. Rev. Lett. volume 109, pages 267206 (year 2012)NoStop [Susuki et al.(2013)Susuki, Kurita, and Tanaka]PhysRevLett.110.267201 author author Takuya Susuki, author Nobuyuki Kurita,and author Takuya Tanaka, title title Magnetization Process and Collective Excitations in the S=1/2 Triangular-Lattice Heisenberg Antiferromagnet Ba_3CoSb_2O_9, 10.1103/PhysRevLett.110.267201 journal journal Phys. Rev. Lett. volume 110, pages 267201 (year 2013)NoStop [Kamiya et al.(2018)Kamiya, Ge, Hong, Qiu, Quintero-Castro, Lu, Cao, Matsuda, Choi, Batista, Mourigal, Zhou, and Ma]kamiya2018nature author author Y. Kamiya, author L. Ge, author Tao Hong, author Y. Qiu, author D. L. Quintero-Castro, author Z. Lu, author H. B. Cao, author M. Matsuda, author E. S. Choi, author C. D. Batista, author M. Mourigal, author H. D. Zhou,and author J. Ma, title title The nature of spin excitations in the one-third magnetization plateau phase of Ba_3CoSb_2O_9, https://doi.org/10.1038/s41467-018-04914-1 journal journal Nat. Commun. volume 9, pages 1–11 (year 2018)NoStop [Inami et al.(1996)Inami, Ajiro, and Goto]ToshiyaInami1996 author author Toshiya Inami, author Yoshitami Ajiro,and author Tsuneaki Goto, title title Magnetization Process of the Triangular Lattice Antiferromagnets, RbFe(MoO_4)_2 and CsFe(SO_4)_2, 10.1143/jpsj.65.2374 journal journal J. Phys. Soc. Jpn. 
volume 65, pages 2374–2376 (year 1996)NoStop [Shirata and Tanaka(2011)]doi:10.1143/JPSJ.80.093702 author author Yutaka Shirata and author Tanaka, title title Quantum Magnetization Plateau in Spin-1 Triangular-Lattice Antiferromagnet Ba_3NiSb_2O_9, 10.1143/JPSJ.80.093702 journal journal J. Phys. Soc. Jpn. volume 80, pages 093702 (year 2011)NoStop [Hwang et al.(2012)Hwang, Choi, and Ye]PhysRevLett.109.257205 author author J. Hwang, author E. S. Choi, and author F. Ye, title title Successive Magnetic Phase Transitions and Multiferroicity in the Spin-One Triangular-Lattice Antiferromagnet Ba_3NiNb_2O_9, 10.1103/PhysRevLett.109.257205 journal journal Phys. Rev. Lett. volume 109, pages 257205 (year 2012)NoStop [Zhitomirsky(2002)]PhysRevLett.88.057204 author author M. E. Zhitomirsky, title title Field-Induced Transitions in a Kagomé Antiferromagnet, 10.1103/PhysRevLett.88.057204 journal journal Phys. Rev. Lett. volume 88, pages 057204 (year 2002)NoStop [Damle and Senthil(2006)]PhysRevLett.97.067202 author author Kedar Damle and author T. Senthil, title title Spin Nematics and Magnetization Plateau Transition in Anisotropic Kagome Magnets, 10.1103/PhysRevLett.97.067202 journal journal Phys. Rev. Lett. volume 97, pages 067202 (year 2006)NoStop [Nishimoto et al.(2013)Nishimoto, Shibata, and Hotta]nc4_2287 author author Satoshi Nishimoto, author Naokazu Shibata,and author Chisa Hotta, title title Controlling frustrated liquids and solids with an applied field in a kagome Heisenberg antiferromagnet, https://doi.org/10.1038/ncomms3287 journal journal Nat. Commun. volume 4, pages 2287 (year 2013)NoStop [Lozovik and Notych(1993)]LOZOVIK1993873 author author Yu. E. Lozovik and author O. I. Notych, title title Magnetization plateaus of frustrated antiferromagnet and analogy with FQHE, https://doi.org/10.1016/0038-1098(93)90195-S journal journal Solid State Commun. volume 85, pages 873–877 (year 1993)NoStop [Kageyama et al.(1999)Kageyama, Yoshimura, Stern, Mushnikov, Onizuka, Kato, Kosuge, Slichter, Goto, and Ueda]PhysRevLett.82.3168 author author H. Kageyama, author K. Yoshimura, author R. Stern, author N. V. Mushnikov, author K. Onizuka, author M. Kato, author K. Kosuge, author C. P. Slichter, author T. Goto, and author Y. Ueda, title title Exact Dimer Ground State and Quantized Magnetization Plateaus in the Two-Dimensional Spin System SrCu_2(BO_3)_2, 10.1103/PhysRevLett.82.3168 journal journal Phys. Rev. Lett. volume 82, pages 3168–3171 (year 1999)NoStop [Kodama et al.(2002)Kodama, Takigawa, Horvatić, Berthier, Kageyama, Ueda, Miyahara, Becca, and Mila]doi:10.1126/science.1075045 author author K. Kodama, author M. Takigawa, author M. Horvatić, author C. Berthier, author H. Kageyama, author Y. Ueda, author S. Miyahara, author F. Becca,and author F. Mila, title title Magnetic Superstructure in the Two-Dimensional Quantum Antiferromagnet SrCu_2(BO_3)_2, 10.1126/science.1075045 journal journal Science volume 298, pages 395–399 (year 2002)NoStop [Chanlert et al.(2016)Chanlert, Kurita, Tanaka, Goto, Matsuo, and Kindo]PhysRevB.93.094420 author author Purintorn Chanlert, author Nobuyuki Kurita, author Hidekazu Tanaka, author Daiki Goto, author Akira Matsuo,and author Koichi Kindo, title title Field-driven successive phase transitions in the quasi-two-dimensional frustrated antiferromagnet Ba_2CoTeO_6 and highly degenerate classical ground states, 10.1103/PhysRevB.93.094420 journal journal Phys. Rev. 
[Okutani et al. (2019)] A. Okutani, T. Kida, Y. Narumi, T. Shimokawa, Z. Honda, K. Kindo, T. Nakano, Y. Nozue, and M. Hagiwara, "High-field Magnetism of the Honeycomb-lattice Antiferromagnet Cu_2(pymca)_3(ClO_4)", J. Phys. Soc. Jpn. 88, 013703 (2019).
[Seibel et al. (2013)] E. M. Seibel, J. H. Roudebush, H. Wu, Q. Huang, M. N. Ali, H. Ji, and R. J. Cava, "Structure and Magnetic Properties of the α-NaFeO_2-Type Honeycomb Compound Na_3Ni_2BiO_6", Inorg. Chem. 52, 13605–13611 (2013).
[Stavropoulos et al. (2019)] P. P. Stavropoulos, D. Pereira, and H.-Y. Kee, "Microscopic Mechanism for a Higher-Spin Kitaev Model", Phys. Rev. Lett. 123, 037203 (2019).
[Winter et al. (2017)] S. M. Winter, A. A. Tsirlin, M. Daghofer, J. van den Brink, Y. Singh, P. Gegenwart, and R. Valentí, "Models and materials for generalized Kitaev magnetism", J. Phys.: Condens. Matter 29, 493002 (2017).
[Takagi et al. (2019)] H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, "Concept and realization of Kitaev quantum spin liquids", Nat. Rev. Phys. 1, 264–280 (2019).
[Chaloupka and Khaliullin (2016)] J. Chaloupka and G. Khaliullin, "Magnetic anisotropy in the Kitaev model systems Na_2IrO_3 and RuCl_3", Phys. Rev. B 94, 064435 (2016).
[Oshikawa et al. (1997)] M. Oshikawa, M. Yamanaka, and I. Affleck, "Magnetization plateaus in spin chains: "Haldane gap" for half-integer spins", Phys. Rev. Lett. 78, 1984–1987 (1997).
[Bragg and Seehra (1973)] E. E. Bragg and M. S. Seehra, "Magnetic Susceptibility of MnF_2 near T_N and Fisher's Relation", Phys. Rev. B 7, 4197–4202 (1973).
[Smirnova et al. (2009)] O. Smirnova, M. Azuma, N. Kumada, Y. Kusano, M. Matsuda, Y. Shimakawa, T. Takei, Y. Yonesaki, and N. Kinomura, "Synthesis, Crystal Structure, and Magnetic Properties of Bi_3Mn_4O_12(NO_3) Oxynitrate Comprising S = 3/2 Honeycomb Lattice", J. Am. Chem. Soc. 131, 8313–8317 (2009).
[Okumura et al. (2010)] S. Okumura, H. Kawamura, T. Okubo, and Y. Motome, "Novel Spin-Liquid States in the Frustrated Heisenberg Antiferromagnet on the Honeycomb Lattice", J. Phys. Soc. Jpn. 79, 114705 (2010).
[Fouet et al. (2001)] J. B. Fouet, P. Sindzingre, and C. Lhuillier, "An investigation of the quantum J_1-J_2-J_3 model on the honeycomb lattice", Eur. Phys. J. B 20, 241–254 (2001).
[Yu et al. (2013)] D. Yu, R. Mole, T. Noakes, S. Kennedy, and R. Robinson, "Pelican—a time of flight cold neutron polarization analysis spectrometer at OPAL", J. Phys. Soc. Jpn. 82, SA027 (2013).
[Nakajima et al. (2011)] K. Nakajima, S. Ohira-Kawamura, T. Kikuchi, M. Nakamura, R. Kajimoto, Y. Inamura, N. Takahashi, K. Aizawa, K. Suzuya, K. Shibata, T. Nakatani, K. Soyama, R. Maruyama, H. Tanaka, W. Kambara, T. Iwahashi, Y. Itoh, T. Osakabe, S. Wakimoto, K. Kakurai, F. Maekawa, M. Harada, K. Oikawa, R. E. Lechner, F. Mezei, and M. Arai, "AMATERAS: A Cold-Neutron Disk Chopper Spectrometer", J. Phys. Soc. Jpn. 80, SB028 (2011).
[Cirac et al. (2021)] J. I. Cirac, D. Pérez-García, N. Schuch, and F. Verstraete, "Matrix product states and projected entangled pair states: Concepts, symmetries, theorems", Rev. Mod. Phys. 93, 045003 (2021).

Methods

Single-crystal growth and characterisations. High-quality single crystals of Na_3Ni_2BiO_6 were successfully grown by the flux method<cit.>. The crystals were thin and transparent brown, with a typical mass of ∼7 mg per piece. Magnetisation and specific heat measurements were conducted in a physical property measurement system PPMS-9T from Quantum Design.
Later, we extended the magnetisation measurements up to 14 T in a PPMS-14T. Magnetisation was measured under magnetic fields both perpendicular and parallel to the a-b plane, in both fixed-field temperature-sweeping and fixed-temperature field-sweeping modes. Specific heat measurements were carried out in the temperature region between 2 and 18 K in a series of magnetic fields applied perpendicular to the a-b plane. Magnetisation was also measured as a function of field up to 14 T at the High Magnetic Field Laboratory of the Chinese Academy of Sciences.

Neutron scattering experiments. Neutron scattering measurements were performed on PELICAN, a cold-neutron time-of-flight spectrometer located at the OPAL reactor of ANSTO in Australia<cit.>. The sample array consisted of ∼140 pieces of single crystals weighing about 1 g in total. They were glued onto four rectangular aluminum plates with hydrogen-free Cytop grease and coaligned using a backscattering Laue x-ray diffractometer. The aluminum plates were tilted by 18.56^∘ so that the horizontal plane was the (H, K, 0) plane. The assembly with (H, K, 0) as the horizontal plane was installed in a 7-T superconducting magnet. We set the angle at which the [100] direction was parallel to the incident beam direction to be zero. Data were collected at 1.5 and 80 K with E_i = 3.70 meV by rotating the sample about the vertical direction over a range of 360^∘ in 2^∘ steps. At 1.5 K, measurements were done in two magnetic fields of μ_0H = 0 and 7 T. For refinement purposes, additional neutron diffraction measurements on these single crystals were carried out on AMATERAS, a cold-neutron time-of-flight spectrometer with a multiple-E_i option located at the MLF of J-PARC in Japan<cit.>. We also performed powder neutron diffraction measurements on GPPD, a time-of-flight diffractometer located at CSNS in China. We adopted the monoclinic C2/m structure with the refined lattice parameters in Ref. <cit.>, with a = 5.3998 Å, b = 9.3518 Å, c = 5.67997 Å, and β = 108.56^∘. The wave vector Q is expressed as (H, K, L) in reciprocal lattice units (r.l.u.) of (a^*, b^*, c^*) = (2π/(a cosθ), 2π/b, 2π/(c cosθ)), with θ = 108.56^∘ − 90^∘ = 18.56^∘ representing the angle between a (c) and a^* (c^*). Since we tilted the a-b plane by 18.56^∘ from the horizontal plane but applied a magnetic field of μ_0H = 7 T along the vertical direction (the c axis), the field component along the direction perpendicular to the a-b plane (c^*) was ∼6.6 T, which pinned the system in the 1/3 magnetisation plateau phase.

Calculations of the magnetic structure factors. The intensity of elastic neutron scattering is proportional to the square of the component of the magnetic structure factor perpendicular to the wave vector,

I(𝐤) ∝ ∑_αβ (δ_αβ − k̂_α k̂_β) M^α(𝐤) M^β(𝐤),

where M(𝐤) is the magnetic structure factor, the Fourier transform of the spin distribution, M(𝐤) = ∑_r_i S_r_i e^{i𝐤·r_i}. In the calculations, S_r_i for spin-1 took the values 1, 0, or −1 at each site. We tried configurations of honeycomb-lattice clusters with 6 and 24 sites in order to satisfy the period of the 1/3 magnetisation plateau phase. However, the corresponding magnetic structure factors would result in major Bragg peaks at the Γ or K points, which were not present in the experimental results. Reviewing the scattering patterns, the Bragg peaks at approximately 1/3 and 2/3 of the distance between Γ and Γ' implied a possible order based on a 6-site chain possessing translational symmetry instead of the 4-site magnetic unit of the zigzag order.
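As an illustration of how the intensity formula above can be evaluated numerically, a minimal sketch for a collinear six-site configuration follows. The chain geometry, the up-up-down spin pattern, and the probe wave vector are assumptions made only for the example; this is not the analysis code used for the data.

import numpy as np

# Minimal sketch of the elastic intensity formula above for a collinear
# spin configuration. The 6-site chain and up-up-down pattern are
# illustrative assumptions, not the refined magnetic structure.
pos = np.array([[i, 0.0, 0.0] for i in range(6)])   # site positions r_i
S_z = np.array([1, 1, -1, 1, 1, -1])                # S_{r_i} = +1, +1, -1, ...

def intensity(k):
    k = np.asarray(k, dtype=float)
    khat = k / np.linalg.norm(k)
    # Collinear spins along z: only the alpha = beta = z term of the sum
    # survives, with the projector (1 - khat_z^2) in front.
    M_z = np.sum(S_z * np.exp(1j * (pos @ k)))
    return (1.0 - khat[2]**2) * np.abs(M_z)**2

# A Bragg peak appears at one third of the chain's reciprocal vector:
print(intensity([2.0 * np.pi / 3.0, 0.0, 0.0]))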
Therefore, we calculated the magnetic structure factors based on a series of magnetic structures built from a 6-site chain with translational symmetry.

First-principles and tensor-network calculations. The DFT calculations were performed using the Vienna ab initio simulation package (VASP) with the Perdew-Burke-Ernzerhof functional. The experimental geometric structure<cit.> was used, and the effective Coulomb interaction U_eff = 4 eV and spin-orbit coupling (SOC) were taken into account. In the quantum many-body calculations of the spin-1 Kitaev-Heisenberg model, we employed the infinite projected entangled pair state (iPEPS)<cit.> and density matrix renormalization group (DMRG) approaches, and obtained accurate ground-state properties.

Data availability
Data supporting the findings of this study are available from the corresponding author J.S.W. (Email: [email protected]) upon reasonable request.

Acknowledgements
The work was supported by the National Key Projects for Research and Development of China with Grant No. 2021YFA1400400, the National Natural Science Foundation of China with Grant Nos. 12225407, 12074174, 12074175, 11904170, 11834014, 11974036, 12222412, 12004191 and 12204160, the Natural Science Foundation of Jiangsu Province with Grant Nos. BK20190436 and BK20200738, the China Postdoctoral Science Foundation with Grant Nos. 2022M711569 and 2022T150315, the Jiangsu Province Excellent Postdoctoral Program with Grant No. 20220ZB5, the Hubei Provincial Natural Science Foundation of China with Grant No. 2021CFB238, the CAS Project for Young Scientists in Basic Research (YSBR-003), and the Fundamental Research Funds for the Central Universities. We acknowledge the neutron beamtime from ANSTO with Proposal No. P9334 and the great support from Gene Davidson in setting up and operating the 7-T superconducting magnet, and the beamtime from J-PARC with Proposal No. 2022A0039. We would like to thank Yuyan Han at the High Magnetic Field Laboratory of the Chinese Academy of Sciences for assisting us in measuring the magnetisation under high magnetic fields.

Author contributions
J.S.W. conceived the project. Y.Y.S.G. prepared the samples. Y.Y.S.G. carried out the magnetisation and specific heat measurements with assistance from S.Z. and F.Q.S. for the 14-T field measurements. Y.Y.S.G., S.B., D.H.Y., R.A.M., N.M., S.O.-K., L.H.H. and J.Z.H. performed the neutron scattering experiments. Y.Y.S.G., S.B. and J.S.W. analysed the experimental data. Z.-Y.D., N.X., Y.-P.G., Z.Q., Q.-B.Y., W.L., S.-L.Y. and J.-X.L. performed the theoretical calculations and analyses. J.S.W., Y.Y.S.G., W.L. and J.-X.L. wrote the paper with input from all co-authors.

Competing Interests
The authors declare no competing financial interests.

Additional information
Correspondence and requests for materials should be addressed to J.S.W. ([email protected]), J.-X.L. ([email protected]), S.-L.Y. ([email protected]) or W.L. ([email protected]).
http://arxiv.org/abs/2312.15932v1
{ "authors": [ "Yanyan Shangguan", "Song Bao", "Zhao-Yang Dong", "Ning Xi", "Yi-Peng Gao", "Zhen Ma", "Wei Wang", "Zhongyuan Qi", "Shuai Zhang", "Zhentao Huang", "Junbo Liao", "Xiaoxue Zhao", "Bo Zhang", "Shufan Cheng", "Hao Xu", "Dehong Yu", "Richard A. Mole", "Naoki Murai", "Seiko Ohira-Kawamura", "Lunhua He", "Jiazheng Hao", "Qing-Bo Yan", "Fengqi Song", "Wei Li", "Shun-Li Yu", "Jian-Xin Li", "Jinsheng Wen" ], "categories": [ "cond-mat.str-el", "cond-mat.supr-con" ], "primary_category": "cond-mat.str-el", "published": "20231226075724", "title": "Observation of a 1/3 Magnetisation Plateau Phase as Evidence for the Kitaev Interaction in a Honeycomb-Lattice Antiferromagnet" }
School of Mathematics and Statistics, Hanshan Normal University, Chaozhou 521000, China
School of Computer Science, South China Normal University, Guangzhou 510631, China; [email protected]

Pulsar detection has become an active research topic in radio astronomy recently. One of the essential procedures for pulsar detection is pulsar candidate sifting (PCS), the procedure of identifying potential pulsar signals in a survey. However, pulsar candidates are always class-imbalanced: most candidates are non-pulsars, such as RFI, and only a tiny fraction of them come from real pulsars. Class imbalance greatly damages the performance of machine learning (ML) models and incurs a heavy cost when real pulsars are misjudged. To deal with this problem, we focus on techniques for choosing relevant features that discriminate pulsars from non-pulsars, which is known as feature selection. Feature selection is the process of selecting a subset of the most relevant features from a feature pool. Distinguishing features between pulsars and non-pulsars can significantly improve the performance of the classifier even if the data are highly imbalanced. In this work, a feature selection algorithm called the K-fold Relief-Greedy algorithm (KFRG) is designed. KFRG is a two-stage algorithm: in the first stage, it filters out some irrelevant features according to their K-fold Relief scores, while in the second stage, it removes the redundant features and selects the most relevant features by a forward greedy search strategy. Experiments on the dataset of the High Time Resolution Universe survey verified that ML models based on KFRG are capable of PCS, correctly separating pulsars from non-pulsars even if the candidates are highly class-imbalanced.

Dealing with the data imbalance problem on pulsar candidates sifting based on feature selection
Haitao Lin 1 Xiangru Li 2
================================================================================================

§ INTRODUCTION

Pulsars are highly magnetized, rotating, compact stars that emit beams of electromagnetic radiation out of their magnetic poles. They are observed as signals with short and regular rotation periods when their beams sweep across the Earth. The study of pulsars is of great significance for astronomy, astrophysics, general relativity, and other fields. As remarkable laboratories, pulsars can be used in the detection of gravitational waves <cit.>, the observation of the interstellar medium <cit.>, the study of dark matter <cit.>, and other research areas. Therefore, many pulsar surveys have been carried out, or are ongoing, to search for new pulsars. These surveys have produced massive amounts of observational data in the form of pulsar candidates. For example, the number of pulsar candidates from the Parkes Multibeam Pulsar Survey <cit.> is about 8 million; the High Time Resolution Universe Pulsar Survey <cit.> has returned 4.3 million candidates <cit.>; the Low-Frequency Tied-Array All-Sky survey <cit.> has accumulated 3 million candidates; and so on. With the development of modern radio telescopes, such as the Five-hundred-meter Aperture Spherical radio Telescope <cit.> and the Square Kilometre Array <cit.>, the number of pulsar candidates is growing exponentially.
However, only a small fraction of this vast number of candidates comes from real pulsars, while the others are radio frequency interference (RFI) or other kinds of noise <cit.>. Thus, one essential step of pulsar searching is to separate the real pulsar signals from the non-pulsar ones, which is known as pulsar candidate sifting (PCS).

Recently, quite a few machine learning (ML) methods have been applied to PCS. They are mainly divided into two types according to their inputs: models based on artificial features and models based on image-driven approaches. Artificial features are designed according to the different natures of pulsars and non-pulsars. These features are extracted from the physical background (we call them empirical features) or from statistical characteristics (statistical features), and they can be clearly explained. Typically, <cit.> first extracted 12 empirical features from candidates as inputs of an Artificial Neural Network <cit.> model for PCS. The ANN model was tested on the PMPS survey <cit.> and achieved a recall rate of 93% (recall is a performance measure defined as the ratio between the number of successfully predicted pulsars and the total number of real pulsars). Bates et al. <cit.> constructed another ANN with 22 features as inputs on HTRU-Medlat <cit.> and achieved a recall of 85%. To improve the performance of PCS, Morello et al. <cit.> designed 6 empirical features to build a model called Straightforward Pulsar Identification using Neural Networks (SPINN), which achieved both a recall of 100% and a low false positive rate. Then, a purpose-built tree-based model called the Gaussian Hellinger Very Fast Decision Tree (GHVFDT) <cit.> was applied to PCS with 8 newly designed features. These features are statistics computed from both the folded profile and the dispersion measure (DM) searching curve (defined in Section <ref>). They were evaluated using the joint mutual information criterion, which helps identify relevant features. Later, Tan et al. <cit.> pointed out that the GHVFDT based on these 8 features is insensitive to pulsars with wide integrated profiles. They therefore proposed eight new features and built an ensemble classifier with five different decision trees to improve the detection performance.

As for image-driven PCS models, they are based on deep learning networks, where deep features are extracted from the diagnostic plots (Fig. <ref>). <cit.> first proposed the pulsar image-based classification system (PICS), whose inputs are the four main diagnostic plots, i.e., the sub-integration plot, the sub-band plot, the folded signal and the DM curve (see Section <ref> for their definitions). <cit.> then improved PICS and designed a PICS-ResNet model composed of two Residual Neural Networks (ResNets), two Support Vector Machines (SVMs), and one Logistic Regression (LR). <cit.> applied a combination of a deep convolutional generative adversarial network (DCGAN) and a support vector machine (SVM) to the HTRU-Medlat and PMPS surveys. They then proposed a model combining the DCGAN with MLP neural networks trained by the pseudo-inverse learning autoencoder (PILAE) algorithm, achieving excellent results on class-imbalanced data sets <cit.>. Recently, <cit.> designed a 14-layer deep residual network for PCS, using an over-sampling technique to adjust the imbalance ratio of the training data. The experiments on HTRU achieved both a high precision and a 100% recall.
As far as intelligent identification is concerned, deep learning methods have shown significant promise for PCS. More related works can be found in <cit.>. Although these models show an advantage in performance, they fail to quantify the significant differences between pulsars and non-pulsars, since the deep features they extract are difficult to explain or interpret.

One of the greatest challenges in ML is the class imbalance problem <cit.>, where the distribution of labelled instances is skewed. In the case of a binary classification problem, class imbalance means that the number of instances in one class is far smaller than that in the other class. We refer to the larger and smaller classes as the majority and the minority, respectively. The ratio between the total number of majority instances and that of minority instances is called the imbalance ratio (IR). A classifier trained on data with a high IR tends to assign an unknown item to the majority class, resulting in low recall. For instance, the HTRU dataset is highly class-imbalanced, as the number of non-pulsar signals is close to 90,000 while the number of pulsar signals is only 1196. To address the class imbalance problem, oversampling methods have been implemented before training. For instance, Morello et al. <cit.> balanced their training set by randomly oversampling to a 4:1 ratio of non-pulsars to pulsars. Bethapudi et al. <cit.> and Devine et al. <cit.> adopted the Synthetic Minority Over-sampling Technique <cit.> to produce more positive samples and raise the recall of their models. SMOTE is one of the most commonly used oversampling methods for handling an imbalanced data distribution. It generates virtual instances for the minority class by linear interpolation: instances are generated by randomly choosing one or more of the k nearest neighbors of each example in the minority class. After the oversampling process, the data are reconstructed to be class-balanced. However, this is not reflective of the real problem faced and is an artefact of data processing, since the generated instances are virtual, random, and not from the real world.

In our work, instead of raising the performance of PCS models by balancing the training candidates, we improve their accuracy from the perspective of feature representation, through what is called feature selection or variable selection <cit.> in ML terminology. Specifically, feature selection is the process of selecting a subset of relevant features from a candidate feature pool. Feature selection is necessary in the data preprocessing stage, as some of the features may be redundant or irrelevant; such features decrease the performance of the sifting model, while a well-designed feature selection algorithm can significantly improve its predictive ability.

For the above considerations, a feature selection algorithm called K-fold Relief-Greedy (KFRG) is proposed in this work. KFRG is a purpose-built two-stage algorithm: the first stage filters out some irrelevant features from the candidate features by their Relief scores, while the second stage selects the most relevant features in a greedy way. To verify the effectiveness of KFRG for PCS, several typical ML classifiers are evaluated, including C4.5 <cit.>, AdaBoost <cit.>, Gradient Boosting Classification <cit.>, XGBoost <cit.>, etc. Our experiments were performed on the public data of HTRU <cit.>.

The article is arranged as follows.
Section 2 gives a description of the HTRU dataset as well as some related works. In Section 3, 22 artificial features are introduced, including 6 empirical features from <cit.>, 8 statistical features designed by <cit.>, and 8 additional statistical features proposed by <cit.>. These features are collected into a pool for the subsequent selection process. In Section 4, KFRG is proposed as a feature selection algorithm for PCS. Experiments based on KFRG on the HTRU survey data are presented in Section 5, and the discussion and conclusions are given in Section 6.

§ DATA AND PRELIMINARY WORKS

§.§ Pulsar candidates and the HTRU dataset

A pulsar candidate is originally a piece of signal from the receiver of a radio telescope during the observation time. Most commonly, it is processed by the PulsaR Exploration and Search TOolkit (PRESTO) <cit.>, typical software for pulsar searching and analysis. The candidate is then represented by a series of physical values and a series of diagnostic plots, as Fig. <ref> shows. On the left, the plots from top to bottom are: a sub-band plot, a sub-integration plot, and a folded profile of the signal. A sub-band plot displays the pulse in different bands of observed frequencies; a sub-integration plot shows the pulse in the time domain; a folded profile is the signal folded over its sub-bands in frequency or over its sub-integrations in period. On the right are two plots. One is a grid-searching plot for the dispersion measure (DM) and period; the other is a DM-searching curve. The DM measures the number of electrons that the pulsar's signal travels through from the source to the Earth. However, the real DM is unknown and has to be obtained by trials. The grid-searching plot on the right top of Fig. <ref> records the change of the SNR as the trial DMs and trial periods vary, while the DM-searching curve on the right middle describes the relationship between the trial DM and its corresponding SNR; the peak of the curve indicates the most likely value of the DM.

Our work is conducted on the HTRU survey. The HTRU data set is observed with two observatories, the Parkes radio telescope in Australia and the Effelsberg 7-beam system in Germany. HTRU is an ambitious (6000 hours) project to search for pulsars and fast transients in the entire sky, which is split into three areas: Low-latitude (covering ± 3.5^∘), Mid-latitude (Medlat; covering ± 15^∘) and High-latitude (covering the remaining sky < 10^∘). The pipeline searched for pulsar signals with DMs from 0 to 400 pc· cm^-3 (DM is often quoted in units of pc· cm^-3, which makes it easy to estimate the distance between a given pulsar and the Earth), and also performed an acceleration search between -50 and +50 m· s^-2. The HTRU data set was publicly released by <cit.> and is available online at http://astronomy.swin.edu.au/~vmorello/. It consists of 1196 real pulsar candidates and 89,995 non-pulsars, and it is highly class-imbalanced, as only a tiny fraction of the candidates come from real pulsar signals (Table <ref>).

§.§ Machine Learning Classifiers

PCS can be described as a binary supervised classification problem in ML. Supervised learning <cit.> is the ML task of learning a function that maps instances to their labels. In particular, a PCS classifier aims to learn a function mapping the features of pulsar candidates to their categories: pulsar or non-pulsar. To evaluate the effectiveness of selected features, the performance of the classifiers has to be estimated. In our work, feature selection algorithms are evaluated by 7 classifiers.
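For concreteness, the classifier pool described in the next paragraph, together with the metrics defined in the following subsection, might be assembled as in the sketch below. This is an illustrative assumption: scikit-learn and xgboost implementations stand in for the cited methods (in particular, scikit-learn's CART-style DecisionTreeClassifier plays the role of the C4.5-style decision tree), and the hyperparameters are left at values chosen only for the example.

from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# The three ordinary classifiers and the four ensemble classifiers.
classifiers = {
    "DT": DecisionTreeClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "GBDT": GradientBoostingClassifier(),
    "XGBoost": XGBClassifier(),
    "RF": RandomForestClassifier(),
}

def metrics(tp, tn, fp, fn):
    """Recall, precision, F1 and FPR from the confusion-matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (tn + fp)
    return recall, precision, f1, fpr

Throughout the experiments, the four metrics would then be read off the confusion matrix of each classifier on the held-out candidates.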
Among them, Decision Tree (DT) <cit.>, Logistic Regression (LR) <cit.> and Support Vector Machine (SVM) <cit.> are ordinary classifiers, while Adaptive Boosting (AdaBoost) <cit.>, Gradient Boosting Classification <cit.>, eXtreme Gradient Boosting (XGBoost) <cit.> and Random Forest (RF) <cit.> are ensemble learning classifiers. The principles of these classifiers are different and representative. For example, a typical DT classifier is based on the information gain ratio, while an SVM tries to find the hyperplane that gives the largest separation between the two classes. Ensemble methods <cit.> combine multiple weak classifiers, such as DTs, to obtain a strong classifier. Interested readers can refer to the corresponding references for details <cit.>.

§.§ Performance metric

To evaluate the performance of a classifier on class-imbalanced data, typically on pulsar candidate data, the four most relevant metrics are given. They are the recall rate, the precision rate, the F_1 score and the false positive rate (FPR), and they can be expressed in terms of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN).

In binary classification, recall is defined by TP/(TP+FN), where TP denotes the number of correctly predicted pulsars and TP+FN the total number of real pulsars; it measures how many of the real pulsars are correctly identified. Precision, defined by TP/(TP+FP), measures how many true pulsars are predicted correctly out of all the candidates predicted as pulsars. However, recall and precision are in tension with each other, as one can often be increased only at the cost of reducing the other. Therefore, the F_1 score, defined as the harmonic mean of recall and precision, i.e., (2· Precision · Recall)/(Precision + Recall), is a trade-off between them. As for the FPR, it measures the fraction of mislabelled non-pulsars out of all the non-pulsar candidates, FP/(TN+FP), and it can be inferred from recall and precision. Therefore, we focus on the recall, the precision and the F_1 score of a classifier.

§ FEATURE POOL

Before feature selection, a set of candidate features has to be collected for further selection; this set is called a feature pool. The feature selection algorithm is applied to this pool to output a feature subset with better representation. Considering that some candidate features are trivial for PCS, several guidelines for a feature pool are discussed in this section.

§.§ Guidelines of candidate features

To extract robust and useful features, guidelines for feature design have been proposed by PCS researchers. <cit.> gave several suggestions, such as "ensuring complete robustness to noisy data" in order to "exploit properly in the low-SNR regime", and "limiting the number of features" to avoid "the curse of dimensions". <cit.> suggested that features should be designed to maximize the separation between positive and negative candidates, reducing the impact of class imbalance. Based on the suggestions from <cit.> and <cit.>, guidelines for candidate features in a feature pool are summarized as follows. Candidate features:

i. should be distinguishable enough between pulsars and non-pulsars. A distinguishable feature will greatly improve the performance of the classifier.

ii. should be diversified. Considering the diversity of the feature sources, both empirical features and statistical features should be included in the feature pool.

iii. should be full-covered.
Features should be extracted from all the main diagnostic plots, especially the sub-integration plot, the sub-band plot, the folded profile and the DM searching curve.

iv. can be easily extracted and calculated.

v. should be limited to a moderate total number. If the total number is too small, some relevant features may be missed; if it is too large, the computing cost of the feature selection algorithms increases.

§.§ Candidate features in our work

Following the guidelines in Section <ref>, twenty-two features have been collected (Table <ref>) as candidate features from <cit.> and <cit.>. Among them, six features were defined by Morello et al. <cit.> to build the SPINN model, eight statistical features were proposed by Lyon et al. <cit.> as inputs to the GHVFDT, and eight additional features were introduced by Tan et al. <cit.> to develop an ensemble classifier comprising five different decision trees. Details of these features are described in Table <ref>.

To demonstrate the discriminating capabilities of these features, one statistical approach is to show the distributions of pulsars and non-pulsars for each feature by box plots, which graphically display the locality, spread, and skewness of the two groups. Figure <ref> gives box plots of our candidate features on HTRU, with two box plots per feature. The red boxes describe the feature distribution for known pulsars, while the white ones are for non-pulsars, which mainly consist of RFI. Note that the data of each feature were scaled by z-score, with mean zero and standard deviation one; the resulting z-score measures the number of standard deviations a given data point lies from the mean. Generally, the smaller the overlap of the red box and the white box of a feature, the better the separability of that feature. However, the usefulness of the features judged from their box plots is only at a visual level. A quantitative investigation of these features is given in the next section.

§ FEATURE SELECTION ALGORITHMS

§.§ Motivation

A feature selection algorithm is a search technique for a feature subset from a feature pool. Irrelevant or redundant features not only increase the computational cost of ML models but also damage their performance. Well-selected features can be helpful when facing the data imbalance problem <cit.>, and some practical algorithms have been proposed for the two-class imbalanced data problem <cit.>.

There are mainly three categories of feature selection algorithms: filters, wrappers, and embedded methods. Filter methods use a proxy measure to score a feature subset; common measures include the mutual information <cit.>, the point-biserial correlation coefficient <cit.>, and the Relief score <cit.>. Wrapper methods score feature subsets using a predictive model; common methods include grid search, greedy search <cit.> and recursive feature elimination <cit.>. An embedded feature selection method is a machine learning algorithm that returns a model using a limited number of features. Our proposed feature selection algorithm combines the idea of a filter, Relief, with a wrapper method, greedy search. On the one hand, PCS, as a binary supervised classification task, is highly dependent on features that discriminate pulsars from non-pulsars. Thus, filter methods are considered first, as they measure the relation between features and labels. In fact, filter measures capture the usefulness of a feature subset based only on the data, independently of any classifier.
The Relief algorithm is one of the best filter measures when compared with other filter methods: it weights the features and avoids the high computational cost of a combinatorial search. Thus, our proposed feature selection approach for PCS is a Relief-based algorithm. On the other hand, features selected according to their Relief scores may be redundant: two or more features may have high Relief scores while being strongly correlated, since one relevant feature may be redundant in the presence of another relevant feature <cit.>. To remove these redundant features, a greedy technique is applied after the Relief scoring.

However, greedy search and Relief each have shortcomings for feature selection. Although the Relief score is able to filter out irrelevant features, it cannot detect redundant ones; if two features share the same information in terms of a correlation measure, both of them are likely to be judged as relevant or irrelevant together. As for greedy search, it can be used to reduce the number of features, but its computational cost grows quadratically with the number of features, which can make the computation unaffordable.

Based on the considerations above, we combine Relief with a greedy algorithm to propose the KFRG feature selection algorithm for PCS. In the first stage, it filters out irrelevant features according to their Relief scores, while in the second stage, it removes the redundant features and selects the most relevant ones by a forward greedy search strategy. Experimental investigations on HTRU show that it improves the performance of most classifiers and achieves both high recall and high precision (Section <ref>).

§.§ Relief Algorithm

Relief <cit.> is a filter algorithm for feature selection which is notably sensitive to feature interactions. It calculates a score for each feature, which can then be used to rank features and select the top-scoring ones; alternatively, these scores may be applied as feature weights to guide downstream modeling. Relief is able to detect conditional dependencies between features and labels (pulsars and non-pulsars) and provides a unified view of feature estimation in regression and classification. It is described by Formula (<ref>): the greater the Relief score of a feature, the more distinguishable the feature is, and the more likely it is to be selected.

Let D={(X_i,y_i) | X_i=(x_i^1,x_i^2,⋯,x_i^d), i∈ I} denote a dataset, where y_i is the label of X_i and x_i^j is the jth component of X_i. Denote by x_i,nh^j the jth feature of the nearest instance whose label is the same as that of X_i (the 'nearest hit'), and by x_i,nm^j the jth feature of the nearest instance whose label is different from that of X_i (the 'nearest miss'). Then the Relief score of the jth feature (denoted δ^j) is defined as

δ^j = 1/|I| ∑_i∈ I ( -diff(x_i^j, x_i,nh^j)^2 + diff(x_i^j, x_i,nm^j)^2 ),

where diff represents the difference of two components: if the jth feature is discrete, diff(x_a^j,x_b^j) = 0 when x_a^j = x_b^j and 1 when x_a^j ≠ x_b^j, while diff(x_a^j,x_b^j) = |x_a^j - x_b^j| if the jth feature is continuous.

§.§ Greedy Algorithm

A greedy algorithm is an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage <cit.>. A greedy strategy does not usually produce a globally optimal solution.
Nonetheless, a greedy heuristic may yield a locally optimal solution that approximates the globally optimal one in a reasonable amount of time. In our algorithm, the objective is to maximize the F_1 score of a classifier, and thus our greedy search is designed to choose the best feature step by step from the remaining feature pool. Here, "the best feature candidate" is the feature that contributes most to the increase of the classifier's F_1 score. Accordingly, the stopping criterion of the greedy iteration is that the F_1 score no longer increases, or that the number of selected features exceeds a preset threshold Maxlen. Maxlen is a hyperparameter controlling the maximum number of selected features. On the one hand, Maxlen should be large enough to admit as many candidate features as possible; a small Maxlen may miss some relevant features and result in low performance. On the other hand, as Maxlen increases, so does the computational complexity <cit.>. Computational complexity is an important part of algorithm design, as it gives useful information about the amount of resources required to run an algorithm. In fact, the computational complexity of the greedy stage can be expressed as O(Maxlen^2)× O(𝔏) according to Algorithm <ref>, where O(Maxlen^2) means that the run time or space requirements grow as the square of Maxlen, and O(𝔏) represents the computational complexity of 𝔏, which depends on both the choice of the classifier and the size of the input features. Thus, a large Maxlen implies a large computing cost. To determine a suitable Maxlen, experiments with Maxlen ranging from 2 to 10 were carried out. The experimental investigation shows that as Maxlen increases, the average performance metrics improve rapidly at first and then remain at a similar level once Maxlen exceeds 8; the average recall and precision stay around 97.2% and 98.0% for Maxlen larger than 8, as Table <ref> shows. Considering both the computational complexity and the performance, Maxlen is set to 8. Following the description above, our greedy feature selection can be described as Algorithm <ref>.

§.§ KFRG Algorithm

Relief is easy to operate, but it is not very satisfactory in some class-imbalanced scenarios, since it may underestimate features with high discriminative ability for the minority class, and it ignores the sparse distribution of the minority class samples <cit.>. That is, a feature with a high Relief score pays more attention to the non-pulsars, which form the majority class, and is weak at identifying the pulsars, which form the minority class; thus, promising pulsars will be missed. To overcome these flaws, the K-fold Relief algorithm (KFR) was designed. The key improvement of KFR is to balance the data by recycling the minority class and sampling from the majority class. Firstly, split the training data into minority samples and majority samples. Then, produce K disjoint subsets from the majority samples at random, and merge each subset with the minority samples into K new data sets, each of which is relatively balanced and contains the same minority samples. Here, K is a preset integer, normally the ratio of majority to minority samples in the training data set. Finally, calculate the mean of the Relief scores over the K sets. KFR is able to promote the importance of the minority class in the estimation of relevant features. Combining KFR with the greedy algorithm, we obtain KFRG.
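Before summarizing the full procedure, a compact sketch of the two ingredients, the Relief score of Formula (<ref>) with its K-fold balanced variant and the forward greedy search, is given below. This is an illustrative reading of the algorithms, not the authors' implementation: the brute-force nearest-neighbour search, the single held-out split used to score F_1, and all helper names are assumptions made for the example.

import numpy as np
from sklearn.base import clone
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def relief_scores(X, y):
    """Relief score of each feature (the delta^j formula above), for continuous features."""
    n, d = X.shape
    scores = np.zeros(d)
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                                   # exclude the instance itself
        nh = np.argmin(np.where(y == y[i], dist, np.inf))  # nearest hit
        nm = np.argmin(np.where(y != y[i], dist, np.inf))  # nearest miss
        scores += -np.abs(X[i] - X[nh]) ** 2 + np.abs(X[i] - X[nm]) ** 2
    return scores / n

def kfold_relief(X, y, minority_label=1, seed=0):
    """KFR: mean Relief score over K balanced folds that each reuse all minority samples."""
    rng = np.random.default_rng(seed)
    mino = np.where(y == minority_label)[0]
    majo = rng.permutation(np.where(y != minority_label)[0])
    K = max(1, len(majo) // len(mino))         # K defaults to the imbalance ratio
    folds = [np.concatenate([mino, part]) for part in np.array_split(majo, K)]
    return np.mean([relief_scores(X[f], y[f]) for f in folds], axis=0)

def greedy_select(clf, X, y, candidates, maxlen=8):
    """Forward greedy search maximising the F1 score of the given classifier."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
    selected, best_f1 = [], 0.0
    while len(selected) < maxlen:
        trial = [(f1_score(yte, clone(clf).fit(Xtr[:, selected + [j]], ytr)
                           .predict(Xte[:, selected + [j]])), j)
                 for j in candidates if j not in selected]
        if not trial:
            break
        f1, j = max(trial)
        if f1 <= best_f1:                      # stop once F1 no longer increases
            break
        selected.append(j)
        best_f1 = f1
    return selected

In this reading, KFRG amounts to ranking the features with kfold_relief, keeping the top-scoring fraction as candidates, and passing them to greedy_select together with the classifier under study.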
KFRG is a two-stage algorithm: the first stage removes irrelevant features from the candidate features according to their Relief scores, while the second stage selects the most relevant features in a greedy way. It is described in Algorithm <ref>.

§ EXPERIMENTS AND ANALYSIS

In this section, experiments based on KFRG are carried out on HTRU. Firstly, the features selected by KFR and by KFRG are computed. Then, to demonstrate the improvement brought by KFRG, an ablation study is given to show the contribution of each component to KFRG. Afterwards, comparative experiments with different feature groups are carried out to verify the effectiveness of KFRG. Finally, comparative experiments between the proposed KFRG and oversampling approaches are presented, and the advantages and disadvantages of each are discussed.

§.§ Results of KFR and KFRG

§.§.§ Selected features based on KFR scores

The KFR algorithm is the first stage of Algorithm <ref>; it outputs the mean of the Relief scores over the K-fold training sets. The Relief scores, representing the weights of the features, were calculated and are shown as a bar graph in Fig. <ref> for HTRU, where both the scores and their ranks are given. To select the more relevant features, a preset threshold keeps the features with higher scores. Here, a ratio of 0.618 is preset to ensure that the number of selected features is more than half of the total number of candidate features. That is, about 12 features out of 22 are considered highly relevant (blue bars), and the other 10 (gray bars) are less relevant.

§.§.§ Selected features based on KFRG

Based on KFRG (Algorithm <ref>), the selected features as well as their numbers were obtained, and the results are given in Table <ref>.

• The dimension of the features is greatly reduced. Most of the features in the feature pool are removed by the KFRG algorithm: the dimension of the selected features is cut down from 22 to fewer than 8, and some classifiers need only 3 features to build their models.

• The selected features and their numbers vary with the classifiers. For example, only the three features M5, L4 and T5 are finally chosen for DT, while the five features M2, M3, M5, L4 and T1 are kept for RF.

• Features M5 and L4 are used by all of the classifiers. It is shown that features M5 from <cit.> and L4 from <cit.> are frequently selected. Further discussion is given in Section <ref>.

§.§ Ablation study of KFRG

The KFRG feature selection algorithm improves upon the Relief algorithm by combining K-fold Relief (KFR) with a greedy algorithm. To demonstrate the effectiveness of KFRG, an ablation study in a stacked fashion is given: step by step, the performance metrics of all the classifiers were calculated from Relief to KFR, and finally to KFRG.

Table <ref> gives the numerical values of the recall, precision, F_1 score and false positive rate of each classifier with the three feature selection algorithms (Relief, KFR and KFRG), while Figure <ref> plots their averaged performance metrics. It shows that KFR performs better than Relief, and KFRG performs best of all. For one thing, KFR gives better recall, better precision and lower FPRs for all classifiers than the original Relief technique; for example, the average recall rises from 94.8% to 96.1%. These improvements come from the K-fold Relief operation, as KFR is designed for the imbalance problem. For another, KFRG keeps a precision as high as that of KFR and raises the recall by a further 0.9%.
Furthermore, KFRG achieves the best F_1 score of 97.5%. These improvements come from the greedy step, as it aims to maximize the F_1 score by removing the redundant features.

§.§ Comparison of performance with different features

To verify the effectiveness of KFRG, comparative experiments with different feature groups were carried out, where three subsets of the feature pool were considered as the inputs of the classifiers: the features from <cit.>, the features from <cit.>, and the features from KFRG. The performance metrics of recall (Rec), precision (Pre), F_1 score and false positive rate (FPR) are given in Table <ref>. The experimental results in Table <ref> are summarized as follows.

• Both the recall and the precision are significantly improved. The average recall of the classifiers is 97.2%, and most of the classifiers achieve recall rates ranging from 96% to 99% with the KFRG algorithm, which implies that most of the real pulsar signals are well detected after feature selection. For instance, the recall and precision of DT are 89.7% and 95.4%, respectively, with the features M1-M6, while they rise to 96.4% and 98% based on KFRG. Also, the average precision is as high as 98.0% based on KFRG, an increase of 3.9% on average compared with M1-M6 and L1-L8.

• The FPR of the classifiers is reduced to 0.05% on average. Most of the classifiers achieve an FPR of less than 0.05%. A low FPR implies that the selected features are very effective at excluding non-pulsars.

• The F_1 scores based on the selected features are increased. The best F_1 in our experiments is 98.3%, reached in the following cases: using the five selected features [M3, M5, L3, L4, T1], GBDT achieved a recall of 97.8% and a precision of 98.9%; using another five selected features [M5, M6, L4, T1, T5], XGBoost also achieved a high F_1 score of 98.3%, as good as the GBDT classifier. A better F_1 score implies that both recall and precision increase, since the F_1 score is the harmonic mean of recall and precision. In other words, more potential pulsar signals are correctly recognized and fewer non-pulsar signals are misjudged in these cases.

§.§ Comparison of performance with different data-balancing techniques

Since the KFRG feature selection alleviates the imbalance problem in ML, the performance based on KFRG was compared with some other widely used data-balancing techniques of oversampling, including random oversampling, SMOTE <cit.>, Borderline SMOTE <cit.> and ADASYN <cit.>. Furthermore, we also implemented a KFRG-SMOTE method, a combination of the proposed KFRG and the SMOTE technique. We evaluated the metrics of each algorithm on the different classifiers, and then took the mean of the performance of each classifier on each evaluation statistic to compare the feature selection approach with the oversampling approaches (Table <ref>). It shows that KFRG offers better performance than oversampling techniques such as SMOTE, Borderline SMOTE and ADASYN, and that random oversampling performs worst among them. KFRG achieves a similar recall but improved precision and a lower FPR, which implies that the classifiers based on KFRG are very strict in the criteria for classifying candidates as pulsars, and only a few non-pulsars are misjudged. Moreover, the performance of KFRG-SMOTE is as good as that of KFRG; they share a high F_1 score of 97.5%, and their FPRs both lie at a low level between 0.03% and 0.04%. Compared with oversampling techniques, KFRG has its advantages.
One of the advantages is that KFRG has good generalization ability and avoids overfitting problems. In fact, random oversampling tends to suffer from overfitting, as the minority samples in its training set are duplicated at random; as a result, the trained model becomes too specific to the training data and may not generalize well to new data. The other oversampling techniques are all based on SMOTE, which mitigates the harm of a skewed distribution by creating new minority class samples. It generates a synthetic sample x_new by linear interpolation between x and y, with x_new = x + (y-x)×α, where α is a random number in the range [0,1] and y is one of the k nearest neighbours (kNN) of x in the minority set. However, in many cases the kNN-based approach may generate wrong minority class samples, since the above equation forces x_new to lie on the line segment between x and y. In addition, it is difficult to choose an appropriate value of k for the kNN a priori, as the suitable k varies with the distribution of the minority and majority samples. Unlike these data-level oversampling methods, the KFRG feature selection neither duplicates nor generates any additional data; it keeps the distribution and the class imbalance ratio of the data.

The other advantage is that the result of KFRG is interpretable: the selected features are the most distinguishing ones between pulsars and non-pulsars. In addition, KFRG performs well on the basis of the data characteristics, regardless of the classifier used. Although the sets of features selected by KFRG may change with the classifier, most of the selected features are the same, as explained in the next section.

§ DISCUSSION

KFRG has been evaluated on HTRU. The results show that models based on KFRG achieve higher recall, precision and F_1 scores and a lower FPR than those without any feature selection. In other words, the selected features are distinguishable enough to pick out pulsar signals from the candidates. The improvement of the performance metrics has two sources. For one thing, the Relief algorithm filters out most of the irrelevant features from the feature pool. As explained above, the Relief score is based on the feature-value differences between nearest-neighbour instance pairs of the same class and of different classes; a feature with a lower Relief score therefore has a large overlap between the two categories and is considered irrelevant. For another, KFRG enables a classifier to select its most relevant features in a greedy way, as its objective function is the F_1 score of the given classifier. Although the features selected by KFRG may vary for different classifiers, the most relevant features are almost the same.

The importance of the features was evaluated by their frequency of being selected and ranked with stars according to the KFRG results (Table <ref>). Features M5 and L4 are three-star, as they are selected by all the classifiers. Features M3, M2, T1 and T5 are ranked two-star, as they are chosen by about half of the classifiers. Features selected by only one or two classifiers are marked with one star. The remaining features were never chosen by KFRG, implying that they are redundant or irrelevant according to our experiments. Notice that L4 and M5 are the most important features for most of the classifiers; they are discussed in detail below.
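To make the two definitions that follow concrete, a minimal sketch of how L4 and M5 might be computed from a candidate is given here. The input arrays (a folded profile and the per-sub-integration S/N values) and all names are illustrative assumptions; this is not the extraction code used to build the feature pool.

import numpy as np

def skewness_pf(profile):
    """L4: skewness Pf_s of the folded profile P = {p_i}."""
    p = np.asarray(profile, dtype=float)
    mu, sigma = p.mean(), p.std()
    return np.mean(((p - mu) / sigma) ** 3)

def chi_snr(subint_snr):
    """M5: the persistence statistic chi_SNR averaged over sub-integrations.

    subint_snr holds the S/N of the candidate in each sub-integration;
    b = 16/sqrt(n_sub) is the benchmark S/N.
    """
    s = np.asarray(subint_snr, dtype=float)
    b = 16.0 / np.sqrt(len(s))
    return np.mean(np.where(s >= 0.0, 1.0 - np.exp(-s / b), s / b))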
L4 is one of the most relevant features for all the classifiers. It denotes the skewness of the folded profile (Pf_s), a statistic of the distribution of the folded pulse profile P={p_i}_i=1^n, i.e.,

Pf_s = 1/n∑_i=1^n((p_i-μ)/σ)^3 = (1/n∑_i=1^n(p_i-μ)^3) / (1/n∑_i=1^n(p_i-μ)^2)^3/2,

where μ and σ are the mean and the standard deviation of the p_i, respectively. A candidate with a large L4 has a strongly skewed folded profile. Skewness describes the asymmetry of the distribution of a signal, and a signal with large skewness is likely one with a distinctly detectable pulse.

M5, standing for χ_(SNR), represents the persistence of the signal in the time domain, defined as the average of the scores χ_(s) <cit.>, i.e., χ_(SNR) = 1/N∑_i=1^Nχ_(s), with

χ_(s) = 1-exp(-s/b) for s ≥ 0, and χ_(s) = s/b for s < 0,

where s is the SNR of the candidate in a sub-integration and b=16/√(n_sub) represents the benchmark of the SNR, n_sub being the total number of sub-integrations. The design basis of M5 is the fact that a genuine pulsar is expected to be consistently visible during most of an observation, whereas most man-made signals like RFI last for a very short time and then become invisible in part of the observation. Therefore, M5 provides an effective selection criterion against such impulsive artificial signals. A scatter plot of features M5 and L4 in Fig. <ref> shows that most of the non-pulsars can be easily separated from the pulsars with these two features, as candidates with large L4 and M5 tend to be judged as pulsars. This explains why they are frequently selected by most of the classifiers and are very significant for pulsar candidate sifting.

In this work, a novel feature selection algorithm, KFRG, is proposed to improve the performance of PCS models in the class-imbalanced case. KFRG combines Relief scores with a greedy algorithm to remove most of the redundant and irrelevant features. Experiments on HTRU show that KFRG is effective: compared with models without any feature selection, models based on KFRG features achieve a higher recall and a lower FPR. Compared with some typical oversampling techniques, KFRG is more robust and interpretable, besides giving better performance metrics. The importance of the features selected by KFRG is also described and explained in our work. These experimental conclusions are practical, providing potential guidance for the study of machine learning methods for candidate sifting and serving other surveys of the next-generation radio telescopes.

§ ACKNOWLEDGEMENTS

The authors are grateful for support from the National Natural Science Foundation of China (grant Nos. 11973022, 12373108), the Natural Science Foundation of Guangdong Province (No. 2020A1515010710), and the Hanshan Normal University Startup Foundation for Doctor Scientific Research (No. QD202129).

§ DATA AVAILABILITY

The data underlying this article are publicly available from the Centre for Astrophysics and Supercomputing at http://astronomy.swin.edu.au/~vmorello/, released by <cit.>. The detailed description of the data is in Section <ref>.
http://arxiv.org/abs/2312.16366v1
{ "authors": [ "Haitao Lin", "Xiangru Li" ], "categories": [ "astro-ph.IM", "astro-ph.HE" ], "primary_category": "astro-ph.IM", "published": "20231227001927", "title": "Dealing with the data imbalance problem on pulsar candidates sifting based on feature selection" }
[email protected] School of Science, Walailak University, Thasala, Nakhon Si Thammarat, 80160, Thailand. [email protected] Strong Gravity Group, Department of Physics, Faculty of Science, Silpakorn University, Nakhon Pathom 73000, Thailand [email protected] Strong Gravity Group, Department of Physics, Faculty of Science, Silpakorn University, Nakhon Pathom 73000, Thailand

We construct asymptotically flat, static spherically symmetric black holes with a regular centre in f(R,T) gravity coupled to a nonlinear electrodynamics Lagrangian. We obtain generalized metric functions of the Bardeen and Hayward black holes. The null, weak and strong energy conditions of these solutions are discussed. All the energy conditions hold outside the black hole's outer event horizon for appropriate choices of parameters. The quasinormal modes of a massive scalar perturbation are also investigated. The quasinormal frequencies are computed via the sixth-order Wentzel-Kramers-Brillouin (WKB) method with Padé approximation. All the imaginary parts of the frequencies are found to be negative. Finally, we provide an analysis in the eikonal limit.

Magnetically charged regular black holes in f(R,T) gravity coupled to nonlinear electrodynamics
Supakchai Ponglertsakul
January 14, 2024
===============================================================================================

§ INTRODUCTION

The most well-known gravitational theory describing the relation between spacetime and matter is Einstein's general relativity (GR). For over a century, this theory has been well tested by observations and experiments in the weak-field limit, such as in our solar system and in highly dense binary systems <cit.>. However, there are numerous open questions to which GR fails to provide answers, for instance, the accelerated expansion of the universe <cit.> and the galaxy rotation curve <cit.>. Rather than adding auxiliary fields to the theory, one can construct modifications of GR as extensions based on the original Einstein theory. One such modification is f(R) gravity, where the Ricci scalar R in the Einstein-Hilbert action is replaced with an arbitrary function of R <cit.>. This modification can describe the accelerated expansion of the Universe without relying on exotic matter <cit.>. Moreover, generalizations of the f(R) gravity theory lead to extra degrees of freedom related to curvature invariants and scalar fields; such theories are called Extended Theories of Gravity (ETG) <cit.>. These additional degrees of freedom play a major role as effective fluids, unlike the ordinary-matter fluids adopted as sources of the field equations. One class of ETG is the f(T) gravity theory, in which an extension of torsional gravity with an arbitrary function of the torsion scalar f(T) plays a major role in addressing cosmological and astrophysical problems <cit.>. Another class of extensions of Einstein's gravity is the f(Q) gravity theory, constructed from symmetric teleparallel gravity, which is based on the non-metricity scalar Q. This modification describes a stable dark energy driving the accelerated universe, in which the matter perturbation remains constant <cit.>.

In addition, the f(R,T) gravity theory is designed to add matter components into the gravitational action by employing an arbitrary function of the Ricci scalar R along with the trace of the energy-momentum tensor T.
This was proposed in <cit.>, where the modified field equation is derived and a cosmological solution is analysed by introducing a self-interacting scalar field. Numerous works on f(R,T) gravity have since appeared. Cosmological solutions based on a homogeneous and isotropic spacetime are studied through a phase-space analysis in <cit.>. In addition, several cosmological solutions of the f(R,T) theory have been explored extensively in refs <cit.>. The violation of the energy conditions is investigated in <cit.>. Moreover, thermodynamic properties of f(R,T) gravity are explored in <cit.>. Within the f(R,T) framework, various compact objects have also been constructed and studied, e.g., wormholes <cit.> and compact stars <cit.>.
Black holes are among the most fundamental objects in the universe. They play a crucial role in almost all relativistic theories of gravity. The detection of gravitational waves <cit.> and the first image of a black hole <cit.> marked the beginning of the black hole astronomy era, making black holes extremely important in present-day astrophysics research. Black holes are solutions of relativistic gravitational field equations. According to GR, an essential singularity is hidden behind each black hole's horizon. The regular black hole proposed by Bardeen <cit.> offers a possibility to obtain a black hole without a singularity. Later, it was shown that regular black holes are solutions of Einstein's gravity coupled to nonlinear electrodynamics <cit.> and that the Bardeen black hole can be regarded as a nonlinear magnetic monopole <cit.>. The Bardeen black hole was subsequently extended to include a cosmological constant <cit.>. Charged regular black holes with various mass functions are studied in <cit.>. In addition, a modification of the Reissner-Nordström black hole yields a regular charged black hole whose entropy obeys Bekenstein's area law <cit.>. We refer interested readers to ref <cit.> for a recent review on regular black holes with nonlinear electrodynamics sources. Beyond GR, regular black holes with nonlinear electrodynamics have been explored extensively, e.g., in Einstein-Gauss-Bonnet theory <cit.> and f(R) gravity <cit.>.
In f(R,T) gravity, an exact black hole solution surrounded by an anisotropic fluid has been explored <cit.>. The energy conditions for each particular equation-of-state parameter w are discussed in <cit.>. This prompts the question of whether there are other black hole solutions in f(R,T) gravity. Thus, in this work, we construct asymptotically flat, static spherically symmetric regular black holes within the framework of f(R,T) gravity. There are two approaches to obtaining the black hole solutions. Firstly, we choose a specific mass function that yields a regular black hole and find the corresponding nonlinear electrodynamics Lagrangian (L_NED). Secondly, we specify L_NED and find the corresponding mass function. From both approaches, we obtain novel magnetically charged regular black holes. Remarkably, from the second approach, we obtain a metric function that can be considered a generalization of the Bardeen and Hayward black holes <cit.>. We then analyse the null, weak and strong energy conditions of these solutions. The quasinormal modes and the eikonal limit of these black holes are also investigated.
This paper is organized as follows. In Sec <ref>, we discuss f(R,T) gravity coupled to nonlinear electrodynamics.
The modified field equation is derived and the corresponding energy-momentum tensor is given. The modified field equations are then solved and the regular black holes are explored in Sec <ref>. We discuss the energy conditions in Sec <ref>, and study quasinormal modes and the eikonal limit in Sec <ref>. Lastly, we summarize our results and discuss possible extensions of this work in Sec <ref>.
§ BASIC EQUATIONS
We consider f(R,T) gravity coupled to nonlinear electrodynamics (NED). This theory is described by S = 1/2∫√(-g) d^4x f(R,T) + ∫√(-g) d^4x L_NED, where f(R,T) is an arbitrary function of the Ricci scalar R and the trace T of the energy-momentum tensor of the matter T_μν. The nonlinear electrodynamics Lagrangian is given by L_NED(F), where F = -1/4 F_μνF^μν. The Faraday-Maxwell tensor is defined in terms of the gauge potential as F_μν = ∂_μA_ν - ∂_νA_μ. Varying this action with respect to δg^μν yields the modified Einstein field equation 𝒢_μν ≡ f_R R_μν + (g_μν□ - ∇_μ∇_ν)f_R - 1/2 f g_μν = T_μν - f_T (T_μν + Θ_μν), where f_R = ∂ f/∂ R, f_T = ∂ f/∂ T and □ = ∇_α∇^α. The energy-momentum tensor T_μν and the tensor Θ_μν are computed from T_μν ≡ -2/√(-g) δ(√(-g)L_NED)/δ g^μν, Θ_μν ≡ g^αβ δ T_αβ/δ g^μν. With nonlinear electrodynamics sources, the explicit forms of T_μν and Θ_μν are T_μν = g_μνL_NED + L_F F_μγF_ν^γ, Θ_μν = -g_μνL_NED - F_μγF_ν^γ[L_FF/2 F_ρσF^ρσ + L_F], where L_F = ∂ L_NED/∂ F and L_FF = ∂^2 L_NED/∂ F^2. Moreover, taking the trace of (<ref>) gives □ f_R = 1/3(T - f_T(T+Θ) + 2f - f_R R), where T ≡ g_μνT^μν and Θ ≡ g_μνΘ^μν. The equation of motion of the gauge field is ∂_μ[√(-g)(4f_T L_FF F - L_F)F^μν] = 0.
Now, we consider a static spherically symmetric solution. The line element written in Schwarzschild-like coordinates is ds^2 = -A(r) dt^2 + B(r) dr^2 + r^2(dθ^2 + sin^2θ dϕ^2). We also consider a purely magnetic ansatz for the Faraday-Maxwell tensor <cit.>, F^θϕ = q_m/(r^4 sinθ), where q_m is an integration constant that can be interpreted as the magnetic charge of the source. With this choice, the invariant F is -q_m^2/2r^4. One can show that this ansatz satisfies the equation of motion (<ref>).
§ SOLVING THE MODIFIED FIELD EQUATIONS
Here we consider f(R,T) = R + 2βT, where β is an arbitrary constant. Together with the purely magnetic field strength above, the modified field equations, i.e., 𝒢_μ^ν = T_μ^ν - f_T(T_μ^ν + Θ_μ^ν), are
-1/r^2 + 1/Br^2 - B'/B^2r = (1+4β)L_NED + 2βq_m^2/r^4 L_F,
-1/r^2 + 1/Br^2 - A'/AB^2r = (1+4β)L_NED + 2βq_m^2/r^4 L_F,
A''/2AB - A'B'/4AB^2 + A'/2ABr - A'^2/4A^2B - B'/2B^2r = (1+4β)L_NED + q_m^2/r^4(1+2β)L_F + 2βq_m^4/r^8 L_FF,
where a prime denotes a derivative with respect to r. The first two equations imply that A = B^-1. Therefore, the remaining field equations are
A'/r + A/r^2 - 1/r^2 = (1+4β)L_NED + 2βq_m^2/r^4 L_F,
A'' + 2A'/r = 2(1+4β)L_NED + 2q_m^2/r^4(1+2β)L_F + 4βq_m^4/r^8 L_FF.
In addition, the Ricci scalar is R = -(A'' + 4A'/r + 2A/r^2 - 2/r^2). Substituting this into the trace of the modified field equations (<ref>) allows us to eliminate A'' in (<ref>). After eliminating A'', we find that (<ref>) and (<ref>) are identical. Thus, we are left with a single first-order ordinary differential equation (recall that F = F(r)),
m'(r) = -r^2/2(1+4β)L_NED - βq_m^2/r^2 L_F,
where the mass function m(r) is defined via A(r) ≡ 1 - 2m(r)/r. There are two ways to solve this equation. Firstly, we may choose a particular form of m(r) and then solve (<ref>) for L_NED.
Secondly, we fix the form of the NED Lagrangian and solve for the mass function m(r). Before solving for a new solution, let us examine the consistency of (<ref>). We consider the case L_NED = -F, L_F = -1. Then (<ref>) can be solved as m(r) = -2M + q_m^2/4r, where M is an integration constant. By letting q_m = 2Q_m, we obtain a special case of the dyonic Reissner-Nordström black hole <cit.>. We remark that when the matter Lagrangian reduces to the U(1) electromagnetic one, β automatically disappears from f(R,T) = R + 2βT, since the energy-momentum tensor is traceless.
§.§ Fixed mass function
In this subsection, we solve the modified field equation (<ref>) for a spherically symmetric regular black hole solution. We choose the mass function to be of the form m(r) = M e^-q_m^2/2Mr. Here M is a constant parameter and q_m is the charge of the regular black hole. We remark that this form of mass function has been used to obtain regular black holes within the context of GR <cit.> and f(R) <cit.> gravity coupled to NED. Since lim_r→∞ m(r) = M, one can interpret the constant M as the black hole's mass. The black hole's event horizons are determined by A(r_h) = 0, and the location of the outer event horizon is r_h = -q_m^2/2M Ω(-q_m^2/4M^2), where Ω(z) is the omega function or Lambert W function. This mass function allows for three possible outcomes regarding the number of horizons: two positive real roots (inner and outer horizons), one degenerate root (extremal case), and no real root (horizonless case). Throughout this work, we focus on the first two cases. The behaviour of A(r) is shown in Fig <ref> for the three possible solutions. It can be observed from the figure that at small r these solutions are finite, i.e., A(r) ∼ 1. The solutions are clearly asymptotically flat since A(r) → 1 as r→∞. As q_m increases, the minimum value of A increases until A(r) > 0 for all r.
The regularity of the solution can be assessed by considering two curvature scalars, the Ricci scalar R and the Kretschmann scalar K. For the mass function above, we obtain R = e^-q_m^2/2Mr q_m^4/2Mr^5, K = R_μνσρR^μνσρ = e^-q_m^2/Mr/4M^2r^10 (q_m^8 - 16Mq_m^6r + 96M^2q_m^4r^2 - 192M^3q_m^2r^3 + 192M^4r^4). In Fig <ref>, we display example plots of the Ricci and Kretschmann scalars. The curvature scalars are finite everywhere for various values of q_m. Moreover, R and K behave as O(r^-5) and O(r^-6), respectively, as r→∞. In addition, the maximum of R is located at r = q_m^2/10M. On the other hand, the radius at which K attains its maximum is not as simple. For instance, when M = 1 and q_m = 0.8, R_max is 1,285 at r = 0.064, while K_max is 327,982 at r = 0.057. We emphasize that this mass function, together with these scalar curvatures, has already been considered in GR and f(R) gravity coupled to NED <cit.>.
To obtain L_NED, we substitute (<ref>) into (<ref>) and solve for L_NED. We obtain L_NED(F) = 𝒞F^1+1/4β + 2F/β Ei_x(y), where 𝒞 is an integration constant and Ei_x(y) is the exponential integral function, with x ≡ 1+1/β and y ≡ -(-1)^3/4 q_m^3/2 F^1/4/2^3/4 M. Since F is negative, this restricts the value of β, i.e., 1+1/4β = n, where n is an integer. In Fig <ref>, we illustrate the behaviour of L_NED as a function of the invariant F. These plots clearly demonstrate a modification of the standard Maxwell Lagrangian. As can be seen from the plots, L_NED approaches zero as F → 0. With a given 𝒞 and requiring that L_NED be real-valued, one can show that L_NED ∼ F + O(F^5/4) at small F.
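The Lambert-W expression for the horizon radii above is easy to check numerically. The following minimal sketch (a side computation, assuming NumPy and SciPy are available; the parameter values are illustrative) verifies that A(r) vanishes at both roots and that the outer horizon tends to the Schwarzschild value 2M as q_m → 0:
```python
import numpy as np
from scipy.special import lambertw

def metric_A(r, M, qm):
    # A(r) = 1 - 2 m(r)/r with the exponential mass function m(r) = M exp(-qm^2/(2 M r))
    return 1.0 - 2.0 * M * np.exp(-qm**2 / (2.0 * M * r)) / r

def horizons(M, qm):
    # r_h = -qm^2 / (2 M W_k(-qm^2/(4 M^2))); the principal branch (k=0) gives the
    # outer horizon and the k=-1 branch the inner one; both are real only for
    # sub-extremal charge, where the argument of W stays above -1/e.
    arg = -qm**2 / (4.0 * M**2)
    outer = -qm**2 / (2.0 * M * np.real(lambertw(arg, k=0)))
    inner = -qm**2 / (2.0 * M * np.real(lambertw(arg, k=-1)))
    return outer, inner

M, qm = 1.0, 0.8
r_out, r_in = horizons(M, qm)
print(f"outer horizon r = {r_out:.3f},  A = {metric_A(r_out, M, qm):.1e}")
print(f"inner horizon r = {r_in:.3f},  A = {metric_A(r_in, M, qm):.1e}")
# For M = 1, qm = 0.8 the inner horizon is about 0.110, consistent with the
# value quoted in the energy-condition discussion later in the text.
```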
Interestingly, a no-go theorem states that the Einstein field equations coupled to a Lagrangian with Maxwell behaviour at small F (i.e., L → 0, L_F → 1 as F → 0) do not admit static spherically symmetric, purely electric solutions with a regular centre <cit.>. Let us remark that our attempts to find regular black holes with electric charge were not successful. This is because, with a purely electric gauge potential, the field equations reduce to a much more complicated second-order differential equation compared with (<ref>).
§.§ Fixed Lagrangian
Here, we solve (<ref>) for the mass function when the matter Lagrangian is fixed. The Lagrangian of nonlinear electrodynamics is chosen to be L_NED(F) = -2a/α (-4αF)^(b+3)/4/(1+(-4αF)^b/4)^1+a/b, where a, b and α are positive-valued arbitrary constants. This Lagrangian is adopted from <cit.>, where the authors construct regular black holes in Einstein-NED and Einstein cubic gravity, respectively. By inserting this Lagrangian into (<ref>), the following mass function is obtained: m(r) = M - q^3/α(1+β) + q^3/α 𝒬^-a/b(1+β) + a q^3/α 𝒬^-1-a/b(𝒬-1)β, where 𝒬(r) ≡ 1 + (q/r)^b, M is the gravitational mass and q is an integration constant related to the magnetic charge, q_m = q^2/√(2α). As was pointed out in <cit.>, we may define the effective mass M_eff as the difference between the gravitational mass M and the magnetically induced mass M_em = q^3/α(1+β), i.e., M_eff = M - M_em. A regular black hole is then obtained by letting M = M_em. Therefore, the metric function of the regular black hole in f(R,T) gravity coupled to a nonlinear electrodynamics source is A(r) = 1 - 2q^3/αr 𝒬^-a/b[(1+β) + aβ(𝒬-1)/𝒬] = 1 - 2q^3/α(r^b + q^b)^-a/b[(1+β) + aβq^b/(r^b + q^b)]r^a-1.
From (<ref>), it appears that to avoid the singularity one must take a ≥ 1. A closer investigation of the Ricci and Kretschmann scalars, however, reveals that to ensure the regularity of the solution as r → 0, a must be equal to or greater than three (a ≥ 3). This is demonstrated in Fig <ref>. We observe that both scalar curvatures diverge as r → 0 for a < 3. The leading-order terms of the scalar curvatures are R ∼ r^a-1, K ∼ r^2a-6. This agrees with the results found in <cit.>. For a = 4, the maximum values of R and K are 12.11 and 24.85, at r = 0.3 and r = 0.29 respectively. For the remaining part of this work, we consider only the case a ≥ 3.
Now, we consider the asymptotic structures of A, R and K. As r → ∞, we find that for a ≥ 3: A ∼ 1 - 2q^3/αr(1+β) + O(1/r^b+1); R ∼ O(1/r^5) for b ≤ 2 and R ∼ O(1/r^b+3) for b > 2; K ∼ O(1/r^6). The leading order of A shows that the solution (<ref>) is asymptotically flat, while the others display the regularity of the scalar curvatures at large r. The location of the black hole's event horizon is subtle without specifying a and b. For the sake of demonstration, we consider three particular cases: (i) a=3, b=2; (ii) a=3, b=3; and (iii) a=4, b=2. The first two cases are chosen such that the Lagrangian (<ref>) gives rise to the Bardeen-like and Hayward-like solutions <cit.>. The regular black holes for (i)-(iii) in f(R,T) gravity are
(i) a=3, b=2: A_B(r) = 1 - 2q^3r^4/α(r^2 + q^2)^5/2[(1+β) + q^2/r^2(1+4β)],
(ii) a=3, b=3: A_H(r) = 1 - 2q^3r^5/α(r^3 + q^3)^2[(1+β) + q^3/r^3(1+4β)],
(iii) a=4, b=2: A(r) = 1 - 2q^3r^5/α(r^2 + q^2)^3[(1+β) + q^2/r^2(1+5β)].
As β → 0, the solutions A_B and A_H reduce to regular black holes in general relativity, i.e., the Bardeen and Hayward solutions, respectively. The behaviour of these solutions is illustrated in Fig <ref>.
The Bardeen-like and Hayward-like solutions are shown for varying β. In the right figure, we fix β = 0.5 and vary q instead. It can be seen that extremal black holes and horizonless solutions are also possible by varying β or q. For example, in the third case, the two horizons coincide at r = 1.157 with q = 0.85617015. Moreover, it is clear that the solutions are asymptotically flat, as A → 1 at large r.
§ ENERGY CONDITIONS
In this section, we consider the null, weak and strong energy conditions (NEC, WEC, SEC) of the solutions discussed in the previous section. To consider the energy conditions in f(R,T) gravity, let us rewrite (<ref>) as R_μν - 1/2 R g_μν = f_R^-1[T_μν - f_T(T_μν + Θ_μν) - (g_μν□ - ∇_μ∇_ν)f_R + 1/2 g_μν(f - Rf_R)] ≡ T^(eff)_μν, where we have defined the effective energy-momentum tensor T^(eff)_μν. We identify T^(eff)0_0 = -ρ^(eff), T^(eff)1_1 = p^(eff)_1, T^(eff)2_2 = p^(eff)_2 and T^(eff)3_3 = p^(eff)_3. The energy conditions in f(R) gravity coupled to NED are discussed in <cit.>, and the energy conditions of f(R,T) gravity have been addressed in <cit.>. These are
NEC: ρ^(eff) + p^(eff)_1,2,3 ≥ 0,
WEC: ρ^(eff) ≥ 0, ρ^(eff) + p^(eff)_1,2,3 ≥ 0,
SEC: ρ^(eff) + p^(eff)_1 + p^(eff)_2 + p^(eff)_3 ≥ 0, ρ^(eff) + p^(eff)_1,2,3 ≥ 0.
To clarify the notation, ρ^(eff) + p^(eff)_1,2,3 ≥ 0 means ρ^(eff) + p^(eff)_i ≥ 0 for i = 1, 2, 3 separately. In this model, the non-vanishing diagonal components of the effective energy-momentum tensor are given explicitly by ρ^(eff) = -(βT + L_NED), p^(eff)_1 = (βT + L_NED), p^(eff)_2 = p^(eff)_3 = L_NED + q_m^2/r^4 L_F + β(T + 2q_m^4/r^8 L_FF). Therefore, the energy conditions above reduce to
NEC: ρ^(eff) + p^(eff)_1,2 ≥ 0,
WEC: ρ^(eff) ≥ 0, ρ^(eff) + p^(eff)_1,2 ≥ 0,
SEC: 2p^(eff)_2 ≥ 0, ρ^(eff) + p^(eff)_1,2 ≥ 0.
Overall, we have four distinct inequalities. These will be considered in the following subsections for each regular black hole.
§.§ Energy conditions I
Now, we consider the Lagrangian (<ref>). We also assume that q_m, M ≥ 0. It turns out that the energy conditions demand the following:
NEC_2 & WEC_3 & SEC_3: e^-q_m^2/2Mr q_m/r (8Mr - q_m^2) ≥ 0,
SEC_1: e^-q_m^2/2Mr q_m/r (4Mr - q_m^2) ≥ 0.
Note that SEC_1 and SEC_3 refer to 2p^(eff)_2 ≥ 0 and ρ^(eff) + p^(eff)_2 ≥ 0, respectively. The NEC_1, WEC_1, WEC_2 and SEC_2 are automatically satisfied, and the NEC_2, WEC_3 and SEC_3 coincide. The SEC_1 provides an additional constraint on the radial coordinate r (<ref>). All the energy conditions are satisfied simultaneously in the region r ≥ q_m^2/4M.
For the parameter sets chosen in Fig <ref>, we find that the NEC and WEC are violated in the regions r < {0.08, 0.125, 0.184} for q_m = 0.8, 1 and 1.213, respectively. However, these radii are much smaller than the inner event horizons. Thus, in these cases, the NEC and WEC are satisfied in the exterior regions of the black holes. In contrast, the SEC_1 is found to be violated in a region between the inner and outer horizons in the q_m = 0.8 and 1 cases. While the SEC_3 for these two cases requires r ≥ 0.08 and 0.125, the inner horizons are located at r = 0.110 and r = 0.232, respectively; therefore, the SEC_3 already holds inside the inner horizon. In the near-extremal case q_m = 1.213, however, the SEC holds between the two horizons. For these parameter sets, we find that all the energy conditions hold outside the outer event horizon; the NEC, WEC and SEC fail only in a region deep inside the black holes.
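These inequalities can also be scanned numerically. A minimal sketch (illustrative parameter values) that recovers the analytic bound r ≥ q_m^2/4M is:
```python
import numpy as np

def ec_lhs(M, qm, r):
    # left-hand sides of the two independent inequalities for the exponential mass function
    pref = np.exp(-qm**2 / (2.0 * M * r)) * qm / r
    nec2 = pref * (8.0 * M * r - qm**2)   # NEC_2 = WEC_3 = SEC_3
    sec1 = pref * (4.0 * M * r - qm**2)   # SEC_1 (the binding condition)
    return nec2, sec1

M, qm = 1.0, 0.8
r = np.linspace(0.01, 3.0, 300)
nec2, sec1 = ec_lhs(M, qm, r)
r_all = r[(nec2 >= 0) & (sec1 >= 0)].min()   # up to grid resolution
print(f"all conditions hold for r >= {r_all:.3f} (analytic bound qm^2/4M = {qm**2/(4*M):.3f})")
```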
§.§ Energy conditions II
For the Lagrangian (<ref>), the energy conditions become complicated and lengthy without specifying a and b. For this reason, we explicitly discuss the energy conditions for the a = 4, b = 2 case only. As before, the NEC_2, WEC_3 and SEC_3 are identical. The energy conditions are therefore
WEC_1: r^2(β-1) + q^2(1+5β) ≥ 0,
SEC_1: r^4(1-β) + 10q^2r^2β - q^4(1+5β) ≥ 0,
NEC_2 & WEC_3 & SEC_3: 5r^4(1-β) + 2q^2r^2(2+19β) - q^4(1+5β) ≥ 0,
where we replace q_m with q to match the notation used in subsection <ref>. The NEC_1, WEC_2 and SEC_2 are naturally satisfied. In Figs <ref> and <ref>, we display the energy conditions as functions of r for fixed β and q, respectively (we label the three inequalities above EC_1, EC_2 and EC_3 in the figures, in the order listed). The energy conditions are violated wherever these curves become negative. When q = 0.86, the WEC_1 holds continuously from the origin towards the exterior region; however, at a certain radius outside the black hole it is violated (the x-intercept is at r = 2.28). In contrast, the other energy conditions hold from a certain radius inside the black hole all the way out to the black hole's exterior. As we move away from the near-extremal scenario, the radius at which the EC_1 changes sign moves outward (the x-intercept is at r = 3.97). These results are shown in Fig <ref>. In addition, we explore how β affects the energy conditions in Fig <ref>. The left figure shows behaviour similar to that in the previous figure. The WEC_1 is violated just prior to the outer event horizon (the x-intercept is at r = 2.65, while the outer horizon is at r = 2.69), whereas the others are positive from the inner horizon outward. When β = 1, the EC_1 becomes a positive constant in r and is therefore always satisfied; the other two remain positive right after the inner horizon. Remarkably, the energy conditions change dramatically for β = 2. The EC_2 and EC_3 are positive in a particular region inside the black hole's outer horizon before rapidly becoming negative, whereas the EC_1 holds throughout the spatial coordinate r.
For this particular case, i.e., a = 4 and b = 2, we find that the NEC and SEC are easily met in the black hole's exterior, while the WEC is violated at a certain radius. However, appropriate choices of parameters can make all the energy conditions satisfied.
§ QUASINORMAL MODES
A massive scalar field Φ on a curved spacetime is described by the Klein-Gordon equation ∇_γ∇^γΦ - μ^2Φ = 0, where μ is the scalar field's mass. In a spherically symmetric spacetime, the scalar field can be expressed as Φ(t,r,θ,ϕ) = R(r)/r e^-iωt Y(θ,ϕ), where Y(θ,ϕ) are the spherical harmonics. With the spacetime metric (<ref>) (and B = A^-1), the Klein-Gordon equation takes the form d^2R/dr_∗^2 + (ω^2 - V(r))R = 0. The effective potential is V(r) = A(r)(μ^2 + ℓ(ℓ+1)/r^2 + A'(r)/r), where ℓ is the spherical harmonic index, and we have introduced the tortoise coordinate defined by r_∗ = ∫ dr/A(r). The boundary conditions that lead to quasinormal modes are purely ingoing waves at the black hole's event horizon, r → r_h or r_∗ → -∞, and no incoming flux at infinity, r, r_∗ → ∞. The frequencies ω corresponding to these boundary conditions are discrete complex numbers, the quasinormal frequencies, and can be written in the form ω = ω_R ± iω_I.
Let us first consider the effective potential more explicitly. All the solutions considered in this work are asymptotically flat; therefore V → μ^2 as r → ∞. Unless A'(r) < 0, the locations where V vanishes are determined only by the roots of A(r).
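For concreteness, V(r) is straightforward to evaluate for any of the metric functions above; a minimal sketch for the exponential mass function (the field mass and harmonic index are illustrative choices, and A'(r) is taken by a central difference) is:
```python
import numpy as np

def A(r, M=1.0, qm=0.8):
    return 1.0 - 2.0 * M * np.exp(-qm**2 / (2.0 * M * r)) / r

def V_eff(r, mu=0.1, ell=1, h=1e-6):
    # V(r) = A(r) (mu^2 + l(l+1)/r^2 + A'(r)/r)
    dA = (A(r + h) - A(r - h)) / (2.0 * h)
    return A(r) * (mu**2 + ell * (ell + 1) / r**2 + dA / r)

r = np.linspace(0.05, 20.0, 4000)
V = V_eff(r)
print(f"peak of V: {V.max():.4f} at r = {r[np.argmax(V)]:.2f}")
print(f"V at large r -> mu^2 = {V[-1]:.4f}")   # asymptotic flatness: V -> mu^2
```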
The effective potentials for several types of regular black holes are shown in Figs <ref>-<ref>. For the mass function (<ref>), the effective potential is illustrated in Fig <ref>. As the charge q_m increases, the height of V increases. We observe that the zero of V occurs at the location of the black hole's outer horizon. More precisely, these potentials have another zero located at smaller r, corresponding to the inner horizon; however, these are not explicitly displayed in the plots. In the extremal case (black solid line in the left figure), the potential possesses only one root. The central panel of Fig <ref> demonstrates the effect of the harmonic index ℓ on the height of the effective potential: as ℓ decreases, the peak of V decreases. The last panel illustrates that the peak of V increases with the scalar field's mass, and the asymptotic value of V approaches μ^2 as expected. Remark that similar plots have already been explored for scalar perturbations of the Bardeen solution <cit.>.
Now, we consider the effective potentials of the solutions reported in subsection <ref>. We refer to the Bardeen-like, the Hayward-like and the a=4, b=2 solutions as ansatz 1, 2 and 3, respectively. The potentials are illustrated in Fig <ref>. In these plots, ansatz 1, 2 and 3 are represented by solid, dashed and dot-dashed lines, respectively. The effects of ℓ and μ on the effective potential are qualitatively similar to the previous case, as demonstrated in the central and right panels. We notice that the differences between ansatz 1, 2 and 3 become more significant as ℓ or μ increases. In contrast, the differences are less apparent as β increases, as shown in the left panel; moreover, the height of the potential decreases as β increases. Remark that for β = 0, the effective potential of ansatz 1 (red solid line) is plotted in <cit.>.
§.§ The Padé averaged WKB approximation method
To calculate the quasinormal frequencies, we employ the sixth-order WKB approximation technique. With this method, the quasinormal frequencies ω can be obtained via the following expression (up to sixth order) <cit.>: i(ω^2 - V_max)/√(-2V''_max) - Λ_2 - Λ_3 - Λ_4 - Λ_5 - Λ_6 = n + 1/2, where V_max and V''_max are the effective potential and its second derivative with respect to the tortoise coordinate, evaluated at the maximum of the potential, and the overtone number is denoted by n. Iyer and Will found the correction terms up to third order (hence Λ_2, Λ_3) <cit.>; later, Konoplya found three more correction terms, Λ_4, Λ_5 and Λ_6, which are defined in <cit.>. To improve the numerical accuracy, the WKB approximation was extended to thirteenth order, including Padé averaging, in <cit.>, where the quasinormal frequencies of the Schwarzschild and Reissner-Nordström black holes are reproduced. It turns out that with the Padé averaging technique many known results can be reproduced with great accuracy <cit.>. The Mathematica code for calculating quasinormal frequencies up to thirteenth-order WKB with improved Padé averaging is provided in <cit.>; we employ this code to compute the quasinormal frequencies in this work.
We remark that throughout this section the parameter α is substituted by q^3/M(1+β). As a consistency check, we list the n = 0 quasinormal frequencies in Table <ref>. In this table, we reproduce the results already obtained in refs <cit.>, which are shown in the rightmost column.
We implement the sixth-order WKB method with Padé averaging to compute the quasinormal frequencies of the massive scalar perturbation on the spacetime backgrounds given by the mass function (<ref>) (upper table) and ansatz 1 with β = 0 (lower table). The error estimation denotes the root mean square error corresponding to the sixth-order WKB with Padé approximation. The black hole's mass is set to unity. The upper table displays ω as a function of q_m for ℓ = 1 and ℓ = 2 (in parentheses); both the real and imaginary parts increase (in magnitude) as the black hole's charge increases. The lower table investigates the effect of the scalar field's mass μ on ω for fixed q = 0.76 and ℓ = 2; the real part of ω increases with μ, whereas the imaginary part decreases with μ. The sixth-order Padé averaged WKB method thus agrees well with the results found earlier.
Now, we turn our attention to the QNMs of the regular black holes of ansatz 1, 2 and 3, (<ref>)-(<ref>). In Tables <ref>-<ref>, the quasinormal frequencies with ℓ = 0-2 are displayed as functions of q for the Bardeen-like, Hayward-like and a=4, b=2 solutions, respectively. For comparison, in these tables we fix M = 1, β = 0.1 and the scalar field's mass at 0.1. Since the WKB approximation works very well when ℓ > n <cit.>, we consider the ℓ=1, n=0, ℓ=2, n=0 and ℓ=2, n=1 cases. Although the ℓ = n = 0 case might not be well approximated by the WKB method, we include it in the tables since it is the most fundamental mode. It turns out that the quasinormal frequencies of ansatz 1-3 share similar trends. As the black hole's charge q increases, the real part of ω increases while the imaginary part becomes less negative. Various studies of QNMs of regular black holes report a similar trend <cit.>. With increasing angular index ℓ, the real part increases. In contrast, the effect of ℓ on ω_I is non-trivial: at first, the imaginary part decreases (in magnitude) when ℓ moves from zero to one, and then increases again at ℓ = 2. Lastly, both ω_R and ω_I decrease as the overtone number n increases. We observe that the quasinormal frequencies of these regular black holes (ansatz 1-3) differ only marginally from each other. This is not surprising, because the effective potentials of these ansatzes (Fig <ref>) are nearly identical. Therefore, for the remaining part of this article we focus only on ansatz 3, for the sake of presentation.
We explore how the coupling constant β affects the quasinormal frequencies ω in Fig <ref>. In this plot, we choose four particular values of the black hole's charge, q = 0.1, 0.5, 1.0 and 1.4. As β increases, the real and imaginary parts of ω decrease, and the decrease becomes less pronounced at larger β. In addition, the change in ω is clearly visible as q increases: as q gets larger, both ω_R and ω_I become smaller (in magnitude). It is worth mentioning that at q = 1.0 and q = 1.4, there are no regular black holes for β < 0.76 and β < 6.2, respectively.
In Fig <ref>, the dependence of ω on the spherical harmonic index ℓ is illustrated. The real part of ω increases monotonically with ℓ, and the differences in ω_R between the fixed values of β become more evident at large ℓ. To demonstrate the change in ω_I, we plot ln(|ω_I|). At small ℓ, ω_I varies drastically with ℓ, but as ℓ gets bigger the change in ω_I becomes less significant. Remark that the imaginary parts of the quasinormal frequencies become more negative as ℓ increases.
These trends are also observed for QNMs of regular black holes <cit.>, Bardeen black holes <cit.> and Bardeen-de Sitter black holes <cit.>. It can be seen from the plots that ω_R and ω_I decrease as β increases, in agreement with what was discussed for the previous figure.
In Fig <ref>, we demonstrate the effect of the scalar field's mass μ on the quasinormal frequencies. As the field's mass increases, the real part of the frequencies increases, while the imaginary part becomes smaller (in magnitude). The lowest overtone mode (n = 0) has larger ω_R and ω_I compared with the higher overtone modes. We notice that ω_R increases monotonically with μ. In contrast, for the first overtone (n = 1) of the QNMs of the Bardeen black hole <cit.>, the real part approaches a maximum value and then decreases with the scalar field's mass. In addition, a study of massive scalar perturbations of the Reissner-Nordström black hole reveals that it is possible to have arbitrarily long-lived modes, or quasi-resonance modes, as the scalar field's mass increases <cit.>. Our results are expected to respect this behaviour as well; however, the WKB approximation method is not sufficient to accurately capture the quasi-resonance modes <cit.>.
§.§ The eikonal limit
When solving for the quasinormal frequencies of black holes, various numerical schemes are applicable. Nevertheless, there is an approximation that provides a useful formula for the quasinormal frequencies with great accuracy: the so-called geometric-optics or eikonal limit, as suggested by Mashhoon and Ferrari <cit.>. In the eikonal limit (ℓ → ∞), the effective potential (<ref>) is simply V_eik(r) ≈ Aℓ^2/r^2. This greatly simplifies the radial wave equation (<ref>). The reduced radial equation can be solved given that the effective potential V_eik satisfies the quantization condition i(ω^2 - V_max)/√(-2V''_max) = n + 1/2, where V_max now denotes the maximum of V_eik, i.e., dV_eik/dr_∗|_r=r_0 = 0. It turns out that the eikonal QNMs can be expressed through the first-order WKB formula (<ref>) (see, e.g., <cit.>); the higher-order terms Λ_2-Λ_6 can be considered corrections to the eikonal limit. Remarkably, it was pointed out in <cit.> that QNMs in the eikonal limit can be related to the unstable circular null orbit around black holes in any dimension. The real part of the quasinormal frequency is determined by the angular velocity Ω at the unstable null geodesics, while the imaginary part is related to the Lyapunov exponent λ_L, which corresponds to the inverse of the instability timescale of the null orbit. In <cit.>, an upper bound on the Lyapunov exponent of a particle near the horizon is considered; the bound is determined by the surface gravity at the horizon <cit.>. Very recently, a violation of the Lyapunov exponent bound was found for the Kerr-Newman-de Sitter black hole <cit.>. The approximate formula for the quasinormal frequencies in the eikonal limit can be expressed as <cit.> ω_eik = Ωℓ - i(n + 1/2)|λ_L|, where Ω = √(A/r^2)|_r=r_0, λ_L = 1/√(2) √(-A(A'' - 6A'/r + 6A/r^2) - A'^2)|_r=r_0. Both the angular velocity and the Lyapunov exponent are evaluated at r_0, and the prime refers to the derivative with respect to r. We have checked and confirmed that (<ref>) agrees with the Padé averaged WKB in the limit ℓ ≫ 1.
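The eikonal quantities are easy to evaluate numerically. A minimal sketch for the exponential mass function (the null-orbit radius r_0 is found by maximizing A/r^2, derivatives are taken by finite differences, and the parameter values are illustrative; in the Schwarzschild limit q_m → 0 it returns Ω = λ_L = 1/(3√3) ≈ 0.1925) is:
```python
import numpy as np
from scipy.optimize import minimize_scalar

def A(r, M=1.0, qm=0.8):
    return 1.0 - 2.0 * M * np.exp(-qm**2 / (2.0 * M * r)) / r

def d1(f, r, h=1e-5):
    return (f(r + h) - f(r - h)) / (2.0 * h)

def eikonal(M=1.0, qm=0.8):
    # r_0 maximizes the eikonal potential A/r^2 (dV/dr_* = A dV/dr, A > 0 outside the horizon)
    r0 = minimize_scalar(lambda r: -A(r, M, qm) / r**2,
                         bounds=(1.0, 10.0), method="bounded").x
    a, ap = A(r0, M, qm), d1(lambda r: A(r, M, qm), r0)
    app = d1(lambda r: d1(lambda s: A(s, M, qm), r), r0, h=1e-4)
    Omega = np.sqrt(a) / r0
    lam = np.sqrt(-a * (app - 6.0 * ap / r0 + 6.0 * a / r0**2) - ap**2) / np.sqrt(2.0)
    return r0, Omega, lam

r0, Om, lam = eikonal()
print(f"r_0 = {r0:.3f}, Omega = {Om:.4f}, lambda_L = {lam:.4f}")
# omega_eik ~ Omega * ell - i (n + 1/2) |lambda_L| for ell >> 1
```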
In Fig <ref>, we illustrate the behaviour of the angular velocity Ω and the Lyapunov exponent λ_L as functions of β and q. As can be seen from the plots, the angular velocity decreases with β. At lower β, the angular velocity drops significantly compared with larger β. Notice that Ω changes rapidly with β at higher black hole charge q. In addition, as β increases, the Lyapunov exponent becomes larger before approaching a certain asymptotic value. From the bottom panel, we observe that the angular velocity (the Lyapunov exponent) increases (decreases) monotonically with q. The behaviour of Ω against q is in agreement with what was found earlier in <cit.> for the Reissner-Nordström black hole, the Bardeen black hole and regular black holes with an exponential mass function. In contrast, our results for the Lyapunov exponent plotted against q resemble those for the Bardeen black hole but differ substantially from the Reissner-Nordström and the regular black holes <cit.>. Remark that the Lyapunov exponent of the Reissner-Nordström black hole increases with q until it reaches its maximum value at a certain q and then decreases.
§ CONCLUSIONS
In this work, we study f(R,T) gravity coupled to a nonlinear electrodynamics Lagrangian. With a purely magnetic component of the gauge field, asymptotically flat, static spherically symmetric black holes with a regular centre are constructed. The black hole solutions are obtained via two approaches: (i) fix the mass function and solve for L_NED; (ii) fix L_NED and solve for the mass function. The first approach yields the functional form of a novel L_NED; Figure <ref> clearly shows the difference from the standard U(1) electromagnetic Lagrangian. From the second approach, we find a generalized metric function that reduces to the Bardeen and Hayward black holes in the appropriate limits. From both approaches, we find that these charged black holes possess two event horizons without essential singularities, as shown in Fig <ref> and Fig <ref>, where the Ricci and Kretschmann scalars are plotted.
The energy conditions (null, weak and strong) of these solutions are also explored. For the regular black holes obtained via the first approach, all the energy conditions considered here hold in the exterior region of the black holes. The black hole solutions from the second approach respect the null and strong energy conditions outside the black hole's outer horizon for small values of β. As β increases, the null and strong energy conditions are no longer guaranteed to hold. In contrast, the weak energy condition is violated at a certain radius inside the outer horizon for small β; as β increases, the weak energy condition becomes satisfied outside the black hole's outer horizon.
We investigate a massive scalar perturbation on these regular black holes. The corresponding quasinormal frequencies are computed via the Padé averaged WKB method. For all cases considered in this work, the imaginary parts of the frequencies are negative. We find that the real parts of the frequencies increase with q, ℓ and μ, while they decrease with β. In addition, the imaginary parts of the frequencies become less negative as q and μ increase, and more negative as β and ℓ increase. In the eikonal limit, the angular velocity (the Lyapunov exponent) decreases (increases) with β. Furthermore, the dependence of λ_L on q is different from that of the Reissner-Nordström black holes and the regular black holes with an exponential mass function.
There are several ways to extend this work further. Despite what the no-go theorem of <cit.> states, it is crucial to show whether f(R,T) gravity admits electrically charged regular black holes.
It would also be interesting to consider the thermodynamic properties of regular black holes in f(R,T) gravity; this problem could be challenging, since the first law of thermodynamics in f(R,T) is violated <cit.>. Moreover, the photon motion around the black holes discussed in this article is also important, since it could lead to the study of the optical appearance of these black holes. Additionally, there are various other forms of the function f(R,T), and it is of great interest to explore whether they admit exact black hole solutions.
This work (Grant No. RGNS 64-217) was supported by the Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (OPS MHESI), Thailand Science Research and Innovation (TSRI) and Silpakorn University. M. Youk was supported by the Faculty of Science, Silpakorn University, Thailand, through grant SCSU-STA-2566-11.
{ "authors": [ "Takol Tangphati", "Menglong Youk", "Supakchai Ponglertsakul" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20231227154525", "title": "Magnetically charged regular black holes in $f(R,T)$ gravity coupled to nonlinear electrodynamics" }
Tomato leaf diseases pose a significant challenge for tomato farmers, resulting in substantial reductions in crop productivity. The timely and precise identification of tomato leaf diseases is crucial for successfully implementing disease management strategies. This paper introduces a transformer-based model called TomFormer for tomato leaf disease detection. The paper's primary contributions are the following. Firstly, we present a novel approach for detecting tomato leaf diseases by employing a fusion model that combines a visual transformer and a convolutional neural network. Secondly, we apply our proposed methodology to the Hello Stretch robot to achieve real-time diagnosis of tomato leaf diseases. Thirdly, we assess our method by comparing it to models such as YOLOS, DETR, ViT, and Swin, demonstrating its ability to achieve state-of-the-art outcomes. For the experiments, we use three tomato leaf disease datasets, namely KUTomaDATA, PlantDoc, and PlantVillage, where KUTomaDATA was collected from a greenhouse in Abu Dhabi, UAE. Finally, we present a comprehensive analysis of the performance of our model and thoroughly discuss the limitations inherent in our approach. TomFormer performed well on the KUTomaDATA, PlantDoc, and PlantVillage datasets, with mean average precision (mAP) scores of 87%, 81%, and 83%, respectively. The comparative results in terms of mAP demonstrate that our method is robust, accurate, efficient, and scalable; furthermore, it can be readily adapted to new datasets. We are confident that our work holds the potential to significantly benefit the tomato industry by mitigating crop losses and enhancing crop yields.
§ INTRODUCTION
Solanum lycopersicum is the scientific name of the tomato, which can grow on almost any well-drained soil <cit.> and is grown in fields by nine out of ten farmers. To use freshly produced tomatoes in their kitchens and enjoy excellent meals, many gardeners also cultivate tomatoes in their home gardens. Under unfavourable seasonal and environmental conditions, plant diseases and pests significantly reduce plant yield, resulting in economic and societal losses. Identifying pests and pathogens takes time and money, and farmers still face difficulties in accurately identifying plant diseases; their only options are to consult other farmers or agricultural professionals. Recognizing leaf diseases requires expertise in plant pathology. As a result, farmers need automated, AI image-based solutions.
Computer vision applications in image and video analysis now treat images as a dependable basis for disease diagnosis, largely thanks to the availability of suitable software packages and tools.
These technologies employ sophisticated image processing techniques that contribute to intelligent image identification, thereby improving recognition efficiency, reducing cost, and increasing overall recognition accuracy <cit.>. Transformer networks have demonstrated their efficacy in various natural language processing tasks, such as machine translation, text summarisation, and question answering <cit.>. Recently, there has been a surge of interest in employing transformer networks for computer vision tasks, including image classification and object detection. This research presents a novel methodology for identifying tomato leaf diseases using a customized transformer network integrated into the Hello Stretch robot. Our methodology is founded upon the following two key ideas:
* Use a transformer network to extract features from images of tomato leaves. Transformer networks are ideally suited for this task because they can discover long-range dependencies in data sequences. This is important for tomato leaf disease detection, as the symptoms of different diseases often appear in different parts of the leaf.
* Use the Hello Stretch robot to collect images of tomato leaves. The Hello Stretch robot is a mobile manipulator designed for indoor use. It has a depth camera to capture high-quality images of tomato leaves.
§ RELATED WORK
Computer vision has experienced rapid growth in recent years due to advancements in modern science and technology. This has led to a broader range of computer vision applications, including identifying and categorising plant diseases. Numerous artificial intelligence methods are currently employed for this purpose, encompassing techniques such as the k-nearest neighbours algorithm (K-NN), logistic regression (LR), decision trees (DTs), support vector machines (SVMs), and deep convolutional neural networks (DCNNs) <cit.>. These methods improve feature extraction when combined with image preprocessing, but they remain limited in terms of model efficiency. Within the realm of supervised learning, CNNs can be regarded as end-to-end solutions for classification and detection tasks. In their study, Brahimi et al. <cit.> employed a convolutional neural network (CNN) architecture to detect diseases in tomato leaves. Xibei et al. <cit.> proposed a network known as Fully Convolutional - Switchable Normalisation Dual Path Networks (FC-SNDPN), developed for autonomous recognition and detection of crop leaf diseases; specifically, the focus was classifying eight types of diseases and insect pests commonly found on tomato leaves in southern China. Using a spatial pyramid-oriented encoder-decoder cascade CNN architecture, Wang et al. <cit.> developed a method for detecting plant diseases in leaf tissue and segmenting the affected areas. In recent years, CNNs have been effective in plant disease identification, as the studies discussed above demonstrate. Moreover, CNN-based architectures for detecting plant diseases predominantly take their cues from core frameworks such as Inception, GoogLeNet, and ResNet <cit.>. In the domain of plant disease identification, a relatively new model known as the vision transformer (ViT) has recently been applied <cit.>. An attention mechanism was incorporated into a deep residual CNN by Karthik et al. <cit.> to detect infections in tomato leaves. Zeng et al.
<cit.> introduced a residual CNN structure enhanced with a self-attention mechanism to capture and extract relevant features from crop disease spots effectively. This approach was employed to identify and classify crop diseases accurately.In another study, for the purpose of disease identification in vegetables with complicated backgrounds, Zhou et al. <cit.> developed a progressive learning network with an attention block. Most recently, the research study conducted by Alshammari et al. <cit.> involved the development of a sophisticated hybrid model that integrates the visionary transformer architecture with the CNN architecture to ascertain the utmost efficacy and pertinence in the identification and categorization of olive diseases. The process of identifying plant diseases necessitates a high level of attentiveness toward the nuanced distinctions present within leaf images. Henceforth, we have amalgamated the Vision Transformer (ViT) with a CNN block to extract intricate features effectively. Additionally, we have introduced object queries to the encoder section of the transformer, following the implementation paradigm of the DETR architecture. The details are in the section <ref>.§ THEPROPOSED METHOD The Tomato TransFormer (TomFormer) model, introduced as a novel object detection approach, draws inspiration from two influential models: Vision Transformer (ViT) <cit.> and DEtection TRansformer (DETR) <cit.>. While DETR revolutionized object detection by replacing conventional two-stage methods with a single-stage transformer-based approach, ViT demonstrated the efficacy of transformers in image classification tasks. Building upon these advances, TomFormer incorporates critical modifications to address object detection requirements effectively and achieve superior performance in this domain. In TomFormer, the task of object detection is framed as a direct set prediction problem. Instead of generating bounding boxes through region proposal networks, TomFormer predicts the set of bounding boxes directly from the input image. It accomplishes this using a transformer encoder architecture with object queries as a side input. The encoder part of TomFormer processes the input image and extracts a set of feature maps. The positional embeddings allow the model to incorporate spatial information during the decoding process. To train TomFormer, it uses a bipartite matching loss and a Hungarian algorithm-based bipartite matching procedure to associate predicted boxes with ground-truth boxes. This loss ensures that each predicted box corresponds to a unique ground-truth box and vice versa, aiding in better learning and stable training. §.§ Image Processing Head In TomFormer, the cls token used in ViT for image classification is removed, and instead, N learnable object queries are introduced, enhancing the model's capability for object detection. During training, TomFormer adopts the bipartite matching loss from DETR, facilitating precise object detection. Moreover, TomFormer leverages both positional embeddings and convolutional features extracted by the CNN block, effectively merging low-level and high-level features for a comprehensive representation.TomFormer exhibits a well-defined structure that effectively processes input images to facilitate object detection. The model's architecture consists of two pathways, wherein the input image undergoes distinct processing stages. 
§.§ Image Processing Head
In TomFormer, the cls token used in ViT for image classification is removed; instead, a set of learnable object queries is introduced (20 in our setting, as detailed below), enhancing the model's capability for object detection. During training, TomFormer adopts the bipartite matching loss from DETR, facilitating precise object detection. Moreover, TomFormer leverages both positional embeddings and convolutional features extracted by a CNN block, effectively merging low-level and high-level features into a comprehensive representation.
TomFormer has a well-defined structure for processing input images. The model's architecture consists of two pathways through which the input image undergoes distinct processing stages. The first pathway uses a CNN block, which extracts relevant features with convolutional layers; the output of the convolutional layer is then refined by max-pooling layers, which reduce the dimension of each feature for more efficient computation. Following the extraction of these features, the input is fused with patch embeddings of the input image. To accomplish this, the original image is transformed into a series of 2D image patches, x ∈ ℝ^P^2 × C × N, where C is the number of channels, P is the dimension of a single image patch, and N is the number of resulting patches, determined by N = HW/P^2; N is the size of the input sequence of image patches for the subsequent transformer module. Using a trainable linear projection E ∈ ℝ^(P^2 · C) × D, the reshaped patches are mapped to embeddings of dimension D, matching the Transformer's constant latent vector size D across all layers; positional embeddings are incorporated into this representation, and the output of this projection is referred to as x_PE. These positional embeddings capture crucial positional information within the image patches, contributing to a comprehensive representation of the input data. The output of this projection is concatenated with the output of the CNN block, merging the learned positional embeddings with the CNN-derived features; this concatenation integrates low-level and high-level visual information, and the concatenated output is denoted x_PE+CNN.
§.§ TomFormer Encoder
Following the merge of the input features, they are further processed in the encoder block, where they are represented as tokens. This token representation is a compact and informative encapsulation of the input image, facilitating subsequent object detection. By incorporating these design principles, TomFormer combines the strengths of CNN-based feature extraction and the Transformer's attention mechanism, resulting in an efficient and robust model for object detection. In addition to the merging of positional embeddings and convolutional features, TomFormer introduces 20 randomly initialized learnable tokens, also referred to as object queries, represented as x_oq ∈ ℝ^20 × D. These queries are appended to the combined input features, further enriching the representation for object detection. A practical rationale drives the choice of 20 tokens: in typical scenarios, the number of leaf objects in an image is expected to be relatively small (up to about 15), and therefore a small number of x_oq is sufficient to account for the potential objects of interest. Specifically, the x_oq tokens serve as learnable representations of the distinct tomato leaves that appear in the image. By fixing the number of tokens, TomFormer balances model complexity and efficiency, ensuring that the architecture remains well suited to object detection tasks with realistic object counts.
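A minimal PyTorch sketch of this two-pathway front end is given below. The patch size, embedding width, CNN configuration, and the per-token channel-wise concatenation followed by a linear merge are our assumptions for illustration; the text above fixes only the overall structure (patch embeddings plus CNN features, with 20 appended queries):
```python
import torch
import torch.nn as nn

class FusionFrontEnd(nn.Module):
    # Two-pathway front end: patch embeddings x_PE fused with CNN features,
    # then 20 learnable object queries x_oq appended (dimensions illustrative).
    def __init__(self, img=224, patch=16, ch=3, dim=256, num_queries=20):
        super().__init__()
        n = (img // patch) ** 2                          # N = HW / P^2
        self.patch = patch
        self.proj = nn.Linear(patch * patch * ch, dim)   # trainable projection E
        self.pos = nn.Parameter(torch.zeros(1, n, dim))  # positional embeddings
        self.cnn = nn.Sequential(                        # CNN pathway with max pooling
            nn.Conv2d(ch, dim, 7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(8))                             # one feature per coarse cell
        self.merge = nn.Linear(2 * dim, dim)             # fuse into x_{PE+CNN}
        self.queries = nn.Parameter(0.02 * torch.randn(1, num_queries, dim))  # x_oq

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, p = x.shape[0], x.shape[1], self.patch
        patches = x.unfold(2, p, p).unfold(3, p, p)      # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, p * p * c)
        x_pe = self.proj(patches) + self.pos             # x_PE
        feats = self.cnn(x).flatten(2).transpose(1, 2)   # CNN tokens, same count N
        fused = self.merge(torch.cat([x_pe, feats], -1)) # x_{PE+CNN}
        q = self.queries.expand(b, -1, -1)
        return torch.cat([fused, q], dim=1)              # y_Res: (B, N + 20, dim)
```
For a 224x224 input with 16x16 patches, both pathways produce 196 tokens, so the per-token concatenation above is well defined.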
Through the incorporation of these x_oq tokens, TomFormer adapts to various object detection scenarios with notable efficiency and robustness. The resulting sequence, denoted y_Res, constitutes the input to the TomFormer encoder, as shown in Equation <ref>:
y_Res = [ x^1_PE+CNN; x^2_PE+CNN; x^3_PE+CNN; … ; x^N_PE+CNN] ∪ [ x^1_oq; x^2_oq; x^3_oq; … ; x^20_oq]
In each encoder layer of TomFormer, there are two fundamental components: the multi-head self-attention (MSA) block and the multi-layer perceptron (MLP) block. Both blocks are accompanied by LayerNorm (LN) <cit.> to normalize the intermediate results, and by residual connections to facilitate information flow within the network <cit.>. The MSA block allows the model to attend to different parts of the input sequence while capturing long-range dependencies between elements. The MLP block consists of two hidden layers with the GELU <cit.> activation function, introducing non-linearity and enhancing the model's capacity for complex pattern recognition in leaf images. Formally, for the n-th TomFormer encoder layer, these components (MSA, LN, GELU-activated MLP, and residual connections) are combined to ensure efficient information propagation and effective feature extraction, as shown in Equations <ref> and <ref>, where n indexes the encoder layers. The use of these blocks within each encoder layer contributes to the expressive power of the TomFormer model and its ability to capture intricate relationships and patterns in the images:
y'_n = MSA(LN(y_n-1)) + y_n-1
y_n = MLP(LN(y'_n)) + y'_n
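These two update rules translate directly into a pre-norm transformer layer; a minimal PyTorch sketch (head count and hidden width are illustrative) is:
```python
import torch.nn as nn

class TomFormerEncoderLayer(nn.Module):
    # One encoder layer implementing y'_n = MSA(LN(y_{n-1})) + y_{n-1}
    # and y_n = MLP(LN(y'_n)) + y'_n, with a GELU MLP.
    def __init__(self, dim=256, heads=8, mlp_dim=1024):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(),
                                 nn.Linear(mlp_dim, dim))

    def forward(self, y):                                  # y: (B, N + 20, dim)
        h = self.ln1(y)
        y = self.msa(h, h, h, need_weights=False)[0] + y   # y'_n
        return self.mlp(self.ln2(y)) + y                   # y_n
```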
The arm is made of custom carbon fibre, and a single motor drives it. The arm has a gripper at the end that can hold objects up to 1.5 kg. * Base:The base of the robot is a two-wheeled mobile base. The base is made of aluminium, and it has a diameter of 34 cm. Two motors drive the base, which can move at a maximum speed of 0.6 m/s. The Hello Stretch robot is versatile and can be used for various tasks, such as picking and placing objects, assembling products, and providing customer service. The robot is also open source, meaning it can be customized and further developed by the community.The amalgamation of the Stretch Robot and the TomFormer model represents a significant advancement in agricultural technology, providing an intelligent, autonomous, and rapid disease monitoring system. With its ability to cover extensive areas and detect diseases early, the robot empowers farmers and agronomists with timely insights for targeted interventions and disease management strategies. The real-time feedback mechanism further enhances decision-making, as the robot promptly marks the affected areas on the plants, facilitating prompt and precise treatments. As we continue to explore the potential of this integrated system, its deployment in tomato detection showcases the transformative impact of robotics and artificial intelligence in revolutionizing the agricultural landscape.§ EXPERIMENTS AND RESULTS We conduct a comprehensive analysis of the performance of our approach on multiple datasets, including our proprietary dataset and two publicly available datasets known as Plant Village <cit.> andplantDoc <cit.> datasets. The sample images from each dataset are shown in Figure <ref>. We comprehensively analyse the results and perform a comparative evaluation of our method against state-of-the-art transformer-based approaches. The performance evaluation of our proposed method in this study involved using the mean average precision (mAP) metric.The mAP metric considers the balance between precision and recall while also considering the presence of false positives (FP) and false negatives (FN). This particular characteristic renders mAP a suitable metric for various detection applications. The mathematical equation for mAP is shown in Equation <ref>. mAP= 1/N∑_i=1^NAP_i §.§ Experimental Setup The proposed framework has been successfully implemented utilizing the PyTorch deep learning framework. The initial learning rate is set to 0.01 and is decreased by 5% after each epoch. The experimental setup incorporated the following components: NVIDIA GeForce RTX 4090 Ti GPUs with a combined memory capacity of 24GB, the Ubuntu 20.04 operating system, an Intel i9 CPU, and 64GB of RAM. The proposed model comprises 1.30 million network parameters, with an estimated inference time of approximately 200 seconds. After training our model on the aforementioned system, we implemented it on the stretch robot for real-time inference. The stretch robot used Robot Operating System (ROS) and python3. The seamless integration of the TomFormer model into the software environment of the Hello Stretch Robot was instrumental for effective tomato disease detection. We ensured that all the necessary dependencies for the TomFormer model were installed. Subsequently, we created a dedicated interface within the ROS environment. This interface allowed us to receive image data from the robot's built-in depth camera. These images were then passed to the TomFormer model for inference. 
In this workflow, the model is loaded using Python 3; once loaded, it executes the inference process, accurately detecting tomato diseases within the images. The output images containing the detected objects are then saved in the robot repository. This integration and inference workflow was pivotal in enhancing the robot's capabilities for real-time disease management in tomato fields. Figure <ref> shows the inference results displayed on the monitor for an image captured by the Stretch robot.
§.§ KUTomaDATA Dataset
The dataset used in this study comprises 939 images of tomatoes captured within greenhouses in Al Ajban, Abu Dhabi, United Arab Emirates. These images were acquired using mobile phone cameras and encompass a wide range of leaf images, ranging from healthy leaves to those affected by various diseases. To ensure diversity and representation of different disease categories, the dataset has been partitioned into eight distinct classes, determined based on the visual appearance of each disease in publicly available datasets, i.e., PlantDoc <cit.> and PlantVillage <cit.>. The classes included in the KUTomaDATA dataset are: healthy, bacterial spot, early blight, late blight, leaf mold, septoria leaf spot, mosaic virus, and yellow leaf curl. The dataset contains 118, 113, 122, 109, 116, 119, 124, and 118 images for these classes, respectively. Each image was labelled using the Roboflow <cit.> annotator, and the resulting annotations were exported in JSON format; the annotations include the object labels, coordinates, and image dimensions.
§.§ PlantDoc Dataset <cit.>
The dataset was created through the annotation of publicly available images, requiring a total of 300 human hours. It consists of 2,598 data points in total, encompassing 13 plant species and up to 17 types of diseases. For our study, we picked 700 leaf images of healthy and diseased classes from the tomato subset for training and testing. For scientists and developers engaged in plant disease identification, the PlantDoc dataset is a valuable resource: it is large and varied, covering many plant species and ailments, and it is well annotated, making it simple to use for developing and testing a variety of models.
§.§ PlantVillage Dataset <cit.>
The tomato disease images were sourced from the PlantVillage dataset <cit.>, which comprises a collection of over 50,000 images representing 14 distinct crops, such as tomatoes, potatoes, grapes, apples, corn, blueberries, raspberries, soybeans, squash, and strawberries. These images were captured under carefully controlled conditions. From this dataset, we separated eight classes of tomato leaf images: bacterial spot, early blight, late blight, leaf mold, septoria leaf spot, mosaic virus, yellow leaf curl virus, and healthy. The experimental dataset consists of 700 images in total.
§ DISCUSSION
The comprehensive evaluation of the object detection models across the three datasets reveals important insights for tomato leaf disease detection. TomFormer consistently emerged as the top-performing model across all three datasets (KUTomaDATA, PlantDoc <cit.>, PlantVillage <cit.>), indicating its robustness and efficacy in detecting and localizing plant diseases and healthy leaf conditions. Detailed results are presented in the later parts of this section.
However, it is essential to consider various factors, such as model complexity, computational resources, and real-world applicability, when selecting the most suitable model for a specific plant disease identification scenario. The computational capabilities of the Hello Stretch robot are of paramount significance within our integrated framework. The robot is equipped with a high-performance computing unit featuring a multi-core Intel i5-8259U processor paired with 16GB of RAM. This configuration offers substantial computational capacity, rendering it well suited for tasks including image processing and running inference for object detection. The competitive performance of YOLOS <cit.>, DETR <cit.>, ViT <cit.>, and the Swin transformer <cit.> underscores their potential as viable alternatives for tomato disease detection. Their comparative results are presented in Table <ref>. §.§ Results on KUTomaDATA In evaluating object detection models on the KUTomaDATA classes, which comprise a diverse set of plant health conditions and diseases, we observed varying mAP performance levels. TomFormer emerged as the top-performing model with an impressive mAP score of 87%. This outcome showcases TomFormer's ability to effectively identify and localize various plant diseases, such as bacterial spots, early blight, late blight, leaf mold, septoria leaf spot, mosaic virus, yellow leaf curl, and healthy leaves within KUTomaDATA. The mAP scores of 80% achieved by YOLOS <cit.>, 82% by DETR <cit.>, 73% by ViT <cit.>, and 77% by Swin <cit.> demonstrate their competence in detecting and classifying plant health conditions. However, TomFormer's superior mAP score reaffirms its effectiveness in handling the complexities and diversities inherent in this class. §.§ Results on PlantDoc The evaluation results provide further insights into the models' performance on the PlantDoc dataset, which encompasses a subset of healthy and diseased plant classes. TomFormer demonstrated promising performance with an mAP score of 81%, indicating its ability to accurately detect and classify bacterial spots, early blight, late blight, leaf mold, septoria leaf spot, and healthy leaves. Similarly, the models YOLOS <cit.>, DETR <cit.>, ViT <cit.>, and Swin <cit.> achieved competitive mAP scores of 77%, 79%, 71%, and 76%, respectively. The close proximity of the mAP scores suggests that all models possess the competence to address the complexities present in the PlantDoc class effectively. Nevertheless, TomFormer's edge in performance reiterates its potential as a strong contender for tomato disease detection on this dataset.§.§ Results on PlantVillage The PlantVillage class, characterized by a broad spectrum of tomato leaf images with uniform and simple backgrounds, was the subject of our final evaluation. TomFormer once again exhibited strong performance, achieving the highest mAP score of 83%, tied with DETR <cit.>. This outcome highlights TomFormer's proficiency in accurately detecting and localizing various tomato diseases in a uniform environment. While the other models, YOLOS <cit.>, DETR <cit.>, ViT <cit.>, and Swin <cit.>, achieved competitive mAP scores of 80%, 83%, 75%, and 80%, respectively, TomFormer's consistent performance in this class underscores its versatility and adaptability in handling images with uniform backgrounds. §.§ Critical Evaluation The mAP scores for each class vary significantly across different models.
For instance, the performance of TomFormer and YOLOS <cit.> is consistently higher than that of DETR <cit.>, ViT <cit.>, and Swin <cit.> for most of the classes. The exceptions are early blight and septoria leaf spot, which consistently show lower mAP scores across all models; this suggests that these classes are more challenging to detect and classify accurately, mostly due to their visual similarity to other classes. Upon analyzing the table, a notable trend emerges: all the models, except for TomFormer, demonstrated markedly better performance on the PlantVillage dataset than on PlantDoc and KUTomaDATA. This observation aligns with the dataset's characteristics, as it consists of leaf images with a uniform background. The uniformity of the background facilitates object detection and classification, leading to higher mAP values. The higher mAP scores achieved by the other models on the PlantVillage dataset validate their effectiveness in accurately detecting plant diseases in scenarios where the visual appearance of leaves is consistent and well-defined. TomFormer, by contrast, despite its exceptional performance on KUTomaDATA, exhibits a relatively lower mAP score on the PlantVillage dataset. This may be attributed to the dataset's specific challenges: against the uniform background, TomFormer has difficulty distinguishing between diseases with subtle visual differences. Overall, this comparison underscores the importance of using domain-specific datasets that mimic real-world scenarios, as they provide a robust evaluation of object detection models' performance in practical disease diagnosis.§ CONCLUSION In conclusion, this paper introduces TomFormer, a transformer-based model designed for detecting diseases in tomato leaves. It combines a visual transformer and a convolutional neural network to provide an innovative approach to disease detection. The Hello Stretch robot, which uses TomFormer, can diagnose tomato leaf diseases in real time, making it a practical agricultural solution. Additionally, the inclusion of the KUTomaDATA dataset expands the research area. Extensive experiments and comparisons with other transformer models demonstrate TomFormer's robustness, accuracy, efficiency, and scalability, with mAP scores of 87%, 81%, and 83% on the KUTomaDATA, PlantDoc, and PlantVillage datasets, respectively. This work has the potential to significantly benefit the tomato industry by reducing crop losses and improving yields, offering an effective tool for early disease detection and promoting sustainable agricultural practices. pico1996viral B. Picó, M. J. Díez, and F. Nuez, “Viral diseases causing the greatest economic losses to the tomato crop. ii. the tomato yellow leaf curl virus—a review,” Scientia Horticulturae, vol. 67, no. 3-4, pp. 151–196, 1996.wang2020rin R. Wang, M. Lammers, Y. Tikunov, A. G. Bovy, G. C. Angenent, and R. A. de Maagd, “The rin, nor and cnr spontaneous mutations inhibit tomato fruit ripening in additive and epistatic manners,” Plant Science, vol. 294, p. 110436, 2020.LIN2022111 T. Lin, Y. Wang, X. Liu, and X. Qiu, “A survey of transformers,” AI Open, vol. 3, pp. 111–132, 2022.Bharate2017ARO A. A. Bharate and M. S. Shirdhonkar, “A review on plant disease detection using image processing,” 2017 International Conference on Intelligent Sustainable Systems (ICISS), pp. 103–109, 2017.jab-201803-0002 M. E.
El Houby, “A survey on applying machine learning techniques for management of diseases,” Journal of Applied Biomedicine, vol. 16, no. 3, pp. 165–174, 2018.brahimi2017deep M. Brahimi, K. Boukhalfa, and A. Moussaoui, “Deep learning for tomato diseases: classification and symptoms visualization,” Applied Artificial Intelligence, vol. 31, no. 4, pp. 299–315, 2017.DBLP:journals/mta/HuangCZZWPYJ23 X. Huang, A. Chen, G. Zhou, X. Zhang, J. Wang, N. Peng, N. Yan, and C. Jiang, “Tomato leaf disease detection system based on FC-SNDPN,” Multim. Tools Appl., vol. 82, no. 2, pp. 2121–2144, 2023.hughes2015open D. Hughes, M. Salathé, et al., “An open access repository of images on plant health to enable the development of mobile disease diagnostics,” arXiv preprint arXiv:1511.08060, 2015.YU2023100650 S. Yu, L. Xie, and Q. Huang, “Inception convolutional vision transformers for plant disease identification,” Internet of Things, vol. 21, p. 100650, 2023.borhani2022deep Y. Borhani, J. Khoramdel, and E. Najafi, “A deep learning based approach for automated plant disease classification using vision transformer,” Scientific Reports, vol. 12, no. 1, p. 11554, 2022.alshammari2022olive H. Alshammari, K. Gasmi, I. Ben Ltaifa, M. Krichen, L. Ben Ammar, and M. A. Mahmood, “Olive disease classification based on vision transformer and cnn models,” Computational Intelligence and Neuroscience, vol. 2022, 2022.karthik2020attention R. Karthik, M. Hariharan, S. Anand, P. Mathikshara, A. Johnson, and R. Menaka, “Attention embedded residual cnn for disease detection in tomato leaves,” Applied Soft Computing, vol. 86, p. 105933, 2020.zeng2020crop W. Zeng and M. Li, “Crop leaf disease recognition based on self-attention convolutional neural network,” Computers and Electronics in Agriculture, vol. 172, p. 105341, 2020.zhou2021vegetable J. Zhou, J. Li, C. Wang, H. Wu, C. Zhao, and Q. Wang, “A vegetable disease recognition model for complex background based on region proposal and progressive learning,” Computers and Electronics in Agriculture, vol. 184, p. 106101, 2021.dosovitskiy2020image A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.carion2020end N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in European conference on computer vision, pp. 213–229, Springer, 2020.ba2016layer J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016.baevski2018adaptive A. Baevski and M. Auli, “Adaptive input representations for neural language modeling,” arXiv preprint arXiv:1809.10853, 2018.wang2019learning Q. Wang, B. Li, T. Xiao, J. Zhu, C. Li, D. F. Wong, and L. S. Chao, “Learning deep transformer models for machine translation,” arXiv preprint arXiv:1906.01787, 2019.hendrycks2016gaussian D. Hendrycks and K. Gimpel, “Gaussian error linear units (gelus),” arXiv preprint arXiv:1606.08415, 2016.stretch C. C. Kemp, A. Edsinger, H. M. Clever, and B. Matulevich, “The design of stretch: A compact, lightweight mobile manipulator for indoor human environments,” in 2022 International Conference on Robotics and Automation (ICRA), pp. 3150–3157, IEEE, 2022.DBLP:journals/corr/HughesS15 D. P. Hughes and M. 
Salathé, “An open access repository of images on plant health to enable the development of mobile disease diagnostics through machine learning and crowdsourcing,” CoRR, vol. abs/1511.08060, 2015.singh2020plantdoc D. Singh, N. Jain, P. Jain, P. Kayal, S. Kumawat, and N. Batra, “Plantdoc: A dataset for visual plant disease detection,” in Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pp. 249–253, 2020.Roboflow B. Dwyer, J. Nelson, and J. Solawetz, “Roboflow annotate.” <https://roboflow.com/annotate>, 2022.fang2021you Y. Fang, B. Liao, X. Wang, J. Fang, J. Qi, R. Wu, J. Niu, and W. Liu, “You only look at one sequence: Rethinking transformer in vision through object detection,” Advances in Neural Information Processing Systems, vol. 34, pp. 26183–26197, 2021.liu2021swin Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in Proceedings of the IEEE/CVF international conference on computer vision, pp. 10012–10022, 2021.
http://arxiv.org/abs/2312.16331v1
{ "authors": [ "Asim Khan", "Umair Nawaz", "Lochan Kshetrimayum", "Lakmal Seneviratne", "Irfan Hussain" ], "categories": [ "eess.IV", "cs.AI", "cs.CV" ], "primary_category": "eess.IV", "published": "20231226204723", "title": "Early and Accurate Detection of Tomato Leaf Diseases Using TomFormer" }
Optimistic and Pessimistic Actor in RL: Decoupling Exploration and Utilization Jingpu Yang^1, Qirui Zhao^1, Helin Wang^1, Yuxiao Huang^1, Zirui Song^2, Miao Fang^1,3** Corresponding author ([email protected]) ^1Northeastern University, Shenyang, China ^2University of Technology Sydney, Sydney, Australia ^3Olimei Company, Guangzhou, China January 14, 2024. The generalization of deep neural networks (DNNs) is limited by the over-reliance of current offline reinforcement learning techniques on conservative processing of existing datasets. This approach frequently results in algorithms that settle for suboptimal solutions that fit only a certain dataset. Similarly, in online reinforcement learning, the previously imposed punitive pessimism also deprives the model of its exploratory potential. Our research proposes a novel framework, Optimistic and Pessimistic Actor Reinforcement Learning (OPARL). OPARL employs a unique dual-actor approach: an optimistic actor dedicated to exploration and a pessimistic actor focused on utilization, thereby effectively differentiating between exploration and utilization strategies. This unique combination in reinforcement learning methods fosters a more balanced and efficient approach. It enables the optimization of policies that focus on actions yielding high rewards through pessimistic utilization strategies, while also ensuring extensive state coverage via optimistic exploration. Experiments and theoretical analysis demonstrate that OPARL improves agents' capacities for utilization and exploration. On most tasks of the DMControl benchmark and the MuJoCo environment, OPARL performed better than state-of-the-art methods. Our code has been released at https://github.com/yydsok/OPARL. Index Terms: Decoupling, DMControl Benchmark, Dual Actor Method, Mujoco Environment, Ensemble Q § INTRODUCTION In recent years, Reinforcement Learning (RL) has begun to show significant empirical success<cit.>, and DNNs have played an important role through value function approximation. Deep RL has achieved considerable success in various fields such as robotics<cit.>, recommendation systems<cit.>, and strategy games<cit.>. However, the generalization of DNNs in RL remains problematic. Previous approaches introduced conservative ideas to alleviate these problems, such as behavior cloning<cit.>. Offline RL aims to overcome this problem by using only existing data to learn strategies without further interaction with the environment. It aims to figure out the optimal strategy from existing static datasets, but if the coverage of the datasets is insufficient, the conservative constraints make further exploration difficult; ordinary RL algorithms are severely affected by extrapolation errors and overestimate the Q-values of out-of-distribution (OOD) state-action pairs<cit.>, and the learning system never receives honest evaluations of these values. Consequently, many methods for addressing OOD situations involve imposing behavioral constraints on existing algorithms, guiding the learning process towards a more conservative approach<cit.>, or penalizing the Q-values of OOD state-action pairs to force them to be more pessimistic<cit.>.
Online reinforcement learning often focuses on the challenge of exploring and deciding what kind of data to collect. In areas with high data collection costs, such as robotics<cit.>, healthcare<cit.>, and operational research<cit.>, the ability to reuse data from previous tasks is particularly important for RL. However, previous online learning works<cit.> have not fully utilized the generalization ability of DNNs, frequently resulting in suboptimal solutions that are overly reliant on the specific characteristics of the training dataset. This tendency has led to a diminution of the model's exploratory ability and, consequently, has weakened its overall generalization performance; we visualize this in <ref>. Previous works therefore appeal to a general form of the optimism-under-uncertainty principle: overestimation of expected rewards can trigger exploration of states and actions that would not otherwise have been explored. However, without a clear understanding of the nature of this overestimation, such exploration may be dangerous. There have long been two approximately opposing methods in RL for continuous control problems. On the one hand, some authors attempt to correct overestimation, for example by using the minimum of the estimated values, or by utilizing two or more value estimates as an approximate lower bound. This method can be seen as a form of pessimism towards the current value function. On the other hand, it is believed that inherent optimism in the estimated approximations actually encourages exploration of the environment and action space in pursuit of greater reward. Both sides have derived state-of-the-art algorithms from their respective positions<cit.>, indicating that the two methods need not be opposed. Therefore, we combine these two seemingly opposing approaches so as to maximize the advantages of both sides and achieve complementarity. Combining the two perspectives of our predecessors, we establish the concepts of tactical optimism and tactical pessimism. Both methods have demonstrated significant variation across different environments. Our goal is to reconcile these seemingly contradictory viewpoints, under the assumption that the relative contributions of the two components can vary according to the nature of the task. We decouple the role of the actor in Actor-Critic models by employing two distinct actors, namely an optimistic actor and a pessimistic actor, for exploration and exploitation tasks, respectively. An optimistic actor can assist in exploration; however, if there is a significant estimation error, a pessimistic actor is needed to stabilize learning. The framework we propose allows a highly optimistic exploration policy to achieve wide state coverage, increasing the likelihood of discovering high-reward areas. However, this approach alone may not yield optimal strategies. Conversely, we employ tactical pessimism in policy recovery to maximize the utilization of the high-reward behaviors that have been identified. In our approach, we engage with the environment using a tactically optimistic exploration strategy, while employing tactical pessimism when evaluating all data observed so far.
This tactically pessimistic utilization strategy is designed to maximize long-term rewards, effectively mitigating the biases introduced by short-term, localized rewards. Therefore, even if the exploration strategy leads to behavior that is suboptimal in terms of task reward, the utilization strategy can still recover it to support further exploration. More subtly, this allows the exploration strategy to search for better states and rewards. Ultimately, the exploration strategy can generate better data, making the final performance less sensitive to the balance between internal and external rewards, which helps the model explore better areas in different environments. We demonstrate in a series of experiments that OPARL balances pessimistic and optimistic algorithms, with a wider exploration range (from the optimistic exploration method) and more stable performance (from the pessimistic utilization method), achieving the best of both worlds and setting a new state of the art for challenging continuous control problems. Our main contributions are: 1. Our research substantiates that conducting exploration and training under highly optimistic estimations, characterized in particular by an overestimation of expected rewards, is not only feasible but also yields significant results. 2. Our research introduces a novel framework that decouples the two functions of the actor: an optimistic actor conducts a broader range of exploration, while a pessimistic actor provides stable adjustments to the model. This allows for a comprehensive balance between optimistic and pessimistic value estimations in online reinforcement learning. The framework, termed OPARL, integrates tactical optimism with tactical pessimism, enabling the model to explore a wider array of high-reward areas and to select more rational actions to maximize rewards. 3. Our experiments show that OPARL effectively enhances the model's generalization ability. Our model achieves better states and rewards in the MuJoCo and DMControl environments while still maintaining the stability of the original model.§ RELATED WORK§.§ Optimistic Exploration and Pessimistic Exploration Optimistic exploration and pessimistic exploration have long been controversial points in exploratory learning<cit.>. Some scholars believe that in the face of uncertain environments, it is necessary to use optimism-principled (reward-maximizing) algorithms to explore state-action pairs with high epistemic uncertainty<cit.>, and that this overestimation bias may not always be harmful; in some cases it is underestimation bias that can be harmful. Overestimation bias can help encourage exploration<cit.> of overestimated actions, while underestimation bias may hinder exploration. If highly stochastic regions correspond to high-value regions, encouraging exploration may be beneficial, whereas underestimation bias may prevent agents from learning the high value of such regions. However, if highly stochastic areas also have low values, overestimation bias may lead the agent to over-explore low-value areas. Therefore, another group of scholars believes that it is necessary to regularize highly uncertain state-action pairs. TD3 selects the smaller of two Q-values<cit.> as an approximate lower bound, building on the DDPG<cit.> algorithm, and explores the environment in a pessimistic manner.
By tempering value estimates through such pessimistic exploration, agents can attain better long-term rewards. Our model retains this approach to minimize extrapolation errors.§.§ Ensemble Q It has long been recognized that maximization bias in Q-learning<cit.> can significantly hinder learning. Thrun and Schwartz first emphasized the existence of maximization bias<cit.>. Van Hasselt proposed Double Q-learning<cit.> to address overestimation, which can generally reduce bias. Van Hasselt et al. showed that adding Double Q-learning<cit.> to deep Q-networks (DQN)<cit.> provided a significant performance improvement on the Atari game benchmarks. For continuous action spaces, Clipped Double Q-learning (CDQ)<cit.> further reduces the maximization bias and brings significant improvements to the Deep Deterministic Policy Gradient (DDPG)<cit.> algorithm. CDQ<cit.> was later combined with entropy maximization in SAC to achieve stronger performance<cit.>. Other bias reduction techniques include using bias correction terms<cit.>, using weighted Q-estimates<cit.>, penalizing deterministic strategies in the early stages of training<cit.>, using multi-step methods<cit.>, performing weighted Bellman updates to mitigate error propagation<cit.>, and using distributional networks to truncate sampled Q-estimates<cit.>. It has also long been recognized that ensembles can improve the performance of deep RL algorithms. For Q-learning based methods, Anschel et al. used the average of multiple Q-estimates to reduce variance<cit.>. Agarwal et al. introduced Random Ensemble Mixture (REM), which enforces optimal Bellman consistency on random convex combinations of multiple Q-estimates<cit.>. Lan et al. introduced Maxmin Q-learning<cit.>; REDQ<cit.> ensembles more Q-functions and trains at a high update-to-data (UTD) ratio, further improving performance. §.§ Actor-Critic In the realm of reinforcement learning, the Actor-Critic (AC) approach synergistically combines the merits of policy-based and value-based methods<cit.>. Since its inception by Konda and Tsitsiklis in 2000, the AC methodology has evolved into various forms.
Notably, the A2C (Advantage Actor-Critic) and A3C (Asynchronous Advantage Actor-Critic) algorithms, through the incorporation of advantage functions and parallel training, have respectively enhanced learning stability and efficiency<cit.>. These methodologies have demonstrated efficacy in applications such as robotic control and high-dimensional game AI<cit.>. Despite their superiority in continuous action spaces over traditional Q-learning, AC methods still confront challenges regarding sample efficiency and training stability. Future developments integrating deep learning techniques with AC approaches, as well as exploring hybrid models combining AC with other reinforcement learning paradigms, hold significant potential<cit.>.§ PROBLEM SETUP We formulate the problem as a Markov Decision Process (MDP) M ≡ (S, A, R, p, γ), where: * S represents the state space, * A represents the action space, * p represents the transition dynamics, * R represents the reward function, * γ ∈ [0, 1) represents the discount factor. For a given state s ∈ S, the (deterministic) policy π maps the state to an action; the agent selects the action a ∈ A according to π and receives a reward r and a new state s' from the environment. Our goal is to learn behavior that maximizes the return, defined as the total discounted reward R_t = ∑_i=t^T γ^i-t r(s_i, a_i), with the state-action value function Q_π(s, a) = 𝔼_π[ ∑_t=0^∞ γ^t r(s_t, a_t) | s_0=s, a_0=a ]. To learn behavior that maximizes rewards, we first need to fill an experience replay buffer covering high-reward regions. The buffer stores tuples (s, a, s', r, d_b) and is collected using the tactically optimistic behavior policy π_opt: the optimistic action a_opt is obtained from π_opt, a new state s' is obtained by interacting with the environment, and the reward r together with the termination flag d_b is added to the experience pool so as to reach more high-reward regions. When using data from the experience pool, we sample states s, actions a, new states s', rewards r, and termination flags d_b from the buffer, and use tactical pessimism to obtain more reasonable actions, thereby maximizing the reward. The specific algorithms for both phases are presented in detail in the next section.
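The data-collection loop just described admits a compact sketch; the following is our illustration (not the authors' code), assuming the classic OpenAI Gym API and an arbitrary callable pi_opt standing in for the optimistic behavior policy π_opt.

from collections import deque

def collect(env, pi_opt, steps, capacity=10**6):
    """Fill a replay buffer with transitions (s, a, s', r, d_b) using pi_opt."""
    buffer = deque(maxlen=capacity)         # bounded FIFO experience pool
    s = env.reset()
    for _ in range(steps):
        a = pi_opt(s)                       # optimistic action a_opt
        s2, r, done, _ = env.step(a)        # interact with the environment
        buffer.append((s, a, s2, r, done))  # `done` plays the role of d_b
        s = env.reset() if done else s2
    return buffer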
§ OPTIMISTIC AND PESSIMISTIC ACTORS IN RL The purpose of this section is to develop a framework that decouples the two functions of the actor: one is to collect experience with tactical optimism so as to explore more high-reward areas; the other is to penalize, with tactical pessimism, the Q-values of OOD state-action pairs so as to obtain more reasonable actions within the high-reward areas and maximize the reward. Decoupling enables us to explore the environment optimistically while eliminating the bias of reward overestimation from the evaluation strategy. Most online reinforcement learning approaches incorporate some degree of noise in their exploration strategies to enable a wider exploration scope; this fact is implicitly exploited in many standard RL algorithms, such as ϵ-greedy in DQN, Gaussian noise in TD3 and SAC, or OU noise in DDPG. However, these methods are often insufficient for comprehensive exploration. Our framework, OPARL, aggregates the Q-values of multiple state-action pairs to obtain the widest exploration area. We introduce the general OPARL framework in Section A; it can be combined with any RL algorithm. In Section B, we delve into the mathematical perspective of combining optimistic exploration with pessimistic exploitation, discussing how this approach enables an agent to achieve high returns when faced with challenging exploration problems. In Section C, we explore the advantages of combining offline algorithms with online algorithms. §.§ The OPARL Framework Our framework is divided into two parts: an optimistic exploration phase and a pessimistic utilization phase. In the exploration phase, we carry out optimistic exploration every k steps: the state is passed through the optimistic policy to obtain an action with a large Q-value; looping v times yields v candidate actions, and feeding each action together with the state into the critic ensemble yields v sets of Q-values. We select the action whose Q-values have the highest variance as the action derived from optimistic exploration. The reward r and the next state s_t+1 are obtained through environment interaction and are then stored in the buffer together; otherwise, we select the action with the smallest Q-value obtained through the pessimistic policy, acquire the reward and the subsequent state s_t+1, and likewise add them to the buffer. For every p steps of exploration we perform p training updates; through this high exploration ratio, agents can explore more high-reward regions. In the utilization phase, the pessimistic policy is selected: it maps the state to the action with the smallest Q-value, and the resulting state-action pair is used to compute the loss, thereby facilitating the update of the parameters. Every w steps, the parameters of the pessimistic policy are copied into the optimistic policy to achieve a deeper combination of the pessimistic and optimistic strategies. A minimal sketch of the resulting action selection follows.
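In the sketch below (ours, with PyTorch-style names assumed), actor_opt and actor_pes are the two actors and critics is the list of N ensemble Q-networks; adding Gaussian noise to obtain the v candidates is our assumption, since the text only states that the optimistic policy is queried v times.

import torch

def explore_action(state, actor_opt, critics, v=10, noise_std=0.1):
    """Optimistic exploration: pick the candidate whose ensemble Q-values disagree most."""
    base = actor_opt(state)
    candidates = [base + noise_std * torch.randn_like(base) for _ in range(v)]
    # q[c, i] holds Q_i(state, candidate c) over the N ensemble critics.
    q = torch.stack([torch.stack([qi(state, a) for qi in critics])
                     for a in candidates])
    return candidates[int(q.var(dim=1).argmax())]  # highest variance = widest exploration

def exploit_value(state, action, critics):
    """Pessimistic utilization: evaluate an action by the minimum over the ensemble."""
    q = torch.stack([qi(state, action) for qi in critics])
    return q.min(dim=0).values

The exploration branch is invoked every k steps, while the pessimistic branch supplies both the remaining actions and the value estimates used during training.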
§.§ The Algorithm of OPARL Formulas <ref>, <ref>, and <ref> represent the target values whose exploration ranges we consider to be pessimistic, optimistic, and randomly determined over the Q_i, i = 1, …, N, respectively. As we can see from formulas <ref> and <ref>, optimistic exploration has the potential to reach more high-reward regions, since it can take more actions while guaranteeing that the expectations of those actions stay the same. Likewise, formulas <ref> and <ref> show that, while expectations do not change, the range of actions undertaken by pessimistic exploration does. This indicates that, rather than selecting the actions with the highest short-term returns, pessimistic exploration is more likely to select reasonable actions from high-return areas. OPARL combines optimism with pessimism, allowing it to explore a wider range and achieve greater long-term returns compared to other models:

y_1 = r + γ min_i=1,2,…,N Q_θ'_i(s', a),

y_2 = r + γ max_i=1,2,…,N Q_θ''_i(s', a),

y' = r + γ Q_θ'_i(s', a), with the index i drawn at random from {1, 2, …, N},

min_θ_i N^-1 ∑ (y_1 − Q_θ_i(s,a))^2 ≤ min_θ_i N^-1 ∑ (y' − Q_θ_i(s,a))^2,

max_θ_i N^-1 ∑ (y_2 − Q_θ_i(s,a))^2 ≤ max_θ_i N^-1 ∑ (y' − Q_θ_i(s,a))^2,

Q_θ_i(s_t, a_t) = r_t + γ 𝔼[Q_θ(s_t+1, a_t+1)].

§.§ Parameter Reset We now consider loading the parameters ϕ_pes of the pessimistic exploration strategy into the parameters ϕ_opt of the optimistic exploration strategy. The objective of this integration is to enable the optimistic exploration strategy to assimilate experience and knowledge from the pessimistic exploration strategy, thereby enhancing its exploratory efficacy. By loading the state dictionary of the pessimistic exploration strategy into the optimistic exploration strategy, the optimistic strategy can restart from parameters similar to those of the pessimistic strategy. This helps it avoid deviating excessively from well-established exploration behavior. Meanwhile, as training progresses, the optimistic exploration strategy can optimize its behavior policy by continually updating its parameters, thereby gradually developing more effective exploration techniques for the task at hand. Furthermore, this ensures that the target policy network consistently tracks the current policy network during training. This approach can be used to refine any exploration model that incorporates both optimism and pessimism.
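In PyTorch terms the reset amounts to a single state-dictionary copy; a minimal sketch (names assumed, w being the reset interval):

# Every w environment steps, restart the optimistic actor from the
# pessimistic actor's weights (phi_pes -> phi_opt).
if total_steps % w == 0:
    actor_opt.load_state_dict(actor_pes.state_dict())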
§ EXPERIMENTS Our experiments investigate the effectiveness of decoupling exploration and exploitation in a series of tasks with varying levels of exploration requirements. We introduce the baselines and environment requirements in Section A. In Section B, we compare several basic online RL algorithms (TD3, SAC, and PPO<cit.>) with our model in the MuJoCo environments<cit.>. In Section C, we test our model in the more complex DMControl environments<cit.>, comparing it with baseline models such as TD3 and SAC, as well as advanced models like REDQ, followed by an analysis. In Section D, we conduct several ablation studies aimed at demonstrating the effectiveness of decoupled training; these studies analyze the performance of OPARL under complete optimism, complete pessimism, and the absence of an ensemble Q.§.§ Evaluation Setting Baseline: We compare the proposed OPARL algorithm with several well-established baselines from the literature. Specifically, we choose TD3, SAC, and PPO as our main baselines because they typically perform well across a variety of tasks. In addition, in the DMControl environment, we also compare with REDQ, which uses Q-ensembling to enhance the exploration range. To ensure a fair comparison, we use the cleanRL implementations of TD3, SAC<cit.>, and PPO available on GitHub with the default hyperparameters provided by the authors, while the REDQ numbers are cited from the relevant papers. Environments: Our experimental suite comprises the state-based DMControl suite<cit.> and four state-based continuous control tasks from the MuJoCo<cit.><cit.> framework accessed via OpenAI Gym<cit.>, which provides a variety of environments for benchmarking RL algorithms. We use Gym to facilitate the interaction between algorithms and environments. We evaluate each algorithm over one million time steps in the MuJoCo environments, reporting every 5k time steps the reward averaged over 5 random seeds, with the variance indicating the upper and lower limits of the fluctuation. In the DMControl environment, to stay consistent with the data cited from related papers, we report every 10k time steps the reward averaged over 10 random seeds and use half the variance as the upper and lower limits of the model's fluctuation. Setup: The degree of optimism and pessimism is controlled by the ensemble size; we initially set the number of Q-functions to 5. To balance optimism and pessimism, we set the ratio of optimistic to pessimistic exploration to 1:1 and, every 20,000 time steps, load the parameters ϕ_pes of the pessimistic exploration strategy into the parameters ϕ_opt of the optimistic exploration strategy. The other parameters are consistent with those of the backbone model, TD3<cit.>. We run each tested algorithm with five random seeds. For more implementation details, please refer to the appendix. §.§ Evaluation of OPARL in the MuJoCo Environment Our results in the MuJoCo environments are shown in the figure. First, we find that our model demonstrates strong performance in the Hopper and Ant environments; in most cases, OPARL consistently outperforms its backbone algorithm TD3. As the number of steps increases, the model gradually stabilizes, and our model has the smallest amplitude of fluctuation among all compared models, indicating that OPARL improves learning stability. In summary, our model performs much better than the other models in the Ant environment, scoring 44.04%, 51.25%, and 773.02% higher than the SAC, TD3, and PPO<cit.> models, respectively. §.§ Evaluation of OPARL in the DMControl Environment The appendix shows the learning curves from scratch. Table 1 reports the average performance after 1M time steps of training. The results demonstrate the following: i) OPARL outperforms the other tested algorithms in most (14 out of 18) environments; specifically, it outperforms REDQ, SAC, and TD3 by 14.30%, 4.08%, and 7.36%, respectively; ii) OPARL greatly improves on its backbone algorithm TD3 by ensembling multiple Q-functions and balancing exploration and utilization. Performance in complex environments such as swimmer-swimmer6 and swimmer-swimmer15 improved by 71.04% and 100.59%, respectively. §.§ Ablation Study We conducted a series of ablation experiments on the structural components introduced into our model. These included exclusively using optimistic exploration (selecting the maximum value from the ensemble of five Q-functions), exclusively using pessimistic exploration (selecting the minimum value from the ensemble of five Q-functions), and not incorporating additional Q-functions, thus keeping the ensemble size at 2. We show the evaluation results in Figure 2. Yarats<cit.> showed that, with sufficient exploration and state coverage, standard RL can recover a performant strategy without pessimism; indeed, previous work on decoupled policy learning trained the exploitation policy with standard RL<cit.>.
The results of the ablation experiments show that optimistic exploration needs to be combined with pessimistic learning to achieve better performance.§ CONCLUSION We propose OPARL, a simple reinforcement learning framework that decouples exploration and utilization strategies, employing optimistic exploration and pessimistic learning to achieve better results in online RL. We have proven that varying degrees of optimism play a significant role across tasks and learning processes. Because previous deep Actor-Critic algorithms relied on a single fixed degree of optimism, their strategies were unable to select the most reasonable actions from a long-term perspective. We therefore introduced OPARL, whose decoupling allows us to combine exploration rewards more actively, thereby improving the coverage of the online data and the final strategy used for evaluation. Our experiments demonstrate that our approach significantly enhances performance across a broad range of scenarios. Notably, it achieves rapid learning in challenging environments such as Ant and Humanoid. Furthermore, it adapts effectively to the more complex DMControl settings, yielding impressive results.
http://arxiv.org/abs/2312.15965v1
{ "authors": [ "Jingpu Yang", "Qirui Zhao", "Helin Wang", "Yuxiao Huang", "Zirui Song", "Miao Fang" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231226090323", "title": "Optimistic and Pessimistic Actor in RL:Decoupling Exploration and Utilization" }
Sobolev Institute of Mathematics, Acad. Koptyug ave. 4, 630090 Novosibirsk, Russia. Novosibirsk State Agrarian University, Dobrolyubova str., 160, 630039 Novosibirsk, Russia. Regional Scientific and Educational Mathematical Center of Tomsk State University, Lenin ave. 36, 634009 Tomsk, Russia. [email protected] Department of Mathematical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81, SAS Nagar, P O Manauli, Punjab 140306, India. [email protected] Department of Mathematical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81, SAS Nagar, P O Manauli, Punjab 140306, India. [email protected] Mathematics Subject Classification (2020): Primary 20F55, 20F36; Secondary 18N50. Twin groups are planar analogues of Artin braid groups and play a crucial role in the Alexander-Markov correspondence for the isotopy classes of immersed circles on the 2-sphere without triple and higher intersections. These groups admit diagrammatic representations, leading to maps obtained by the addition and deletion of strands. This paper explores Brunnian twin groups, which are subgroups of twin groups composed of twins that become trivial when any of their strands are deleted. We establish that Brunnian twin groups consisting of more than two strands are free groups. Furthermore, we provide a necessary and sufficient condition for a Brunnian doodle on the 2-sphere to be the closure of a Brunnian twin. Additionally, we delve into two generalizations of Brunnian twins, namely, k-decomposable twins and Cohen twins, and prove some structural results about these groups. We also investigate a simplicial structure on pure twin groups that admits a simplicial homomorphism from Milnor's construction of the simplicial 2-sphere. This opens up the possibility of a combinatorial description of homotopy groups of the 2-sphere in terms of pure twins. Brunnian planar braids and simplicial groups Mahender Singh § INTRODUCTION The twin group, or the planar braid group T_n on n ≥ 2 strands, is a right-angled Coxeter group generated by n-1 involutions subject only to far commutativity relations. These groups appeared in the work of Khovanov <cit.> on real K(π, 1) subspace arrangements and were further investigated in <cit.>. Twin groups have a geometrical interpretation similar to the one for Artin braid groups <cit.>. We fix parallel lines y = 0 and y = 1 on the plane ℝ^2 with n marked points on each line. Consider the set of configurations of n strands in the strip ℝ × [0,1] connecting the n marked points on the line y=1 to those on the line y=0 such that each strand is monotonic and no three strands have a point in common. Two such configurations are equivalent if one can be deformed into the other by a homotopy of strands, keeping the end points fixed throughout the homotopy. Such an equivalence class is called a twin. Placing one twin on top of another and rescaling the interval turns the set of all twins on n strands into a group isomorphic to T_n. The generators t_i of T_n can be geometrically represented by configurations as shown in Figure <ref>. Analogous to classical knot theory, it is evident that the closure of a twin gives a doodle on the 2-sphere. In general, a doodle on a closed surface is a collection of finitely many piecewise-linear closed curves without triple intersections.
These objects first appeared in the work of Fenn and Taylor <cit.>. In <cit.>, Khovanov proved that every oriented doodle on the 2-sphere is the closure of a twin. A Markov theorem for doodles on the 2-sphere has been established by Gotin <cit.>, although the idea has been implicit in <cit.>. These constructions have been generalised by Bartholomew-Fenn-Kamada-Kamada <cit.>, where they consider a collection of immersed circles in closed oriented surfaces of arbitrary genus. The pure twin group, denoted PT_n, is defined as the kernel of the natural homomorphism from the twin group T_n to the symmetric group S_n, which maps the twin t_i to the transposition (i, i + 1). A nice topological interpretation of PT_n is known due to Khovanov. Consider the space X_n = ℝ^n ∖ { (x_1, …, x_n) ∈ ℝ^n | x_i = x_j = x_k for i ≠ j ≠ k ≠ i }, which is the complement of the triple diagonals x_i = x_j = x_k. In <cit.>, Khovanov proved that the fundamental group π_1(X_n) is isomorphic to PT_n. Prior to this, Björner and Welker <cit.> had investigated the cohomology of these spaces, establishing that each H^i(X_n, ℤ) is free. Simplicial structures on braid groups are connected with homotopy groups of some manifolds <cit.>. Notably, they provide a description of elements in homotopy groups of the 2-sphere in terms of Brunnian braids <cit.>, with a generalization to higher dimensional spheres <cit.>. A set of generators for the Brunnian braid group of a surface other than the 2-sphere and the projective plane has been provided in <cit.>. Furthermore, Brunnian subgroups of mapping class groups have been considered in <cit.>. In this paper, we explore simplicial structures on pure twin groups. The geometrical interpretation of elements in twin groups allows us to define face and degeneracy maps obtained by the deletion and addition of strands, thereby transforming the family of pure twin groups into a simplicial group. We adopt the approach introduced by Cohen and Wu in <cit.> for Artin pure braid groups. The paper is organised as follows. In Section <ref>, we prove that the natural maps of deletion and addition of strands turn the sequence { T_n }_n ≥ 1 into a bi-Δ-set, whereas the sequence {PT_n}_n ≥ 1 is turned into a bi-Δ-group (Proposition <ref>). In Section <ref>, we investigate Brunnian twins, which are twins that become trivial when any one of their strands is removed. We prove that the group Brun(T_n) of Brunnian twins on n strands is free for n ≥ 3 (Proposition <ref>), and give an infinite free generating set for Brun(T_4) (Theorem <ref>). In Section <ref>, we consider two generalisations of Brunnian twins, namely, k-decomposable twins and Cohen twins. A twin is k-decomposable if it becomes trivial after removing any k of its strands. We give a complete description of k-decomposable twins on n ≥ 4 strands (Proposition <ref>). A twin on n strands is said to be Cohen if the twins obtained by removing any one of its strands are all the same. We give a characterisation for a twin to be Cohen (Theorem <ref>). In Section <ref>, we consider Brunnian doodles on the 2-sphere, and prove that an m-component Brunnian doodle on the 2-sphere is the closure of a Brunnian twin if and only if its twin index is m (Theorem <ref>). In Section <ref>, we observe that pure twin groups admit the structure of a simplicial group SPT_*. We relate it with the well-known Milnor construction for simplicial spheres by establishing a homomorphism Θ : F[S^2]_* ⟶ SPT_* of simplicial groups.
We also identify some low degree terms of the image of Θ as free groups (Theorem <ref>). A complete description of the image of Θ would open up the possibility of a combinatorial description of homotopy groups of the 2-sphere in terms of pure twins.§ BI-Δ-SET STRUCTURE ON TWIN AND PURE TWIN GROUPS For n ≥ 2, the twin group T_n on n strands is generated by {t_1, …, t_n-1} with the defining relations t_i^2 = 1 for 1 ≤ i ≤ n-1 and t_i t_j = t_j t_i for |i-j| ≥ 2. Clearly, each T_n is a right-angled Coxeter group. Further, there is a surjective homomorphism ν: T_n → S_n that sends the generator t_i to the transposition τ_i = (i,i+1) in the symmetric group S_n. Its kernel, denoted PT_n, is called the pure twin group. It is not difficult to see that PT_2 is trivial and PT_3 is the infinite cyclic group generated by the pure twin (t_1 t_2)^3 <cit.>. Figure <ref> represents the pure twin (t_1 t_2)^3. Let us consider the following definitions <cit.>. A sequence of sets {G_n}_n ≥ 0 is called a Δ-set if there are maps d_i: G_n → G_n-1 for each 0 ≤ i ≤ n such that d_j d_i = d_i d_j+1 for all j ≥ i. The maps d_i are called face maps. If each G_n is a group and each face map is a group homomorphism, then {G_n}_n ≥ 0 is called a Δ-group. A sequence of sets {G_n}_n ≥ 0 is called a bi-Δ-set if there are face maps d_i: G_n → G_n-1 and coface maps d^i: G_n-1 → G_n for each 0 ≤ i ≤ n such that the following identities hold: * d_j d_i = d_i d_j+1 for j ≥ i, * d^j d^i = d^i+1 d^j for j ≤ i, * d_j d^i = d^i-1 d_j for j < i, * d_j d^i = 𝕀 for j = i, * d_j d^i = d^i d_j-1 for j > i. Moreover, if each G_n is a group and each face and coface map is a group homomorphism, then {G_n}_n ≥ 0 is called a bi-Δ-group. We define a bi-Δ-set structure on twin groups that induces a bi-Δ-group structure on pure twin groups. For geometrical reasons, we take G_n = T_n+1 or PT_n+1 for each n ≥ 0. For each 0 ≤ i ≤ n, define the map d_i: T_n+1 → T_n that deletes the (i+1)-th strand from the diagram of a twin on n+1 strands. Note that d_i is not a group homomorphism, but it satisfies d_i(uw) = d_i(u) d_ν(u)(i+1)-1(w) for all u, w ∈ T_n+1, where ν: T_n+1 → S_n+1 is the natural surjection. On the other hand, we have d_i(PT_n+1) ⊆ PT_n for each 0 ≤ i ≤ n. Further, it follows from (<ref>) that d_i: PT_n+1 → PT_n is a surjective group homomorphism for each 0 ≤ i ≤ n. The homomorphism d_i: PT_n+1 → PT_n has an alternative interpretation. Consider the space X_n = ℝ^n ∖ {(x_1, …, x_n) ∈ ℝ^n | x_i = x_j = x_k, i ≠ j ≠ k ≠ i}, which is the complement of the triple diagonals x_i = x_j = x_k in ℝ^n. For each 1 ≤ i ≤ n+1, let (p_i)_#: π_1(X_n+1) → π_1(X_n) be the group homomorphism induced by the coordinate projection p_i: X_n+1 → X_n, where p_i(x_1, …, x_n+1) = (x_1, …, x_i-1, x_i+1, …, x_n+1). By <cit.>, we identify π_1(X_n) with PT_n, and observe that (p_i)_# = d_i-1 for each 1 ≤ i ≤ n+1. In analogy with <cit.>, we define a bi-Δ-group structure on the family of twin groups. Consider the sequence of groups { T_n }_n ≥ 1. For each 0 ≤ i ≤ n, let d_i: T_n+1 → T_n be the map satisfying (<ref>) and d^i: T_n → T_n+1 the map defined by d^i(t_j) = t_j for j < i, d^i(t_j) = t_i+1 t_i t_i+1 for j = i, and d^i(t_j) = t_j+1 for j > i. Then { T_n, d_i, d^i }_n ≥ 1 is a bi-Δ-set and { PT_n, d_i, d^i }_n ≥ 1 is a bi-Δ-group. The face maps d_i: T_n+1 → T_n clearly satisfy (<ref>). Next, consider the coface maps d^i defined above. See Figure <ref> for a geometrical interpretation of these coface maps.
A direct computation, for j ≤ i, yields d^j d^i(t_k) = d^i+1 d^j(t_k), with the common value given by: t_k for k < j ≤ i; t_k+2 t_k+1 t_k t_k+1 t_k+2 for j = k = i; t_k+1 t_k t_k+1 for j = k < i; t_k+2 t_k+1 t_k+2 for j < k = i; t_k+1 for j < k < i; and t_k+2 for j ≤ i < k. This proves identity (2). The identities (3)-(5) follow from the geometrical interpretation of d_i and d^i. Hence, { T_n, d_i, d^i }_n ≥ 1 is a bi-Δ-set. We already noticed that, for each 0 ≤ i ≤ n, d_i(PT_n+1) ⊆ PT_n and d_i: PT_n+1 → PT_n is a group homomorphism. The inclusion d^i(PT_n) ⊆ PT_n+1 follows from the geometrical interpretation of the map d^i. Alternatively, for each n ≥ 1, let η^i: S_n → S_n+1 be the map defined by η^i(τ_j) = τ_j for j < i, η^i(τ_j) = τ_i+1 τ_i τ_i+1 for j = i, and η^i(τ_j) = τ_j+1 for j > i. As with d^i, each η^i respects the far commutativity and involutory relations among the generators of S_n. For the braid relations, we see that η^i(τ_k) η^i(τ_k+1) η^i(τ_k) equals τ_k τ_k+1 τ_k for k+1 < i, τ_k τ_k+1 τ_k+2 τ_k+1 τ_k for i = k, k+1, and τ_k+1 τ_k+2 τ_k+1 for k > i, which in each case equals η^i(τ_k+1) η^i(τ_k) η^i(τ_k+1); hence each η^i is a group homomorphism. The inclusion d^i(PT_n) ⊆ PT_n+1 then also follows from the commutativity of the square with horizontal maps d^i: T_n → T_n+1 and η^i: S_n → S_n+1 and vertical maps ν: T_n → S_n and ν: T_n+1 → S_n+1. Finally, we prove that each d^i is a group homomorphism at the level of twin groups itself. Clearly, (d^i(t_k))^2 = 1 for all i and k. Further, for k < ℓ with |k-ℓ| ≥ 2, we have d^i(t_k) d^i(t_ℓ) equal to t_k t_ℓ for ℓ < i; t_k t_ℓ+1 t_ℓ t_ℓ+1 for ℓ = i; t_k t_ℓ+1 for k < i < ℓ; t_k+1 t_k t_k+1 t_ℓ+1 for k = i; and t_k+1 t_ℓ+1 for i < k; in each case this equals d^i(t_ℓ) d^i(t_k). This proves that { PT_n, d_i, d^i }_n ≥ 1 is a bi-Δ-group. For each 0 ≤ i ≤ n, we can also define the coface maps d^i: T_n → T_n+1 by d^i(t_j) = t_j for j < i, d^i(t_j) = t_i t_i+1 t_i for j = i, and d^i(t_j) = t_j+1 for j > i. It can be verified that the analogue of Proposition <ref> holds with these coface maps. We now use the bi-Δ-set structure on {T_n}_n ≥ 1 to give a new presentation for T_n+1. We use the coface maps d^i as defined in Proposition <ref>. Let q_k := d^n-1 d^n-2 ⋯ d^k(t_k) for 1 ≤ k ≤ n-1 and q_n := d^n-2(t_n-1). Then T_n+1 admits a presentation with generating set {q_1, …, q_n} and the following defining relations: * q_i^2 = 1 for all i, * [q_i+1 q_i q_i+1, q_n] = 1 for i < n-1, * [q_i+1 q_i q_i+1, q_j+1 q_j q_j+1] = 1 for |i-j| ≥ 2 and i, j ≤ n-1. Using the simplicial identity d^j d^i = d^i+1 d^j for j ≤ i, we can assume that i_n-1 > i_n-2 > ⋯ > i_1 in the composite map d^i_n-1 d^i_n-2 ⋯ d^i_1. We see that q_k = d^n-1 d^n-2 ⋯ d^k(t_k) = t_n t_n-1 ⋯ t_k+1 t_k t_k+1 ⋯ t_n-1 t_n for 1 ≤ k ≤ n-1 and q_n = d^n-2(t_n-1) = t_n. A direct check gives t_k = q_k+1 q_k q_k+1 for each 1 ≤ k ≤ n-1, and hence {q_1, …, q_n} generates T_n+1. Further, the defining relations of T_n+1 in terms of the Coxeter generating set {t_1, …, t_n} translate into the defining relations listed above for the new generating set.§ BRUNNIAN TWINS In the influential work <cit.>, a connection has been established between certain quotients of the Brunnian braid groups of the 2-sphere and its higher homotopy groups. A pure twin is said to be Brunnian if it becomes trivial after removing any one of its strands. Let Brun(T_n) denote the set of all Brunnian twins on n strands. Brun(T_n) is a normal subgroup of PT_n. Indeed, for each 0 ≤ i ≤ n-1, let d_i: PT_n → PT_n-1 be the face map of Proposition <ref>. Since each d_i is a group homomorphism and Brun(T_n) = ⋂_i=0^n-1 ker(d_i), it follows that Brun(T_n) is a normal subgroup of PT_n.
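Both the face maps and the word problem in T_n are effectively computable, so the Brunnian condition can be tested mechanically. The following minimal sketch is our illustration, not part of the paper: a twin word is a list of generator indices (the letter j stands for t_j), delete_strand implements the strand deletion underlying d_i, and reduce_word applies Tits' solution to the word problem in right-angled Coxeter groups, cancelling equal letters separated only by letters that commute with them.

def delete_strand(word, i):
    """Image of a twin word under deletion of the (i+1)-st strand (0-indexed i)."""
    p = i + 1                 # current 1-indexed position of the deleted strand
    out = []
    for j in word:            # the letter t_j crosses strands j and j+1
        if j == p:            # deleted strand crosses its right neighbour
            p = j + 1
        elif j + 1 == p:      # deleted strand crosses its left neighbour
            p = j
        elif j + 1 < p:       # crossing entirely below the deleted strand
            out.append(j)
        else:                 # crossing above: indices shift down by one
            out.append(j - 1)
    return out

def reduce_word(word):
    """Reduce a word in T_n; the word is trivial iff the result is empty."""
    w = list(word)
    changed = True
    while changed:
        changed = False
        for a in range(len(w)):
            for b in range(a + 1, len(w)):
                if w[b] == w[a] and all(abs(w[c] - w[a]) >= 2 for c in range(a + 1, b)):
                    del w[b], w[a]          # cancel t_j ... t_j across commuting letters
                    changed = True
                    break
                if abs(w[b] - w[a]) < 2:    # a blocking letter: no later cancellation
                    break
            if changed:
                break
    return w

def is_brunnian(word, n):
    """A pure twin word on n strands is Brunnian iff every strand deletion is trivial."""
    return all(reduce_word(delete_strand(word, i)) == [] for i in range(n))

# Example: (t_1 t_2)^3 generates PT_3 and is Brunnian.
assert is_brunnian([1, 2, 1, 2, 1, 2], 3)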
Next, we attempt to understand the groups of Brunnian twins. For n ≥ 4, Brun(T_n) does not contain any element from {(t_i t_i+1)^3 | 1 ≤ i ≤ n-1}. Indeed, for n ≥ 4, removing a trivial strand from (t_i t_i+1)^3 gives a non-trivial twin, and hence the assertion follows. Brun(T_3) ≅ PT_3 ≅ ℤ. We already have Brun(T_3) ⊆ PT_3. By <cit.>, we know that PT_3 is the infinite cyclic group generated by (t_1 t_2)^3 (see Figure <ref>), and clearly (t_1 t_2)^3 ∈ Brun(T_3). In contrast, it is proved in <cit.> that Brun(B_3) is the commutator subgroup of the Artin pure braid group P_3. Brun(T_4) is a free group of infinite rank. By <cit.>, PT_4 is a free group of rank 7 generated by x_1 = (t_1 t_2)^3, x_2 = ((t_1 t_2)^3)^t_3, x_3 = ((t_1 t_2)^3)^t_3 t_2, x_4 = ((t_1 t_2)^3)^t_3 t_2 t_1, x_5 = (t_2 t_3)^3, x_6 = ((t_2 t_3)^3)^t_1, x_7 = ((t_2 t_3)^3)^t_1 t_2. Denote the generator (t_1 t_2)^3 of PT_3 by y. Direct computations show that the images of the x_i under the face maps d_i are as follows: d_0(x_1) = d_0(x_2) = d_0(x_3) = d_0(x_6) = d_0(x_7) = 1, d_0(x_4) = d_0(x_5) = y, d_1(x_1) = d_1(x_2) = d_1(x_4) = d_1(x_5) = d_1(x_7) = 1, d_1(x_3) = d_1(x_6) = y, d_2(x_1) = d_2(x_3) = d_2(x_4) = d_2(x_5) = d_2(x_6) = 1, d_2(x_2) = d_2(x_7) = y, d_3(x_2) = d_3(x_3) = d_3(x_4) = d_3(x_5) = d_3(x_6) = d_3(x_7) = 1, d_3(x_1) = y. For each generator x_i, let log_i(w) denote the sum of the powers of x_i in the word w. Then it follows that ker(d_0) = { w ∈ PT_4 | log_4(w) + log_5(w) = 0 }, ker(d_1) = { w ∈ PT_4 | log_3(w) + log_6(w) = 0 }, ker(d_2) = { w ∈ PT_4 | log_2(w) + log_7(w) = 0 }, ker(d_3) = { w ∈ PT_4 | log_1(w) = 0 }, and hence Brun(T_4) = ⋂_i=0^3 ker(d_i) = { w ∈ PT_4 | log_4(w) + log_5(w) = log_3(w) + log_6(w) = log_2(w) + log_7(w) = log_1(w) = 0 }. Clearly, Brun(T_4) is free, being a subgroup of the free group PT_4. We now find an infinite free basis for Brun(T_4). It follows from the preceding description of Brun(T_4) that the commutator subgroup of PT_4 is contained in Brun(T_4). In fact, the containment is strict, since x_4 x_5^-1 ∈ Brun(T_4) but x_4 x_5^-1 ∉ PT_4^'. Thus, PT_4/Brun(T_4) is a non-trivial abelian group. Let q: PT_4 → PT_4/Brun(T_4) be the quotient map with q(x_i) = y_i for 1 ≤ i ≤ 7. Since x_2 x_7^-1, x_3 x_6^-1, x_4 x_5^-1 ∈ Brun(T_4), the group PT_4/Brun(T_4) is generated by the set {y_1, y_2, y_3, y_4}. Note that x_i x_j^-1 ∉ Brun(T_4) for all i ≠ j ∈ {1,2,3,4} and x_i^k ∉ Brun(T_4) for k > 0. Thus, by the fundamental theorem for finitely generated abelian groups, PT_4/Brun(T_4) is a free abelian group of rank 4. Consider the short exact sequence 1 → Brun(T_4) → PT_4 → ℤ^4 → 1. We fix a Schreier system {x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 | k_1, k_2, k_3, k_4 ∈ ℤ} of coset representatives of Brun(T_4) in PT_4. This gives a free basis for Brun(T_4) consisting of elements of the form x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 x_1 (x_1^k_1+1 x_2^k_2 x_3^k_3 x_4^k_4)^-1, x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 x_2 (x_1^k_1 x_2^k_2+1 x_3^k_3 x_4^k_4)^-1, x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 x_3 (x_1^k_1 x_2^k_2 x_3^k_3+1 x_4^k_4)^-1, x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 x_4 (x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4+1)^-1, x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 x_5 (x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4+1)^-1, x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 x_6 (x_1^k_1 x_2^k_2 x_3^k_3+1 x_4^k_4)^-1, x_1^k_1 x_2^k_2 x_3^k_3 x_4^k_4 x_7 (x_1^k_1 x_2^k_2+1 x_3^k_3 x_4^k_4)^-1, for k_1, k_2, k_3, k_4 ∈ ℤ. This completes the proof.
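The kernel description obtained in the proof gives an immediate membership test for Brun(T_4). The following is a minimal sketch (ours, not from the paper), where a word in the free group PT_4 on x_1, …, x_7 is encoded as a list of signed indices, so that [4, -5] stands for x_4 x_5^-1:

def exponent_sum(word, i):
    """log_i(w): the sum of the exponents of x_i occurring in the word w."""
    return sum(1 if g == i else -1 if g == -i else 0 for g in word)

def is_brunnian_T4(word):
    """Test the four exponent conditions characterising Brun(T_4)."""
    log = lambda i: exponent_sum(word, i)
    return (log(4) + log(5) == 0 and
            log(3) + log(6) == 0 and
            log(2) + log(7) == 0 and
            log(1) == 0)

# x_4 x_5^{-1} lies in Brun(T_4), while x_1 does not.
assert is_brunnian_T4([4, -5]) and not is_brunnian_T4([1])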
Hence, PT_n-1×⋯× PT_n-1 is torsion free, and thereforePT_n / (T_n) is so. The second assertion is proven in the proof of Theorem <ref>.Describe the structure of the group PT_n/(T_n)for n ≥ 5. Recall from <cit.> thatthe virtual twin group VT_n on n ≥ 2 strands is the group generated by { t_1, …, t_n-1, ρ_1, …, ρ_n-1} and having the following defining relations:t_i^2 = 1 for1 ≤ i ≤n-1,t_i t_j=t_j t_ifor |i - j| ≥ 2, ρ_i^2 =1for 1 ≤ i ≤ n-1,ρ_iρ_j= ρ_jρ_i for |i - j| ≥ 2,ρ_iρ_i+1ρ_i= ρ_i+1ρ_iρ_i+1for 1 ≤ i ≤ n-2,ρ_i t_j=t_jρ_i for |i - j| ≥ 2,ρ_iρ_i+1 t_i=t_i+1ρ_i ρ_i+1for 1 ≤ i ≤ n-2. The group VT_n plays the role of virtual braid groups in the Alexander-Markov correspondence for the planar analogue of virtual knot theory. There is a surjective homomorphism μ:VT_n→ S_n given byμ(t_i) = μ(ρ_i) = (i, i+1)for all 1≤ i ≤ n-1. The kernel PVT_n of this surjection is called the pure virtual twin group on n strands. For each n≥ 2, wehave surjective homomorphisms d_n-1: PT_n → PT_n-1 and d_n-1: PVT_n → PVT_n-1 that delete the n-th strand from the diagram of a pure twin and pure virtual twin. In the reverse directions, we have homomorphisms d^n-1: PT_n-1→ PT_n and d^n-1: PVT_n-1→ PVT_n that add a trivial strand to the right side of the diagram. Further, we have d_n-1d^n-1= 𝕀_PT_n-1 and d_n-1 d^n-1= 𝕀_PVT_n-1. Setting U_n=(d_n-1) and V_n=(d_n-1), we have split short exact sequences1 → U_n → PT_n → PT_n-1→ 1and1 → V_n → PVT_n → PVT_n-1→ 1. In other words, PT_n ≅ U_n ⋊ PT_n-1 and PVT_n ≅ V_n ⋊ PVT_n-1. (T_n) is free for all n≥ 3.The map t_i↦ t_i gives an embedding of T_n into PVT_n <cit.>. Restricting to PT_n, this gives an inclusion ψ_n: PT_n → PVT_n such that the following diagram commutesPT_n [r, "d_n-1"] [d, "ψ_n"] PT_n-1[d, "ψ_n-1"] PVT_n [r, "d_n-1"]PVT_n-1. This givesU_n ≅ψ_n(U_n) = ψ_n ((d_n-1)) ≤(d_n-1) = V_n. Since V_n is free for n ≥ 2 <cit.>, it follows that U_n is also free. Note that the subgroup U_i=(d_i) is conjugate to U_n by the element t_n-1t_n-2⋯ t_i+1. Thus, U_i is free group for each 1 ≤ i ≤ n, and hence (T_n)=∩_i=1^n U_i is a free group.At this juncture, the ensuing problem naturally arises. Determine a free generating set for (T_n) for n ≥ 5. We conclude the section with a consequence of the Decomposition Theorem for bi-Δ-groups in our setting <cit.>. The pure twin group PT_n+1 is the iterated semi-direct product of subgroups { d^i_k d^i_k-1⋯ d^i_1((T_n-k+1))  |  0 ≤ i_1<i_2<⋯<i_k ≤ n and 0 ≤ k ≤ n }with the lexicographic order on the indexing set{ (i_k, i_k-1, …, i_1,i_0,i_0,…,i_0_n-ktimes ) | 0 ≤ i_1<i_2<⋯<i_k ≤ n and 0 ≤ k ≤ n }from the left, where i_0 is the blank symbol considered smaller than all other indices. For n=3, we haved^0(PT_3) = ⟨ (t_2t_3)^3 ⟩,d^1(PT_3) = ⟨ (t_2t_1t_2t_3)^3 ⟩,d^2(PT_3) = ⟨ (t_1t_3t_2t_3)^3 ⟩ ,d^3(PT_3) = ⟨ (t_1t_2)^3 ⟩. Observing the proof of <cit.>, we getPT_4 = (d_3)⋊⟨ (t_1t_2)^3 ⟩,where (d_3) is normal in PT_4 and ⟨ (t_1t_2)^3 acts on (d_3) via conjugation. At the second stage, we obtain(d_3) = ((d_2)∩ (d_3)) ⋊⟨ (t_1t_3t_2t_3)^3 ⟩,where (d_2)∩ (d_3) is normal in (d_3) and the subgroup ⟨ (t_1t_3t_2t_3)^3 ⟩ = ⟨ d^2((t_1t_2)^3) ⟩≤(d_3) acts on (d_2)∩ (d_3) via conjugation. At the third stage, we get(d_2)∩ (d_3) = ((d_1) ∩(d_2)∩ (d_3)) ⋊⟨ (t_2t_1t_2t_3)^3 ⟩,where (d_1)∩(d_2)∩ (d_3) is normal in (d_2)∩(d_3) and the subgroup ⟨ (t_2t_1t_2t_3)^3 ⟩ = ⟨ d^1((t_1t_2)^3) ⟩≤(d_2)∩(d_3) acts on (d_1)∩(d_2)∩ (d_3) via conjugation. 
Finally, we have(d_1)∩(d_2)∩ (d_3) = (T_4) ⋊⟨ (t_2t_3)^3⟩,where (T_4) is normal and the subgroup ⟨ (t_2t_3)^3⟩= ⟨ d^0((t_1t_2)^3) ⟩ acts on (T_4) via conjugation. Thus, we obtain the following decomposition of PT_4 asan iterated semi-direct productPT_4 =((((T_4) ⋊⟨ (t_2t_3)^3⟩) ⋊⟨ (t_2t_1t_2t_3)^3 ⟩) ⋊⟨ (t_1t_3t_2t_3)^3 ⟩) ⋊⟨ (t_1t_2)^3 ⟩.Similarly, there are 16 non-trivial terms in the decomposition of PT_5 with the leftmost term being the Brunnian subgroup (T_5).§ K-DECOMPOSABLE TWINS AND COHEN TWINSIn this section, we consider two generalisations of Brunnian twins. §.§ k-decomposable twinsWe begin with the following definition. A pure twin on n strands is said to be k-decomposable if it becomes trivial after removing any k of its strands. Clearly, a 1-decomposable twin is simply a Brunnian twin. Further, the set of all k-decomposable twins on n strands forms a normal subgroup of PT_n and we denote this subgroup by D_k,n. For w∈ PT_n and 1≤ i<j< k≤ n, let w_i,j,k be the pure twin obtained from w by deleting all the strands except those indexed i,j,k. We can still view each w_i,j,k as an element of PT_n by adding trivial (n-3) strands on its right. See Figure <ref> for an example for n=4. Using ideas from <cit.>, we prove the following result.For n ≥ 4,D_n-3,n= { w ∏_1≤ i<j< k≤ n (w_i, j, k^-1)^c_i,j,k | w∈ PT_n },where c_i,j,k∈ T_n is a coset representative of the permutation in T_n/PT_n≅ S_n which takes i , j, k to 1,2,3, respectively, and fix everything else.In view of Proposition <ref>, we have w_i,j,k∈(T_3). A direct check shows that for any w ∈ PT_n, the pure twinw ∏_1≤ i<j< k≤ n (w_i, j, k^-1)^c_i,j,kis a (n-3)-decomposable twin on n strands. Note that the map ϕ:PT_n →D_n-3,n given byϕ(w)=w ∏_1≤ i<j< k≤ n (w_i, j, k^-1)^c_i,j,kis a retraction, that is, the restriction of ϕ on D_n-3,n is the identity map. Hence, it follows that each element ofD_n-3,n arises in this fashion.(T_4)= {ww_1,2,3^-1 (w_1,2,4^-1)^t_3(w_1,3,4^-1)^t_2t_3(w_2,3,4^-1)^t_1t_2t_3 |  w∈ PT_4 }.Next, we describe a process of constructing D_k-1,n from D_k,n. Let w ∈ D_k,n and 1≤ i_1<i_2⋯< i_n-k+1≤ n. Let w_i_1,i_2,…,i_n-k+1 be the pure twin obtained from w by removing the k-1 strands except those indexed i_1,i_2,…,i_n-k+1. Since w ∈ D_k,n, we have w_i_1,i_2,…,i_n-k+1∈(T_n-k+1). The following result can be proved along the lines of Proposition <ref>.For n ≥ 4, D_k-1,n= { w ∏_1≤ i_1<i_2⋯< i_n-k+1≤ n (w_i_1,i_2,…,i_n-k+1^-1)^c_i_1,i_2,…,i_n-k+1 |  w ∈ D_k,n},where c_i_1,i_2,…,i_n-k+1∈ T_n is a coset representative of the permutation in T_n/PT_n≅ S_n which takes i_1,i_2,…,i_n-k+1 to 1,2,…, n-k+1, respectively, and fix everything else. Beginning with PT_n=D_n-2,n=D_n-1,n and iterating the procedure of constructing D_k-1,n from D_k,n, we can construct all Brunnian twins on n strands. §.§ Cohen twinsNext, we consider another generalisation of Brunnian twins motivated by an idea due to Fred Cohen <cit.>, and developed further for surface braid groups in <cit.>. Recall that, for 0 ≤ i ≤ n-1, the face map d_i:T_n → T_n-1 deletes the (i+1)-st strand from the diagram of a twin. Although d_i is not a group homomorphism,it satisfiesd_i(u w)= d_i(u)d_ν(u)(i+1)-1(w),where ν:T_n+1→ S_n+1 is the natural surjection.For an arbitrary u ∈ T_n-1, we ask whether there exists w ∈ T_n which is a solution of the system of equations{[ d_0(w)=u,; d_1(w)=u,; ⋮; d_n-1(w)=u. ]. 
Taking u=1 amounts to w ∈ T_n being a Brunnian twin.A twin w ∈ T_n is called a Cohen twin if d_0 (w)=d_1(w)=⋯=d_n-1 (w).For n ≥ 2, let us setCT_n={w ∈ T_n  |  d_0 (w)=d_1(w)=⋯=d_n-1 (w) } .In other words, a twin on n strands lie in CT_n if it gives the same twin on (n-1) strands after removing any one of its strands. For example, the twinδ_n:=(t_1 t_2⋯ t_n-1)(t_1 t_2⋯ t_n-2)⋯ (t_1t_2)t_1lies in CT_n for all n ≥ 2 and d_0(δ_n)=δ_n-1 (see Figure <ref>).Similarly, we defineCPT_n = CT_n ∩ PT_n={w ∈ PT_n  |  d_0 (w)=d_1 (w)=⋯=d_n-1 (w) } .We refer to elements of CPT_nas pure Cohen twins. For instance, the pure twinγ_n:=(t_1t_2⋯ t_n-1)^nlies in CPT_n for all n ≥ 2 and d_0(γ_n)=γ_n-1(see Figure <ref>). If ϕ, ψ: G → H are group homomorphisms, then their equalizer is the subgroup of G given by {g ∈ G  | ϕ(g)=ψ(g)}.Hence, CPT_nis a subgroup of PT_n being the equalizer of group homomorphisms d_0, d_1, …, d_n-1:PT_n→ PT_n-1.The following assertions hold: * For each 0 ≤ i ≤ n-1, d_i(CPT_n) ⊆ CPT_n-1 and the mapd_0=d_1=⋯=d_n-1:CPT_n→ CPT_n-1 is a group homomorphism.* The set CT_n is a subgroup of T_n. Moreover, for each 0 ≤ i ≤ n-1, d_i(CT_n) ⊆ CT_n-1 and the mapd_0=d_1=⋯=d_n-1:CT_n→ CT_n-1 is a group homomorphism. Let w ∈ CPT_n and 0 ≤ i ≤ n-1. Then, using(<ref>), we obtaind_j (d_i (w) )=d_j (d_0 (w) )=d_0 (d_j+1 (w) )=d_0 (d_i (w) )for each 0 ≤ j ≤ n-2, and henced_i(CPT_n) ⊆ CPT_n-1. That d_0=d_1=⋯=d_n-1:CPT_n→ CPT_n-1 is a group homomorphism follows from Proposition <ref>.For the second assertion, let u, w ∈ CT_n. By (<ref>), we haved_i(u w)=d_i(u) d_ν(u)(i+1)-1(w)=d_0(u) d_ν(u)(1)-1(w)=d_0(uw)for each 0 ≤ i ≤ n-1, and hence u w ∈ CT_n. Further, the equation1=d_i(u^-1 u)=d_i(u^-1) d_ν(u^-1)(i+1)-1(u)=d_i(u^-1) d_0(u),givesd_i(u^-1)=(d_0(u))^-1for each 0 ≤ i ≤ n-1, and hence CT_n is a subgroup of T_n. The proof of d_i(CT_n) ⊆ CT_n-1follows from <ref>. Finally,(<ref>) also shows that d_0=d_1=⋯=d_n-1: CT_n → CT_n-1 is a group homomorphism.CPT_n is an index two subgroup of CT_n for n ≥ 3.The topological interpretation of elements of T_n can be applied to elements of S_n as well by allowing triple intersection points. Thus, for each 0 ≤ i ≤ n-1, there is a map d̅_̅i̅: S_n → S_n-1 (thought of as deleting the (i+1)-st strand) such the following diagram commutesPT_n T_n S_n PT_n-1 T_n-1 S_n-1[hook, from=1-1, to=1-2] [hook, from=2-1, to=2-2] ["ν_n",two heads, from=1-2, to=1-3] ["ν_n-1",two heads, from=2-2, to=2-3] ["d_i", from=1-1, to=2-1] ["d_i", from=1-2, to=2-2] ["d̅_̅i̅", from=1-3, to=2-3].Set CS_n:=ν_n(CT_n) for each n ≥ 2. Note that CS_2=ν_2(T_2)=S_2 ≅ℤ_2. The commutativity of the preceding diagram shows that every τ∈ CS_n satisfy d̅_̅0̅(τ)=d̅_̅1̅(τ)= ⋯= d̅_n-1(τ). By Proposition <ref>(2), we have d_0(CT_n)⊆ CT_n-1. The commutativity of the preceding diagram implies that d̅_̅0̅(CS_n)= d̅_̅0̅ν_n(CT_n)= ν_n-1d_0(CT_n) ⊆ν_n-1(CT_n-1)= CS_n-1.Thus, for n≥ 3,the restriction of the map d̅_̅0̅:S_n → S_n-1induces a map d̅_̅0̅ :CS_n→ CS_n-1 such that (d̅_̅0̅)=∩_i=0^n-1(d̅_̅i̅). Direct computation gives (d̅_̅0̅)=1, and hence the map d̅_̅0̅⋯d̅_̅0̅:CS_n→ CS_2 is injective. Since ν_n(δ_n)≠ 1, we have CT_n/CPT_n ≅ CS_n≅ℤ_2, and the proof is complete.The following result follows along the lines of <cit.>. For each 1 ≤ k ≤ n-1, the mapd_0 ⋯ d_0_(n-k) times:CPT_n → CPT_kis surjective. In particular, d_0:CPT_n → CPT_n-1 is surjective for n ≥ 2.Let us set d_n-k,n=d_0 ⋯ d_0_(n-k) times. We use induction on k. Clearly, for k=1, the map d_n-1,n:CPT_n→ CPT_1 is surjective. 
Assume that d_n-k+1, n is surjective with k>1, and let w ∈ CPT_k. Case 1: Suppose that w ∈(d_0:CPT_k → CPT_k-1). Then consider the elementw_k, n=∏_0 ≤ i_1<i_2<⋯<i_n-k≤ n-1 d^i_n-k d^i_n-k-1⋯ d^i_1 (w)ofPT_n with lexicographic order on the indices from the right. Since w ∈(d_0:CPT_k → CPT_k-1), a straightforward computation shows that w_k,n∈ CPT_n and d_n-k,n(w_n,k)=w. For instance, taking n=4 and k=1, we havew_1, 4=∏_0 ≤ i_1<i_2<i_3≤ 3 d^i_3 d^i_2 d^i_1 (w)with lexicographic order from the right. Note that (i_1, i_2, i_3)∈{(0,1,2), (0,1,3), (0,2,3), (1,2,3)} andw_1,4 = d^2d^1d^0(w)  d^3d^1d^0(w)  d^3d^2d^0(w)  d^3d^2d^1(w). Direct computations gived_0(w_1,4)=d^1d^0(w)  d^2d^0(w)  d^2d^1(w)  d^2d^1d^0(d_0(w)),d_1(w_1,4)=d^1d^0(w)  d^2d^0(w)  d^2d^1d^0(d_0(w))  d^2d^1(w),d_2(w_1,4)=d^1d^0(w)  d^2d^1d^0(d_0(w))  d^2d^0(w)  d^2d^1(w),d_3(w_1,4)=d^2d^1d^0(d_0(w))  d^1d^0(w)  d^2d^0(w)  d^2d^1(w).Since w ∈(d_0:CPT_k → CPT_k-1), d^2d^1d^0(d_0(w))=1, and hence w_1,4∈ CPT_4.Case 2: Now, suppose that 1 δ = d_0(w) ∈ CPT_k-1. By induction hypothesis, there exists γ∈ CPT_n such that d_n-k+1, n(γ)= d_0(d_n-k,n(γ))=δ. Note thatw d_n-k,n(γ)^-1∈(d_0:CPT_k → CPT_k-1). Thus, byCase 1, there exists λ∈ CPT_n such thatd_n-k,n(λ)=wd_n-k,n(γ)^-1,and henced_n-k,n(λγ) = w. This proves that the map d_n-k, n is surjective. The map d_0:CT_n → CT_n-1 is surjective for each n ≥ 2. In view ofProposition <ref>, we can write CT_n-1= CPT_n-1∪δ_n-1CPT_n-1. Let us take w ∈ CT_n-1. If w ∈ CPT_n-1,then by Proposition <ref>, there exists an u ∈ CPT_n such that d_0(u)=w. If w ∈δ_n-1CPT_n-1, then again by Proposition <ref>, there exists v ∈ CPT_n, such that d_0(v)=δ_n-1^-1w, and hence d_0(δ_n v)=w. This complete the proof. Thus, we obtain the following short exact sequences1→(T_n) → CT_n → CT_n-1→ 1and 1→(T_n) → CPT_n → CPT_n-1→ 1.Observe that CPT_2=(T_2)=PT_2=1 and CPT_3=(T_3)=PT_3=⟨ (t_1t_2)^3⟩≅ℤ. Thus, the preceding exact sequence gives CPT_4=(T_4)⋊⟨ (t_1t_2)^3 ⟩. For each u ∈ PT_n-1 or u ∈ T_n-1, the system of equations{[ d_0(w)=u,; d_1(w)=u,; ⋮; d_n-1(w)=u, ]. has a solution if and only if u satisfies the conditiond_0(u)=d_1(u)=⋯=d_n-2(u). Let u ∈ PT_n-1 such thatthe system of equations (<ref>) has a solution. Then there exists w ∈ PT_n such that d_0 (w)=⋯=d_n-1 (w)=u. It follows from Proposition <ref> that u ∈ CPT_n-1, and hence d_0( u)=⋯=d_n-2(u). Conversely, suppose that d_0 (u)=⋯=d_n-2 (u), that is, u ∈ CPT_n-1. By Proposition <ref>, d_0:CPT_n → CPT_n-1 is surjective, and hence there exists w ∈ CPT_n which is a solutionto (<ref>). The proof for the case when u ∈ T_n-1 is similar. § BRUNNIAN DOODLES ON THE 2-SPHERE Note that the closure of a Brunnian braid is a Brunnian link. The converse is not true and there exist Brunnian links that cannot be obtained as the closure of Brunnian braids (see <cit.>). The same scenario occurs with doodles on the 2-sphere. Consider the Brunnian doodle on the 2-sphere as shown in Figure <ref>. The main result of this section will show that this Brunnian doodle cannot be realised as the closure of a Brunnian twin. A doodle diagram on the 2-sphere is called minimal if it has no monogons and bigons. <cit.> Any doodle has a unique (up to the transformation shown in Figure <ref>) minimal doodle diagram with a minimal number of intersection points. 
Further, this minimal doodle diagram can be constructed from any other doodle diagram by applying Reidemeister moves R1 and R2 that reduce the number of intersection points.For a given reduced word w=t_i_1… t_i_k∈ T_n, let ℓ(w)=k be the length of w.For each 1≤ i≤ n-1, if log_i(w) denote the number of t_i's present in the expression w, then ℓ(w) =∑_i=1^n-1log_i(w).A cyclic permutation of a wordw=t_i_1… t_i_k∈ T_n (not necessarily reduced) is a word w'=t_i_rt_i_r+1… t_i_kt_i_1 t_i_2⋯ t_i_r-1 for some 1≤ r≤ k. It is easy to see that w and w' are conjugate to each other in T_n, in fact, w'=(t_i_1 t_i_2… t_i_r-1)^-1w(t_i_1 t_i_2… t_i_r-1). A word w is called cyclically reduced if each cyclic permutation of w is reduced. Clearly, a cyclically reduced word is reduced.Let w ∈ PT_n be a pure twin. Then the following assertions hold: * If ℓ(w) is minimal among all the elements in the conjugacy class of w, then the closure of w is a minimal doodle diagram.* The closure of w is an n-component trivial doodle if and only if w is a trivial twin.It follows from <cit.> that each word in T_n is conjugate to some cyclically reduced word. Since ℓ(w) is minimal among all the elements in the conjugacy class of w, it follows that w is a cyclically reduced word. Hence, the closure of w has no bigons. Since w is pure twin, its closure has no monogons, and hence the diagram is minimal. By Markov Theorem for doodles on the 2-sphere <cit.>, conjugate twins have the same closure. Thus, we can assume that ℓ(w) is minimal among all the elements in the conjugacy class of w. It follows from assertion (1) that the closure of w is a minimal doodle diagram. Note that the number of double points in the closure of the twin w equals ℓ(w), and hence ℓ(w)=0. But, this implies that w is trivial twin. The converse implication in assertion (2) is obvious. Let w denote the closure of a twin w on the 2-sphere. By <cit.>, every oriented doodle on the 2-sphere is the closure of a twin. The twin index I(D) of a doodle D on the 2-sphere is the minimal n such that there is a twin w ∈ T_n whose closure is equivalent to D.An m-component Brunnian doodle D on the 2-sphere is the closure of a Brunnian twin if and only if I(D) = m.If u is a Brunnian twin on m strands, then its closure on the 2-sphere is a Brunnian doodle on m components with I(u)=m. Conversely, if D is a Brunnian doodle on m components and I(D)=m, then there exist w ∈ PT_m such that w=D. Removing any strand from w corresponds to removing a component from D. Thus, d_i(w) is a trivial doodle for each i. By Lemma <ref>, d_i(w)=1 for each i, and hence w is a Brunnian twin.An analogue of Theorem <ref> for Brunnian links in S^3 is proved in <cit.>.§ SIMPLICIALSTRUCTURE ON PURE TWIN GROUPSIn this section, we discuss simplicial structures on twin and pure twin groups and relate them with Milnor's construction for simplicial spheres. §.§ Simplicial sets and simplicial groups We recall some basic definitions and constructions <cit.>. A sequence of sets X_* = { X_n }_n ≥ 0is called a simplicial set if there are face mapsd_i : X_n ⟶ X_n-1  0 ≤ i ≤ nand degeneracymapss_i : X_n ⟶ X_n+1  0 ≤ i ≤ n,which satisfy the following simplicial identities: * d_i d_j = d_j-1 d_i if i < j,* s_i s_j = s_j+1 s_i if i ≤ j,* d_i s_j = s_j-1 d_i if i < j,* d_j s_j = 𝕀 = d_j+1 s_j,* d_i s_j = s_j d_i-1 if i > j+1.We view X_n geometrically as the set of n-simplices including all possible degenerate simplices. 
Here, a simplex x is degenerate if x = s_i (y) for some simplex y and degeneracy operator s_i, otherwise x is non-degenerate.A simplicial set X_* is pointed if we fix a basepoint ⋆∈ X_0 that creates one and only one degenerate n-simplex in each X_n by applying iterated degeneracy operations on it. A simplicial groupis a simplicial set X_* such that each X_n is a group and all face and degeneracy maps are group homomorphisms. In the context of braid-type groups (for example, braid group B_n, virtual braid group VB_n, welded braid group WB_n, etc.), the maps d_i usually represents deleting of the (i+1)-th strand and s_i represents doubling of the (i+1)-th strand.Note that the defining identitiesof a bi-Δ-set and that of a simplicial set aresimilar. The only differences are that we don't haved_j+1 s_j= 𝕀 for bi-Δ-sets, and when viewed as maps from X_n- 1→ X_n, the number of degeneracy maps is one less than the number of coface maps. We have used the bi-Δ-set structure at three instances in the preceding sections. The first instance of usage of a bi-Δ-set is Proposition <ref>, though its arguments can be modified to adapt to a simplicial set structure. The second instance is the proof of Proposition <ref>, where we defined the element w_k,n and showed that w_k,n∈ CPT_n. In the latter case, a simplicial structure would not be helpful. Finally, using the Decomposition Theorem forbi-Δ-groups, we have given a decomposition of pure twin groups in Proposition <ref> with Brunnian subgroups as constituents.Let G_* = { G_n }_n ≥ 0 be a simplicial group. The group of Moore n-cycles Z_n(G_*)≤ G_n is defined byZ_n(G_*)=⋂_i=0^nKer(d_i G_n→ G_n-1)and the group of Moore n-boundaries B_n(G_*)≤ G_n is defined byB_n(G_*)=d_0(⋂_i=1^n+1Ker(d_i G_n+1→ G_n)).Simplicial identities guarantees that B_n(G_*) is a (normal) subgroup of Z_n(G_*) (see <cit.> or <cit.>). The n-th Moore homotopy group π_n(G_*) of G_* is defined byπ_n(G_*)=Z_n(G_*)/B_n(G_*).It is a classical result due to Moore <cit.> that π_n(G_*) ≅π_n(|G_*|), where |G_*| is the geometric realisation of G_*. A simplicial group G_* is called contractible if π_n (G_*) = 1 for all n>0.Milnor's F[K] construction is the adjoint functor to the forgetful functor from the category of pointed simplicial groups to the category of pointed simplicial sets. For a given pointed simplicial set K_* = { K_n, ⋆}_n ≥ 0, Milnor's F[K] construction is the simplicial group with F[K]_n = F(K_n ∖⋆), the free group on K_n ∖⋆, with the face and the degeneracy maps induced from the face and degeneracy maps of K_*. It is well-known from <cit.> that there is weak homotopy equivalence|F[K]_*| ≃ΩΣ|K_*|,where |X_*| denotes the geometric realisation of a simplicial set X_*. Here, Ω Z is the loop space of all based loops in a pointed topological space Z and Σ Zis the reduced suspension of Z. Consider the pointed simplicial 2-sphere S^2 = Δ[2] / ∂Δ[2] withS^2_0 = {⋆},  S^2_1 = {⋆},  S^2_2 = {⋆, σ},  S^2_3 = {⋆, s_0 (σ),s_1(σ), s_2 (σ)}, …,S^2_n = {⋆, x_ij | 0 ≤ i < j ≤ n-1 }, …where σ = (0,1,2) is the non-degenerate 2-simplex, x_ij = s_n-1… s_j+1s_j s_j-1… s_i+1s_i s_i-1… s_0 (σ) and s_k means that the degeneracy map s_k is omitted. Then F[S^2] construction has the following terms:F[S^2]_0= 1,F[S^2]_1 = 1, F[S^2]_2 = F(σ),F[S^2]_3 = F(s_0 (σ),s_1 (σ), s_2 (σ)),F[S^2]_4 = F(s_1s_0 (σ),s_2s_0 (σ), s_3s_0 (σ), s_2s_1 (σ), s_3s_1 (σ), s_3s_2 (σ)), ⋮ F[S^2]_n = F(x_ij;  0 ≤ i < j ≤ n-1), ⋮ For each n ≥ 2, the group F[S^2]_n is a free group of rank n(n-1)/2. 
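The rank count n(n-1)/2 can be made concrete: the n-simplices of Δ[2]/∂Δ[2] other than the basepoint correspond to weakly increasing sequences of length n+1 on {0,1,2} that use all three labels (exactly the sequence notation adopted in the next paragraph). A quick enumeration, with our own helper name:

```python
from itertools import combinations_with_replacement

def sphere_simplices(n):
    """Non-basepoint n-simplices of S^2 = Delta[2]/boundary, encoded as
    weakly increasing sequences on {0,1,2} that use all three labels."""
    return [s for s in combinations_with_replacement((0, 1, 2), n + 1)
            if set(s) == {0, 1, 2}]

for n in range(2, 12):
    assert len(sphere_simplices(n)) == n * (n - 1) // 2
print(sphere_simplices(3))   # [(0, 0, 1, 2), (0, 1, 1, 2), (0, 1, 2, 2)]
```

For n = 3 the three printed sequences match the three generators s_0(σ), s_1(σ), s_2(σ) of F[S^2]_3 listed above, in the sequence notation introduced in the next paragraph.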
In this construction of the simplicial 2-sphere, it is convenient to present the degeneracy map s_i is a doubling of the (i+1)-th component and the face map d_i as deletion of the (i+1)-th component. For example,s_0 (σ) = (0,0,1,2), s_1 (σ) = (0,1,1,2),s_2 (σ) = (0,1,2,2), s_1 s_0 (σ) = (0,0,0,1,2),s_2 s_0 (σ) = (0,0,1,1,2),s_3 s_0 (σ) = (0,0,1,2,2), s_2 s_1 (σ) = (0,1,1,1,2),s_3 s_1 (σ) = (0,1,1,2,2),s_3 s_2 (σ) = (0,1,2,2,2).The face and degeneracy maps are determined with respect to the standard simplicial identities for simplicial groups. For example, the first non-trivial face maps d_i : F[S^2]_3 → F[S^2]_2 are given byd_0 : s_0 (σ)↦σ, s_1 (σ)↦⋆, s_2(σ)↦⋆, d_1 : s_0 (σ)↦σ, s_1 (σ)↦σ, s_2 (σ)↦⋆, d_2 : s_0 (σ)↦⋆, s_1 (σ)↦σ, s_2 (σ)↦σ, d_3 : s_0 (σ)↦⋆, s_1 (σ)↦⋆,s_2(σ)↦σ.Milnor's construction gives a possibility to define the homotopy groups π_n(S^3) combinatorially, in terms of free groups. By (<ref>), the geometric realisation of F[S^2]_* is weakly homotopically equivalent to the loop space Ω S^3. Thus, the homotopy groups of S^3 are isomorphic to the Moore homotopy groups of F[S^2], that is,π_n+1(S^3) ≅ Z_n (F[S^2]_*) / B_n (F[S^2]_*).§.§ Simplicial pure twin group By <cit.>, we have PT_3 = ⟨ (t_1 t_2)^3 ⟩≅ℤ and PT_4 ≅ F_7, where F_7 is the free group on the elementsx_1 = (t_1 t_2)^3, x_2 = ( (t_1 t_2)^3 )^t_3,x_3 = ( (t_1 t_2)^3)^t_3 t_2, x_4 = ( (t_1 t_2)^3 )^t_3 t_2 t_1, x_5 = (t_2 t_3)^3,x_6 = ((t_2 t_3)^3 )^t_1,x_7 = ((t_2 t_3)^3)^t_1 t_2. Let SPT_*= {SPT_n}_n ≥ 0, where SPT_n=PT_n+1 for each n≥0. Following the methodology of <cit.>, consider the sequence of groups… ⟶ … ⟶ ⟵ … ⟵ PT_4⟶ ⟶ ⟶ ⟶ ⟵ ⟵ ⟵ PT_3 ⟶ ⟶ ⟶ ⟵ ⟵ PT_2 ⟶ ⟶ ⟵ PT_1with face and degeneracy homomorphismsd_i: SPT_n=PT_n+1→ SPT_n-1=PT_n, s_i: SPT_n=PT_n+1→ SPT_n+1=PT_n+2,where the face map d_i is the deleting of the (i+1)-th strand and the degeneracy map s_i is the doubling of the (i+1)-th strand for each 0 ≤ i ≤ n. For example, we prove in the proof of Proposition <ref> that d_3 : PT_4 → PT_3 is given byd_3(x_1) = y and d_3(x_2) = d_3(x_3) = d_3(x_4) = d_3(x_5) = d_3(x_6) = d_3(x_7) =1,where y=(t_1 t_2)^3 ∈ PT_3. As in the classical case it is not difficult to prove the following result, whose proof is adapted from <cit.>. SPT_* is a contractible simplicial group.Let x ∈ Z_n(SPT_*) be a Moore n-cycle, that is, x ∈ SPT_n and d_i(x) = 1 for all 0 ≤ i ≤ n. Note that SPT_* admits an additional degeneracy map ι_n+1: SPT_n→ SPT_n+1, which adds a trivial strand on the left of the diagram of the twin. If we set y = ι_n+1(x) ∈ SPT_n+1, then we see that d_j(y) = 1 for all 1 ≤ j ≤ n+1 and d_0(y) = x. Thus, x ∈ B_n(SPT_*) is a Moore n-boundary, and hence π_n(SPT_*) = 1 for all n. We write U_n,i := Ker(d_i : PT_n → PT_n-1) for each 0 ≤ i ≤ n-1. Then, we have the following short exact sequence1U_n,i PT_n PT_n-11 [from=1-1, to=1-2] [from=1-2, to=1-3] ["d_i", from=1-3, to=1-4] [from=1-4, to=1-5]with the splitting given by d^i: PT_n-1→ PT_n as defined in Proposition <ref>. This gives asemi-direct productdecomposition PT_n = U_n,i⋊ PT_n-1. Clearly, U_3,0=U_3,1=U_3,2=PT_3. The following problem seems interesting.Find presentations of U_n,i for n ≥ 4. We construct a simplicial subgroup K_* of SPT_* which would be the image of the simplicial sphere S^2 under a simplicial map. Put K_0 = K_1 = 1, K_2 = SPT_2 = ⟨ c_111⟩, the infinite cyclic group generated by c_111= (t_1 t_2)^3, andK_3 =⟨ c_211 = s_0 (c_111),   c_121 = s_1 (c_111),  c_112 = s_2 (c_111) ⟩. 
In general, we defineK_n = ⟨ c_klm = s_n-1… s_j+1s_j s_j-1… s_i+1s_i s_i-1… s_0 (c_111) | 0 ≤ i < j ≤ n-1,  k+l+m = n+1 ⟩,the subgroup of SPT_n generated by n(n-1)/2 elements. It follows from the simplicial identities that d_i(c_klm)∈ K_n-1 ands_j(c_klm)∈ K_n+1 for each generator c_klm of K_n and all d_i, s_j. Thus, for each n ≥ 0, restriction of face maps d_i: SPT_n → SPT_n-1 gives face maps d_i:K_n → K_n-1. Similarly, restriction of degeneracy maps s_i: SPT_n → SPT_n+1 induce degeneracy maps s_i: K_n → K_n+1, turning K_*={K_n}_n ≥ 0 into a simplicial subgroup of SPT_*.K_3 ≅ F[S^2]_3 andK_4 ≅ F[S^2]_4.Using the geometrical interpretation of c_111 (see Figure <ref>) and degeneracy maps s_i, we write the generators of K_3 in terms of the generators of PT_4 as follows:c_211 = (t_2 t_1 t_3 t_2 ) (t_1 t_2 t_3 )( t_1 t_2 t_3) = ( (t_1 t_2)^3 )^t_3 t_2(t_2 t_3)^3 = x_3 x_5, c_121 = (t_1 t_2 t_3 )( t_2 t_1 t_3 t_2 )( t_1 t_2 t_3) = ( (t_2 t_3)^3 )^t_1( (t_1 t_2)^3 )^t_3 = x_6 x_2, c_112 =(t_1 t_2 t_3 )( t_1 t_2 t_3 )( t_2 t_1 t_3 t_2) = (t_1 t_2)^3 ( (t_2 t_3)^3 )^t_1 t_2= x_1 x_7.Since PT_4 is a free group of rank 7, it follows that K_3 is a free group of rank 3, and hence K_3 ≅ F[S^2]_3. It is known from <cit.> that SPT_4 = PT_5 is free group of rank 31, but <cit.> does not give any free generating set for PT_5. However, using <cit.>, we obtain a generating set for PT_5 of cardinality 43. By removing the redundant generators, we obtain the following minimial generating set for PT_5:3a_1=(t_1 t_2)^3 a_2=((t_1 t_2)^3)^t_3 a_3=((t_1 t_2)^3)^t_3 t_2 a_4=((t_1 t_2)^3)^t_3 t_2 t_1 a_5=((t_1 t_2)^3)^t_3 t_2 t_1 t_4 t_3 t_2 a_6=((t_1 t_2)^3)^t_3 t_2 t_1t_4 t_3 a_7=((t_1 t_2)^3)^t_3 t_2 t_1 t_4 a_8=((t_1 t_2)^3)^t_3 t_2 t_4 t_3 a_9=((t_1 t_2)^3)^t_3 t_4 t_3 t_2 a_10=((t_1 t_2)^3)^t_3 t_4a_11=(t_2 t_3)^3 a_12=((t_2 t_3)^3)^t_1 a_13=((t_2 t_3)^3)^t_1 t_2a_14=((t_2 t_3)^3)^t_4 t_3 t_2 t_1 a_15=((t_2 t_3)^3)^t_4 t_3 t_2 a_16=((t_2 t_3)^3)^t_4 t_3 a_17=((t_2 t_3)^3)^t_4 a_18=((t_2 t_3)^3)^t_1 t_2 t_4 t_3 t_2 t_1 a_19=((t_2 t_3)^3)^t_1 t_2 t_4 t_3 t_2 a_20=((t_2 t_3)^3)^t_1 t_2 t_4 t_3 a_21=((t_2 t_3)^3)^t_1 t_2 t_4 a_22=((t_2 t_3)^3)^t_1 t_4 t_3 t_2 t_1 a_23=((t_2 t_3)^3)^t_1 t_4 t_3 t_2 a_24=((t_2 t_3)^3)^t_1 t_4 t_3 a_25=((t_2 t_3)^3)^t_1t_4 a_26=((t_3 t_4)^3) a_27=((t_3 t_4)^3)^t_2t_1t_3t_2 a_28=((t_3 t_4)^3)^t_2t_1t_3 a_29=((t_3 t_4)^3)^t_2t_1 a_30=((t_3 t_4)^3)^t_2t_3 a_31=((t_3 t_4)^3)^t_2 By definition, we have K_4 = ⟨ s_1s_0(c_111), s_2s_0(c_111), s_3s_0(c_111), s_2s_1(c_111), s_3s_1(c_111),s_3s_2(c_111) ⟩. Direct calculation givess_1(x_3 x_5) = a_8 a_16 a_26,s_2(x_3 x_5) =a_23 a_9 a_31 a_17,s_3(x_3 x_5) = a_3 a_19 a_11 a_30,s_2(x_6 x_2) = a_29 a_25 a_10,s_3(x_6 x_2) =a_12 a_28 a_2 a_20,s_3(x_1x_7) = a_1 a_13 a_27. Thus,K_4 is free of rank 6, and hence K_4 ≅ F[S^2]_4. Determine a presentation of K_n for n ≥ 4. Weconsider c_111 as a 2-simplex in the simplicial group SPT_*. Since d_0 (c_111) = d_1 (c_111 )= d_2 (c_111) = 1, there is a (unique) simplicial mapθ : S^2 → SPT_*such that θ(σ)=c_111, where σ=(0,1,2) is the non-degenerate 2-simplex of the simplicial sphere S^2. By Milnor'sconstruction, the simplicial map θ extends uniquely to a simplicial homomorphismΘ : F[S^2]_* ⟶SPT_*.We note thatK_*=Θ(F[S^2]_*) and it is the smallest simplicial subgroup of SPT_* containing c_111. Further, by Proposition <ref>,Θ_n : F[S^2]_n ⟶ SPT_nis injective for n≤ 4. If each Θ_n:F[S^2]_n → SPT_n is injective, then by (<ref>), we have π_n+1 (S^3) ≅ Z_n(F[S^2]_*)/B_n(F[S^2]_*) ≅ Z_n(K_*)/B_n(K_*) ≅π_n (K_*). 
Thus, if Θ is injective, then we can describe π_n+1(S^3) as a quotient of a subgroup ofPT_n+1. For instance, the generator of π_3(S^3)≅ℤ can be represented by the pure twin (t_1t_2)^3. It appears that the following holds. Θ : F[S^2]_* ⟶K_* is an isomorphism. Valeriy Bardakov is supported by the state contract of the Sobolev Institute of Mathematics, SB RAS (No. I.1.5, Project FWNF-2022-0009). Pravin Kumar is supported by the PMRF fellowship at IISER Mohali. Mahender Singh is supported by the Swarna Jayanti Fellowship grants DST/SJF/MSA-02/2018-19 and SB/SJF/2019-20/04.§ DECLARATIONThe authors declare that there is no data associated to this paper and that there are no conflicts of interests. plain1MR2966697 V. G. Bardakov, R. Mikhailov, V. Vershinin and J. Wu,Brunnian braids on surfaces, Algebr. Geom. Topol. 12 (2012), no. 3, 1607–1648. BW2 V. G. Bardakov and J. Wu,Lifting theorem for the virtual pure braid groups, Chinese Ann. Math. Ser. B (2023),DOI: 10.1007/s11401-007-0001-x.MR4027588V. Bardakov, M. Singh and A. Vesnin,Structural aspects of twin and pure twin groups, Geom. Dedicata 203 (2019), 135–154.MR3482589 V. G. Bardakov, V. V. Vershinin and J. Wu, On Cohen braids, Proc. Steklov Inst. Math. 286 (2014), no. 1, 16–32.MR3876348 A. Bartholomew, R. Fenn, N. Kamada and S. Kamada,Doodles on surfaces, J. Knot Theory Ramifications 27 (2018), no. 12, 1850071, 26 pp.MR2188127 A. J. Berrick,F. R. Cohen,Y. L. Wong and J. Wu, Configurations, braids, and homotopy groups, J. Amer. Math. Soc. 19 (2006), no. 2, 265–326.MR3108834 A. J. Berrick, E. Hanbury and J. Wu, Brunnian subgroups of mapping class groups and braid groups, Proc. Lond. Math. Soc. (3) 107 (2013), no. 4, 875–906. MR3152716 A. J. Berrick, E. Hanbury and J. Wu, Delta-structures on mapping class groups and braid groups, Trans. Amer. Math. Soc. 366 (2014), no. 4, 1879–1903.MR1317619 A. Björner and V. Welker, The homology of “k-equal" manifolds and related partition lattices, Adv. Math. 110 (1995), no. 2, 277–313. MR1349129 F. R. Cohen, On combinatorial group theory in homotopy. Homotopy theory and its applications (Cocoyoc, 1993), 57–63, Contemp. Math., 188, Amer. Math. Soc., Providence, RI, 1995. MR2853222 F. R. Cohen and J. Wu, Artin's braid groups, free groups, and the loop space of the 2-sphere, Q. J. Math. 62 (2011), no. 4, 891–921.MR3200492 F. Duzhin and S. M. Z. Wong, On two constructions of Brunnian links, J. Knot Theory Ramifications 23 (2014), no. 3, 1420002, 6 pp.MR0547452 R. Fenn and P. Taylor, Introducing doodles, Topology of low-dimensional manifolds (Proc. Second Sussex Conf., Chelwood Gate, 1977), pp. 37–43, Lecture Notes in Math., 722, Springer, Berlin, 1979.MR2915498 G. Friedman, Survey article: An elementary illustrated introduction to simplicial sets, Rocky Mountain J. Math.42(2012), no.2, 353–423.MR4170471 J. González, J. L. León-Medina and C. Roque-Márquez,Linear motion planning with controlled collisions and pure planar braids, Homology Homotopy Appl. 23 (2021), no. 1, 275–296.Gotin K. Gotin, Markov theorem for doodles on two-sphere, (2018), arXiv:1807.05337. MR1370644 M. Khovanov, Doodle groups, Trans. Amer. Math. Soc. 349 (1997), 2297–2315.Khovanov1990M. Khovanov, New geometrical constructions in low-dimensional topology, (1990), preprint.MR1386845 M.Khovanov,Real K(π,1) arrangements from finite root systems, Math. Res. Lett. 3 (1996), 261–274.MR3180740 F. Lei, F. Li and J. Wu, On simplicial resolutions of framed links, Trans. Amer. Math. Soc. 366 (2014), 3075–3093.MR324684 H. 
Levinson,Decomposable braids and linkages, Trans. Amer. Math. Soc. 178 (1973), 111–126.MR362287 H. Levinson, Decomposable braids as subgroups of braid groups, Trans. Amer. Math. Soc. 202 (1975), 51–55.MR2551462 J. Li and J. Wu, Artin braid groups and homotopy groups, Proc. Lond. Math. Soc. (3) 99 (2009), no. 3, 521–556.MR1822143 A. Maes and C. Cerf,A family of Brunnian links based on Edwards' construction of Venn diagrams, J. Knot Theory Ramifications 10 (2001), no. 1, 97–107.MR0222892 J. P. May, Simplicial objects in algebraic topology, Van Nostrand Mathematical Studies, No. 11 D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto, Ont.-London 1967 vi+161 pp.Milnor J. Milnor, On the construction F[K], Algebraic Topology - A Student Guide, by J. F. Adams, Cambridge Univ. Press, 1972, 119–136.MR3035327 R. V. Mikhailov and J. Wu, Combinatorial group theory and the homotopy groups of finite complexes, Geom. Topol. 17 (2013), 235–272.moore J. C. Moore, Homotopie des complexes monöidéaux, Séminaire Henri Cartan (1954-55).MR4145210 T. K. Naik, N. Nanda and M. Singh, Conjugacy classes and automorphisms of twin groups, Forum Math. 32 (2020), no. 5, 1095–1108.MR4651964 T. K. Naik, N. Nanda and M. Singh,Structure and automorphisms of pure virtual twin groups, Monatsh. Math.202 (2023), 555–582.NNS2 T. K. Naik, N. Nanda and M. Singh, Virtual planar braid groups and permutations, J. Group Theory (2023), 30 pp, https://doi.org/10.1515/jgth-2023-0010.MR1955357 J. Wu,Homotopy theory of the suspensions of the projective plane, Mem. Amer. Math. Soc. 162 (2003), no. 769, x+130 pp.MR2203532 J. Wu,On maps from loop suspensions to loop spaces and the shuffle relations on the Cohen groups, Mem. Amer. Math. Soc. 180 (2006), no. 851, vi+64 pp.
label2,label1]Tianxin Huang label1]Qingyao Liu label1]Xiangrui Zhao label1]Jun Chen label1]Yong Liucor1 [cor1]denotes Corresponding author Email: T.Huang ([email protected]), Q.Liu ([email protected]), X.Zhao ([email protected]), J.Chen ([email protected]), Y.Liu ([email protected])[label2]National University of Singapore, Singapore [label1]Zhejiang University, Hangzhou, China As point clouds are 3D signals with permutation invariance, most existing works train their reconstruction networks by measuring shape differences with the average point-to-point distance between point clouds matched with predefined rules. However, the static matching rules may deviate from actual shape differences. Although some works propose dynamically-updated learnable structures to replace matching rules, they need more iterations to converge well. In this work, we propose a simple but effective reconstruction loss, named Learnable Chamfer Distance (LCD) by dynamically paying attention to matching distances with different weight distributions controlled with a group of learnable networks. By training with adversarial strategy, LCD learns to search defects in reconstructed results and overcomes the weaknesses of static matching rules, while the performances at low iterations can also be guaranteed by the basic matching algorithm. Experiments on multiple reconstruction networks confirm that LCD can help achieve better reconstruction performances and extract more representative representations with faster convergence and comparable training efficiency. The source codes are provided in https://github.com/Tianxinhuang/LCDNet.git.Keywords: 3D point cloud processing, reconstruction loss, adversarial strategy § INTRODUCTION Point cloud is one signal describing the 3D shape, which is widely-used due to its convenient acquisition from 3D sensors such as RGB-D camera or LiDAR. Different from regular 1-D signals or 2-D images, point clouds are permutation-invariant, which means changing specific permutations of points does not change described shapes. In other words, the permutations of points do not include any useful information. In this condition, commonly-used mean squared errors (MSE) cannot be directly applied to point cloud reconstruction. To train a point cloud reconstruction network, most existing works use the Chamfer Distance (CD) or Earth Mover's Distance (EMD) <cit.> as training losses. They match points with predefined rules and measure shape differences between input point clouds and reconstructed results by average point-to-point distance. However, the losses based on manually-defined matching rules are static, which means the optimization goals are fixed and unchanged for all data during training. They may deviate the actual shape differences and make the reconstruction fall into local minimums with inferior reconstructed results but low reconstruction losses.Although some works <cit.> introduce GAN discriminators <cit.> to improve the reconstruction performance, they simply add the discriminator constraints to CD or EMD. Their improvements are limited as the discriminators only provide slight corrections to unchanged CD or EMD as shown in <cit.>. PCLoss <cit.> replaces the matching-based losses with distances between comparison matrices extracted with dynamic-updated learnable structures, which totally avoids the adoption of static matching rules and learns to use changing measurements to measure the shape differences. 
It learns to search the shape defects by adversarial process, which has better performances due to the removal of predefined rules. But the totally learnable structures perform relatively inferior at the beginning of training process because it needs iterations to learn to find the defects. Considering the problems mentioned above, we propose a simple but effective learnable point cloud reconstruction loss, named Learnable Chamfer Distance (LCD) by designing a reasonable combination of dynamic learning-based strategy and static matching-based loss evaluation.The differences between LCD and existing methods are presented in Fig. <ref>. Unlike the totally learning-based design in PCLoss <cit.>, LCD learns to predict weight distributions for matching distances of different points. During training, LCD is optimized by turns with the reconstruction network through an adversarial strategy to search regions with more shape defects, where the weight distributions are dynamically adjusted to pay more attention to matching distances of different regions. Benefited from the adoption of dynamic learning-based strategy, LCD can achieve outstanding performances for the training of reconstruction networks, while the static matching-based evaluation can provide an initialization prior for the optimization and ensure that LCD has better performances than totally learning-based PCLoss <cit.> at the beginning of training process. Our contributions can be summarized as * We propose Learnable Chamfer Distance (LCD), which can learn to search shape defects by dynamically predicting weight distributions for matching distances; * Benefited from the reasonable combination of learning-based strategy and matching-based evaluation, LCD has faster convergence than existing learning-based losses; * Experiments on multiple point clouds reconstruction networks demonstrate that LCD can help the reconstruction networks achieve better reconstruction performances and extract more representative representations. § RELATED WORKS §.§ Point Cloud ReconstructionPoint cloud reconstruction aims to design networks, e.g. auto-encoders, to reconstruct point clouds through the representation extracted from the input point clouds. It can be adopted to related tasks like completing <cit.> or sampling <cit.> point clouds, while the extracted intermediate representations can be used for the unsupervised classification <cit.>. Following PCLoss <cit.>, the basic point cloud reconstruction network is often organized with encoders to extract representations from point clouds, and decoders to generate point clouds from the intermediate representations. The commonly-used encoders include PointNet <cit.>, PointNet++ <cit.>, and DGCNN <cit.>, while decoders often come from fully connected networks proposed in AE <cit.> and FoldingNet proposed in <cit.>. In this work, we follow PCLoss <cit.> to construct multiple reconstruction networks to evaluate performances of different losses.§.§ Reconstruction Loss Design Most existing point cloud reconstruction-related tasks rely on the Chamfer Distance (CD) <cit.> and Earth Mover's Distance (EMD) <cit.>, which evaluate the reconstruction losses based on the average point-to-point distance between matched input and reconstructed point clouds. However, the predefined matching rules are static, which may cause the training processes fall into local minimums due to the deviation of predefined rules. 
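For reference, a minimal PyTorch-style sketch of the matching rule behind CD (unbatched and unoptimized; tensor shapes are our convention):

```python
import torch

def chamfer_distance(s_i, s_o):
    """Vanilla matching-based CD for two point sets of shape (N, 3)
    and (M, 3): each point is matched to its nearest neighbor in the
    other set, and the matched distances are simply averaged."""
    d = torch.cdist(s_i, s_o)                 # (N, M) pairwise distances
    return 0.5 * (d.min(dim=1).values.mean() + d.min(dim=0).values.mean())
```

Because the min operators fix the matching before any learning takes place, the rule is identical for every sample and every training stage, which is the static behavior that the learnable losses discussed below aim to relax.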
In this condition, many researchers attempt to introduce learning-based strategy to improve the constraining performances. Most researchers <cit.> design GAN discriminators <cit.> for extra supervisions. However, these works simply add the discriminator constraints to basic CD or EMD losses. Their optimizations still mainly reply on the matching-based CD or EMD, where the discriminators can only provide slight corrections. Therefore, these methods often have limited improvements. Although DCD <cit.> fixes the matching rules in CD by considering the density distribution of reconstructed points, it is still limited by the static evaluation for reconstruction losses. PCLoss <cit.> replaces the usage of static matching rules with a dynamic learning process. It learns to extract comparison matrices from point clouds with differentiable structures and measure shape differences with distances between comparison matrices. But the totally learning-based training process in PCLoss makes it need more iterations to converge well, while the relatively complex structures bring low training efficiency. In this work, we explore to organically combine the learning-based strategy with matching rules by learning to pay attention to different matching connections. Benefited from the learning-based strategy, our method has better performances than matching-based methods, while the adoption of static rules ensures it has faster convergence than totally learning-based methods like PCLoss. § METHODOLOGYIn this work, we propose a new method named Learnable Chamfer Distance (LCD) to evaluate the reconstruction loss by measuring the average point-to-point distance weighted with dynamically updated distributions. In this work, we use the static matching rules in CD <cit.> to calculate the matching distances due to its high efficiency.The structure of LCD is presented in Sec. <ref>.The training process of the reconstruction network with LCD is presented in Sec. <ref>.§.§ The Structure of Learnable Chamfer Distance We propose a series of learnable structures to dynamically predict the weight distributions for the matching distances of different points. As shown in Fig. <ref>, the weight distributions W_i and W_o are predicted with Siamese Concatenation block (SiaCon) and Siamese Attention block (SiaAtt). SiaAtt predicts weight distributions for the matching distances, while SiaCon extracts global shape representations from both input and reconstructed point clouds and injects them to SiaAtt.Specifically, in SiaCon block, two parameter-shared f_1(·) are used to extract global features from input point cloud S_i and reconstructed result S_o. They are concatenated to construct a overall perception for the shapes S_i and S_o. In SiaAtt block, two global features extracted by f_2 include independent shape information of S_i and S_o, respectively. This information is fused with overall perception of two models to predict a weight for each coordinates with MLP in g(·). 
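A minimal sketch of the two blocks may be helpful before the formal definitions in the next subsection. The layer widths and MLP depths below are placeholders of our choosing; the paper fixes only the PointNet-style shared-MLP + pooling structure:

```python
import torch
import torch.nn as nn

class GlobalMLP(nn.Module):
    """Shared point-wise MLP + max pooling (PointNet-style), used for
    both f1 in SiaCon and f2 in SiaAtt.  Widths are placeholders."""
    def __init__(self, dims=(3, 64, 128)):
        super().__init__()
        layers = []
        for a, b in zip(dims[:-1], dims[1:]):
            layers += [nn.Conv1d(a, b, 1), nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, pts):                      # pts: (B, N, 3)
        h = self.net(pts.transpose(1, 2))        # (B, C, N)
        return h.max(dim=2).values               # global feature (B, C)

class LCDWeightNet(nn.Module):
    """Predicts per-point scores F_i, F_o from (coords, f2 global
    feature, shared SiaCon feature F_io), following the block
    description; layer sizes are our assumptions."""
    def __init__(self, feat=128):
        super().__init__()
        self.f1 = GlobalMLP((3, 64, feat))       # SiaCon branch
        self.f2 = GlobalMLP((3, 64, feat))       # SiaAtt branch
        self.g = nn.Sequential(nn.Conv1d(3 + 3 * feat, 64, 1), nn.ReLU(),
                               nn.Conv1d(64, 1, 1))

    def score(self, pts, f_io):                  # pts: (B, N, 3)
        glob = torch.cat([self.f2(pts), f_io], dim=1)          # (B, 3*feat)
        n = pts.shape[1]
        x = torch.cat([pts, glob[:, None, :].expand(-1, n, -1)], dim=2)
        return self.g(x.transpose(1, 2)).squeeze(1)            # (B, N)

    def forward(self, s_i, s_o):
        f_io = torch.cat([self.f1(s_i), self.f1(s_o)], dim=1)  # SiaCon
        return self.score(s_i, f_io), self.score(s_o, f_io)
```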
Let S_i and S_o be the input point clouds and reconstructed results, respectively.CD loss can be defined asL_CD(S_i,S_o)=1/2(1/|S_i|∑_x ∈ S_imin_y ∈ S_ox-y_2+1/|S_o|∑_x ∈ S_omin_y ∈ S_ix-y_2).We can see that CD measures the reconstruction loss through the average distance between points in S_i or S_o and their nearest neighbors in another point set.Let f_1(·), f_2(·) be the combination of parameter-shared Multi Layer Perceptrons (MLPs) and symmetric pooling operations like PointNet <cit.>, Con(·) be the concatenation of features, g(·) be a group of MLPs, SiaCon can be defined asF_io=Con(f_1(S_i),f_1(S_o)). In SiaAtt, we have {F_i=g(Con(S_i,f_2(S_i),F_io)), F_o=g(Con(S_o,f_2(S_o),F_io)). .The weight distributions can then be defined as {W_i=σ+e^-F_i^2/|F_i| ·σ+∑ e^-F_i^2, W_o=σ+e^-F_o^2/|F_o| ·σ+∑ e^-F_o^2, . where the boundary coefficient σ is a small constant used to adjust the weight distribution. Intuitively speaking, in Eq. <ref>, F_i is firstly scaled to 0 ∼ 1 with e^-F_i^2, where the scaled results will be normalized into weight distributions satisfying ∑ W_i =1 and ∑ W_o=1. σ is used to soft the weight distributions and prevent each weight from being too small to optimize.The final loss measurement can be defined as L_R(S_i,S_o)=1/2(1/|S_i|∑_x ∈ S_i W_i ·min_y ∈ S_ox-y_2 +1/|S_o|∑_x ∈ S_oW_o ·min_y ∈ S_ix-y_2). Note that our method estimates the weight for each point in a same point cloud/sample, which is quite different with the boosting-related re-weighting methods to predict weights for various point clouds/samples in the dataset.§.§ Training PipelineLCD is trained with adversarial strategy to consistently search for existing shape differences between reconstructed results and input point clouds.The whole training process with LCD is a generative-adversarial process similar as GAN <cit.>, which updates the parameters of LCD and the reconstruction network by turns. In each iteration, LCD is optimized by L_LCD to explore more shape differences, where the reconstruction network is then optimized with L_R to eliminate the searched differences. Let L_R be the reconstruction loss defined in Sec. <ref>. We define the adversarial loss to optimize LCD asL_LCD=-log(L_R+σ_r), where σ_r is a tiny value to avoid errors when L_R → 0.§ EXPERIMENTS §.§ Dataset and Implementation Dtails Training details.ShapeNet part dataset <cit.> is composed of 12288/1870/2874 models in the train/val/test splits. For the reconstruction task, we train the networks on the train split of ShapeNet part dataset, while evaluating on its test split. For the unsupervised classification, we still train networks on the train split of ShapeNet part dataset and use ModelNet10 and ModelNet40 containing 10 and 40 categories of CAD models to evaluate the classification accuracy following FoldingNet <cit.>. Each model consists of 2048 points randomly sampled from the surfaces of mesh models. In this work, learning rates of reconstruction networks and LCD are set as 0.0001 and 0.002, while σ and σ_r are set as 0.01 and 1e-8. The matching-based evaluation of CD <cit.> is introduced to calculate the matching distances in LCD.Reconstruction Networks.To compare LCD with existing reconstruction losses, we conduct comparisons based on multiple reconstruction networks. AE <cit.> and FoldingNet <cit.> are two classic and commonly used point cloud reconstruction networks, which have been used in many works <cit.>. 
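For concreteness, the weight normalization, the weighted loss L_R, and the adversarial objective L_LCD defined above can be summarized in a short sketch (σ=0.01 and σ_r=1e-8 follow the text; batching conventions and variable names are ours):

```python
import torch

def weights_from_scores(f, sigma=0.01):
    """Normalization of per-point scores F into weights W:
    W = (sigma + exp(-F^2)) / (N*sigma + sum(exp(-F^2))), sum(W) = 1."""
    e = torch.exp(-f ** 2)                        # f, e: (B, N)
    n = f.shape[1]
    return (sigma + e) / (n * sigma + e.sum(dim=1, keepdim=True))

def lcd_loss(s_i, s_o, w_i, w_o):
    """Weighted Chamfer-style reconstruction loss L_R; the 1/|S|
    prefactors follow the equation in the text."""
    d = torch.cdist(s_i, s_o)                     # (B, N, M)
    term_i = (w_i * d.min(dim=2).values).sum(1) / s_i.shape[1]
    term_o = (w_o * d.min(dim=1).values).sum(1) / s_o.shape[1]
    return 0.5 * (term_i + term_o).mean()

# Alternating updates per iteration:
#   LCD step (maximize defects):    loss = -torch.log(lcd_loss(...) + 1e-8)
#   reconstruction step (minimize): loss =  lcd_loss(...)
```

Note that the gradient magnitude of -log(L_R + σ_r) grows as L_R shrinks, so the push toward undetected defects is automatically rescaled as reconstruction improves, which is presumably the role of the log· operation examined in the ablation study.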
In this work, we follow PCLoss <cit.> to construct 6 reconstruction networks with three commonly-used encoders PointNet <cit.>, PointNet++ <cit.> and DGCNN <cit.> and 2 basic decodersAE <cit.> and FoldingNet <cit.>.The reconstruction performances of whole structures and unsupervised classification accuracy of intermediate representations are adopted to evaluate the performances of different training losses. §.§ Comparison with Existing Reconstruction Loss To confirm the performance of our method, we follow PCLoss <cit.> for the comparison settings. The reconstruction errors and performances of representation learning are adopted for evaluation.The qualitative and quantitative results are presented in Fig. <ref> andTable <ref>, respectively. Multi-scale Chamfer Distance (MCD) proposed by <cit.> and Hausdorff distance (HD) from <cit.> are used as metrics in this work.We can see that our method achieves lowest reconstruction errors on multiple reconstruction networks.As shown in the circled regions of Fig. <ref>, our method can help the reconstruction network create clearer details such as the wings of airplane and the back of chairs, which confirms its effectiveness.The reconstruction networks can also be used to extract intermediate representations from point clouds for classification. We also conduct a comparison on unsupervised classification following AE <cit.>, FoldingNet <cit.>, and PCLoss <cit.>. In details, the reconstruction networks are trained with different losses on ShapeNet <cit.> and adopted to extract representations from point clouds in ModelNet10 and ModelNet40 <cit.>. The extracted representations will be used to train Supported Vector Machines (SVMs) with corresponding labels, where the classification accuracy can then reflect the distinguishability of representations. As shown in Table <ref>, our method has higher classification accuracy than existing methods in most phenomena, which means LCD can help the reconstruction networks learn more representative representations.§.§ Training process analysisTo analyze the training process when optimizing the reconstruction networks with LCD. We visualize and compare the reconstruction errors during the iterations between our method and a few representative training losses including CD <cit.>, EMD <cit.>, PCLoss <cit.> based on AE <cit.>. The results are presented in Fig. <ref>. We can see that LCD has much faster and steadier convergence than existing methods. Beside, it performs much better than totally learning-based PCLoss at 0 ∼ 200 iterations, which confirms that the introducing of static matching-based evaluation can ensure the performances at the beginning of training process. §.§ Comparison on Training Efficiency In this section, we compare the time cost consumed by a single iteration between different methods. The results are presented in Table <ref>. Although LCD is slower than CD and DCD, it has better performances as shown in Table <ref>. We can see that LCD has higher efficiency than the totally learning-based reconstruction loss PCLoss due to its more concise designation, which can further confirm its effectiveness. §.§ Ablation Study Ablation study for the components. In this section, we explore the effect of proposed components by removing them and retraining the networks. SiaAtt and SiaCon denote the Siamese Attention block and Siamese Concatenation block, respectively. log· means the log· operation mentioned in Sec. <ref> to dynamically adjust the optimization of LCD. The results are presented in Table <ref>. 
We can see that removing any component reduces the final performance. To compare the effect of these components more intuitively, we also visualize the reconstruction errors over all the iterations in Fig. <ref>. We can see that gradually adding SiaAtt, SiaCon, and log· makes the errors decrease faster and more stably over the whole iterations. An interesting observation is that SiaAtt reduces MCD while slightly increasing the HD metric at the end of the iterations. This may come from the lack of perception of the overall input and output shapes, which makes it difficult to find the regions with larger reconstruction errors and misleads the training of the reconstruction networks. This condition is then addressed by injecting overall shape features with SiaCon. Influence of the boundary coefficient σ. The boundary coefficient σ defined in Sec. <ref> may affect the weight distribution for matching distances. Here, we present experiments to explore its influence, as shown in Fig. <ref>-a. We can see that both larger and smaller σ have negative influences on the results. According to Eq. <ref>, too small a σ makes the distribution steep and hard to train, while too large a σ may over-smooth the distribution and limit its performance. Influence of the LCD learning rate. The LCD learning rate decides the convergence and influences the final performance. We conduct a group of experiments to observe the influence of the LCD learning rate. The results are presented in Fig. <ref>-b. We can see that both too small and too large learning rates reduce performance. Small learning rates may limit the ability of LCD to search shape differences, while larger learning rates may lead to unsteady convergence.

§ CONCLUSION

In this work, we propose a simple but effective point cloud reconstruction loss, named Learnable Chamfer Distance (LCD), by combining the dynamic learning-based strategy and the static matching-based evaluation in a more reasonable way. LCD dynamically predicts weight distributions for the matching distances of different points and is optimized with an adversarial strategy to search and pay more attention to regions with larger shape defects. Benefiting from the reasonable combination of matching-based evaluation and learning-based strategy, LCD has both faster convergence and higher training efficiency than the totally learning-based PCLoss. According to the experiments on multiple reconstruction networks, LCD can help the reconstruction networks achieve better reconstruction performances and extract more representative representations.

§ ACKNOWLEDGMENT

We thank all reviewers and the editor for excellent contributions. This work is supported by the Key Scientific and Technological Innovation Project of Hangzhou under Grant 2022AIZD0019.
Robust T-Linear Resistivity due to SU(4) Valley + Spin Fluctuation Mechanism in Magic Angle Twisted Bilayer Graphene

Daisuke Inoue, Seiichiro Onari, and Hiroshi Kontani

Department of Physics, Nagoya University, Furo-cho, Nagoya 464-8602, Japan.

January 14, 2024

In the magic angle twisted bilayer graphene (MATBG), non-Fermi-liquid-like transport phenomena are universally observed. To understand their origin, we perform a self-consistent analysis of the self-energy due to SU(4) valley + spin fluctuations induced by the electron-electron correlation. In the SU(4) fluctuation mechanism, the fifteen channels of fluctuations contribute additively to the self-energy. Therefore, the SU(4) fluctuation mechanism gives much higher electrical resistance than the spin fluctuation mechanism. For the same reason, SU(4) fluctuations of intermediate strength provide T-linear resistivity down to ∼1K. Interestingly, the T-linear resistivity is robustly realized for a wide range of electron filling, even away from the van Hove filling. This study provides strong evidence for the importance of electron-electron correlation in MATBG.

§ INTRODUCTION

Recently, the magic angle twisted bilayer graphene (MATBG) has been studied very actively as a platform of novel quantum phase transitions <cit.>. A nearly flat band with strong electron correlation is formed, due to the multiband folding, thanks to the honeycomb moiré superlattice. The existence of the valley degrees of freedom and the van Hove singularity (vHS) points leads to exotic strongly correlated electronic states. The electron filling of the moiré bands can be controlled by the gate voltage. The MATBG is a Dirac semimetal at n=0 (charge neutral point), while a Mott insulating state appears at half filling |n|=2. Various exotic electronic states appear for |n|∼2, including the unconventional superconducting <cit.> and electronic nematic states <cit.>. Recently, inter-valley coherent order states with and without time-reversal symmetry have attracted great attention <cit.>. Such exotic multiple phase transitions are believed to be caused by the strong Coulomb interaction and the valley+spin degrees of freedom in the MATBG <cit.>. For example, the nematic bond order is caused by the valley+spin fluctuation interference mechanism, which is described by the Aslamazov-Larkin (AL) vertex correction (VC) <cit.>. This mechanism also explains the nematic and smectic states in Fe-based superconductors <cit.>, cuprates and nickelates <cit.>, and kagome metals <cit.>. The significance of the AL-VC has been confirmed by functional renormalization group (RG) studies <cit.>. On the other hand, the significance of the electron-phonon interactions in the MATBG has been discussed in Refs. <cit.>, and the acoustic phonon can cause the nematic order <cit.>. Thus, the origin and the nature of the strongly correlated electronic states in MATBG for |n|∼2 are still uncovered.

To understand the dominant origin of electron correlations, transport phenomena provide very useful information. In cuprate and Fe-based superconductors, non-Fermi-liquid-type transport coefficients, such as the T-linear resistivity and the Curie-Weiss behavior of the Hall coefficient (R_H), are naturally explained by the spin fluctuation mechanism <cit.>.
The increment of R_H originates from the significant memory effect described by the current VC <cit.>. Interestingly, prominent non-Fermi-liquid-type transport phenomena have been universally observed in MATBG. For example, almost perfect T-linear resistivity is realized for the wide region n=±(1.0-3.0) <cit.>. The Curie-Weiss behavior of R_H is also observed <cit.>. These results are the hallmark of the presence of strongly anisotropic quasiparticle scattering. (In fact, the acoustic phonon scattering mechanism gives ρ∝ T^4 at low temperatures <cit.>.) Thus, non-Fermi-liquid-type transport phenomena in MATBG are significant open problems for understanding the dominant origin and the nature of the electron correlation.

In this paper, we study the many-body electronic states in MATBG in the presence of the SU(4) valley+spin composite fluctuations. The self-energy due to the SU(4) fluctuations (Σ̂(k)) is calculated by employing the fluctuation-exchange (FLEX) approximation. The obtained resistivity well satisfies the T-linear behavior for T=1∼10K for a wide range of n. A large T-linear coefficient a≡ρ/T is obtained in the present mechanism due to the contribution of the fifteen-channel SU(4) fluctuations. Therefore, the obtained result is quantitatively consistent with experiments. The present results indicate the development of SU(4) valley+spin composite fluctuations in MATBG, which should be strongly associated with the exotic multiple phase transitions.

§ T-LINEAR RESISTIVITY NEAR THE QCP

In usual Fermi liquids (FLs), the resistivity follows the relations ρ=A T^2 and A∝{N(0)}^2 at low temperatures, where N(0) is the density of states (DOS) at the Fermi level <cit.>. (Also, the Hall coefficient and the magnetoresistivity in FLs follow the relations |R_H|≈ 1/en and Δρ/ρ_0∝ (B_z/ρ)^2, respectively <cit.>.) In contrast, T-linear resistivity is observed in two-dimensional (2D) metals near quantum critical points. For example, CeMIn_5 (M=Co, Rh) exhibits non-FL-like relationships such as ρ∼ T and R_H∼ T^{-1}, in addition to the modified Kohler's rule (Δρ/ρ_0)∝ (R_H/ρ)^2 <cit.>. Similar non-FL transport phenomena are observed near the nematic quantum critical point (QCP) in Fe(Se,S) <cit.>. Furthermore, T-linear resistivity appears in nickelates <cit.> and cuprates <cit.> near the charge-density-wave (CDW) QCPs. To understand the critical transport phenomena, the self-consistent renormalization (SCR) theory <cit.>, the renormalization group theory <cit.>, and spin-fermion model analyses <cit.> have been performed. In these theories, the strong quasiparticle scattering rate γ_k = Im Σ^A_k(0) due to quantum fluctuations gives rise to the non-FL resistivity ρ∝ T^n with n<2 near the QCP. (n=1 [4/3] in 2D metals with the antiferro (AF) [ferro] fluctuations according to Ref. <cit.>.) More detailed analyses are explained in Ref. <cit.>. It is noteworthy that the current VC plays significant roles in both R_H (∝ T^{-1}) and Δρ/ρ_0 (∝ T^{-2}ρ^{-2}), in addition to the self-energy <cit.>. The modified Kohler's rule (Δρ/ρ_0)∝ (R_H/ρ)^2 observed in CeMIn_5 and Fe(Se,S) is naturally explained by considering the current VC <cit.>. Here, we concentrate on the T-dependence of the resistivity, where the current VC is not essential.

In the SCR theory and the spin-fermion model, the dynamical AF susceptibility is assumed to take the form

χ^AF(q,ω)=χ^AF_0/(1+ξ^2(q-Q)^2-iω/ω_AF),

where ξ is the AF correlation length and Q is the AF wavevector. ω_AF is the energy scale of the AF fluctuations and χ^AF_0=χ^AF(Q,0): they are scaled as ω_AF∝ξ^{-2} and χ^AF_0∝ξ^2 <cit.>.
The relation ξ^2∝ (T−T_0)^-1 is satisfied over a wide parameter range, and T_0=0 at the QCP. In the SCR theory, when ω_AF ≲ T, the resistivity is approximately given as

ρ ∼ ∑_k γ_k ∼ T^2 ∑_{k,k'} ρ_{k'}(0) [∂Imχ^AF(k−k',ω)/∂ω]_{ω=0} ∼ T^2 ξ^{4−d},

where ρ_k(ω)= ImG^A_k(ω)/π <cit.>. Thus, the T-linear resistivity appears when T_0∼0. In various two-dimensional Hubbard models, the relation ρ∝ T is reproduced based on the FLEX approximation <cit.>, because the relation ξ^2∝ T^{-1} is well satisfied for U∼ W_band. (Note that the relation ξ^2 ∝ (1−α)^{-1} holds, where α is the Stoner factor given by the FLEX approximation.) Importantly, the relation ξ^2<∞ is always satisfied in the FLEX approximation for two-dimensional systems, because the FLEX approximation satisfies the Mermin-Wagner theorem <cit.>.

In Ref. <cit.>, the present authors studied a realistic Hubbard model for MATBG <cit.> based on the RPA, and derived the development of the SU(4) valley+spin composite fluctuations. The nematic bond order is caused by the interference between SU(4) fluctuations <cit.>. In this paper, we study the same MATBG model based on the FLEX approximation, where the self-energy is calculated self-consistently. Thanks to the self-energy, the T-linear resistivity is realized over a wide parameter range. Interestingly, the T-linear resistivity is realized even when the system is far from the SU(4) QCP, so that ξ^2T^2 decreases at low temperatures. The present result indicates that the T-linear resistivity in MATBG originates from the combination of the moderate SU(4) fluctuations and the characteristic band structure with the vHS points. Importantly, the T-linear coefficient a=ρ/T is large in the present fifteen-channel SU(4) fluctuation mechanism, compared with the conventional three-channel SU(2) spin fluctuation mechanism. Consistently, the observed a is rather large in MATBG <cit.>.

§ FORMULATION

Here, we analyze the following multiorbital model for MATBG studied in Ref. <cit.>:

H^0 = ∑_{k,αα'l} c_{k,αl}^† h^0_{αα'l}(k) c_{k,α'l},

where k=(k_x,k_y), l=(ρ,ξ), and ρ and ξ represent the spin and valley indices, respectively. Here, α=A (B), which represents the sublattice AB (BA), is the center of Wannier orbital 1 (2) in Fig. <ref> (a). Also, the valley index ξ=±1 corresponds to the angular momentum. This model Hamiltonian is based on the first-principles tight-binding model in Ref. <cit.>, and we modified the hopping integrals according to Ref. <cit.>. The Fermi surface (FS) of this model at n=2.0 is shown in Fig. <ref> (b). Here, the two FSs are labeled as ξ=+1 and ξ=-1 because H^0 is diagonal with respect to the valley. Six vHS points are shown in Fig. <ref> (b). The band structure and total DOS are given in Fig. <ref> (c) and Fig. <ref> (d), respectively. The energy gap between the two vHS energies, E_vHS1−E_vHS2∼ 50 meV, corresponds to the effective bandwidth, which is consistent with the STM measurement <cit.>. The 2×2 matrix Green's function with respect to the sublattices (A,B) is given as

Ĝ_l(k)=[(iε_n−μ)1̂−ĥ_l^0(k)−Σ̂_l(k)]^{-1},

where k≡(k,iε_n), ε_n=(2n+1)π T, μ is the chemical potential, and Σ̂_l(k) is the self-energy.

In MATBG, the intra- and inter-valley on-site Coulomb interactions are exactly the same (U=U') <cit.>. Also, the inter-valley exchange interaction J is very small (J/U≪ 1) <cit.>; therefore, we set J=0. Then, the Coulomb interaction term is given as

H_U = U/2 ∑_{i,α,ξ} ( ∑_{ρρ'} n_{i,αρξ} n_{i,αρ'ξ̅} + ∑_{ρ} n_{i,αρξ} n_{i,αρ̅ξ} ),

where i is the unit cell index and n_{i,αρξ} is the electron number operator with spin ρ and valley ξ on sublattice α. Using the SU(4) operators in Eq.
<ref>, H_U is expressed as <cit.>

H_U = U/16 ∑_{i,α} [ −∑_{μ,ν} (O^{i,α}_{μ,ν})^2 + 4(O^{i,α}_{0,0})^2 ],

O^{i,α}_{μ,ν} = ∑_{ll'} Q^{μ,ν}_{ll'} c_{i,αl}^† c_{i,αl'},

where μ,ν=0∼3 and Q^{μ,ν}_{ll'}=(σ̂_μ⊗τ̂_ν)_{ll'}. Here, σ̂_m (τ̂_m) for m=1,2,3 is the Pauli matrix for the spin channel with ρ=±1 (valley channel with ξ=±1). σ̂_0 and τ̂_0 are the identity matrices. The Coulomb interaction H_U in Eq. (<ref>) manifestly possesses the SU(4) symmetry. Note that a similar multipolar decomposition of the Coulomb interaction has been used for strongly correlated heavy-fermion systems in Refs. <cit.>.

Here, we examine the SU(4) susceptibility given as

χ_{μ,ν;μ',ν'}^{αα'}(q,iω_l) = ∫_0^β dτ ⟨O^α_{μ,ν}(q,τ) O^{α'}_{μ',ν'}(−q,0)⟩ e^{iω_lτ},

where q≡(q,ω_l) and ω_l=2lπ T. In the present calculations, we consider only the channels diagonal with respect to (μ,ν), χ_{μ,ν;μ,ν}^{αα'}, because the off-diagonal channels χ_{μ,ν;μ'ν'}^{αα'} [(μ',ν') ≠ (μ,ν)] are exactly zero or very small. Then, the diagonal channel χ_{μ,ν}^{αα'}(q), except for (μ,ν)=(0,0), is expressed as

χ̂_{μ,ν}(q) = χ̂^0_{μ,ν}(q) + U/4 χ̂^0_{μ,ν}(q)χ̂^0_{μ,ν}(q) + ⋯ = χ̂^0_{μ,ν}(q)(1̂−U/4 χ̂^0_{μ,ν}(q))^{-1},

χ^{0;αα'}_{μ,ν}(q) = −T/N ∑_{k,ll'} Q^{μ,ν}_{l'l} Q^{μ,ν}_{ll'} G_l^{αα'}(k+q) G_{l'}^{α'α}(k).

Figure <ref> shows the diagrammatic expression of Eq. (<ref>). Here, χ̂_{m,0}(q) represents the spin susceptibility, χ̂_{0,m}(q) represents the valley susceptibility, and χ̂_{m,n}(q) represents the susceptibility of the "spin-valley quadrupole order". Also, the local charge susceptibility χ̂_{0,0}(q) is expressed as

χ̂_{0,0}(q) = χ̂^0_{0,0}(q)(1̂+3U/4 χ̂^0_{0,0}(q))^{-1},

which is suppressed by U.

In the FLEX approximation, the self-energy and the effective interaction are given as

Σ_l^{αα'}(k) = T/N ∑_{q,l'} G_{l'}^{αα'}(k−q) V^{αα'}_{ll',l'l}(q),

V^{αα'}_{ll',l'l}(q) = (U/4)^2 ∑_{μ,ν ≠ (0,0)} Q^{μ,ν}_{ll'} χ_{μ,ν}^{αα'}(q) Q^{μ,ν}_{l'l} + (3U/4)^2 Q^{0,0}_{ll'} χ_{0,0}^{αα'}(q) Q^{0,0}_{l'l}.

Here, we solve Eqs. (<ref>)-(<ref>) self-consistently. Note that the double-counting U^2 terms in Eqs. (<ref>) are subtracted properly. In the present numerical study, we use 108×108 k meshes and 2048 Matsubara frequencies.

In the SU(4) symmetric limit, the Green's function Ĝ_l(k) is independent of the spin and valley. Then, one may replace Ĝ_l(k) in Eq. (<ref>) with Ĝ_av(k)≡ (1/4)∑_l Ĝ_l(k). Therefore, the irreducible susceptibility in the SU(4) symmetric limit is approximately simplified as

χ̂_{μ,ν}^0(q) ≈ 4χ̂_av^0(q),

where χ^{0;αα'}_av(q) ≡ −T/N ∑_k G_av^{αα'}(k+q) G_av^{α'α}(k). Here, we used the relation ∑_{ll'} Q^{μ,ν}_{l'l} Q^{μ,ν}_{ll'} = 4 for all μ,ν. Also, the SU(4) susceptibility except for (μ,ν) = (0,0) in Eq. (<ref>) and the self-energy in Eq. (<ref>) in the SU(4) symmetric limit are given as

χ̂_{μ,ν}(q) ≈ 4χ̂_av(q) ≡ 4χ̂_av^0(q)(1̂−Uχ̂^0_av(q))^{-1},

Σ^{αα'}(k) ≈ T/N ∑_q (15/4) U^2 G^{αα'}_av(k−q) χ_av^{αα'}(q).

Eq. (<ref>) indicates that the self-energy per orbital in this system develops more easily than in systems where only spin or charge fluctuations are considered, owing to the multichannel SU(4) fluctuations.

In the presence of the off-site Coulomb interaction between (i,α) and (j,α'), v_{iα,jα'}, the interaction Hamiltonian is given as

H_v = ∑_{ij,αα'} ∑_{ll'} v_{iα,jα'} c_{i,αl}^† c_{i,αl} c_{j,α'l'}^† c_{j,α'l'} = ∑_{ij,αα'} v_{iα,jα'} O_{0,0}^{i,α} O_{0,0}^{j,α'}.

Then, the effect of the off-site Coulomb interaction in the FLEX approximation is simply given by replacing (3U/4)^2 in Eq. (<ref>) with (3U/4 + 2v_{αα'}(q))^2. Here, v_{αα'}(q) is the Fourier transform of v_{iα,jα'}.

The present formulation, which expresses the Coulomb interaction in terms of the SU(4) operators, is equivalent to the conventional formulation using the Coulomb interaction expressed by the spin and charge channels.
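As a minimal numerical illustration of the SU(4)-symmetric-limit equations above, the sketch below evaluates the static bubble χ̂^0_av(q,0) (performing the Matsubara sum analytically, i.e., via the Lindhard function), its RPA enhancement, and the Stoner factor α. A single-band square-lattice dispersion is used here as an assumed stand-in for the MATBG tight-binding model, and U, T, and the mesh are illustrative; the full FLEX calculation additionally feeds the self-energy back into G self-consistently.

```python
import numpy as np

# One-shot sketch of chi^0_av(q,0), chi_av = chi^0_av (1 - U chi^0_av)^-1,
# and the Stoner factor alpha = max_q U chi^0_av(q,0).  Toy band and
# parameters are illustrative placeholders, not the MATBG model.
Nk, T, U = 48, 0.2, 1.0
k = 2.0 * np.pi * np.arange(Nk) / Nk
KX, KY = np.meshgrid(k, k, indexing="ij")
ek = -2.0 * (np.cos(KX) + np.cos(KY))      # square-lattice toy dispersion

def fermi(e):
    return 0.5 * (1.0 - np.tanh(0.5 * e / T))

fk = fermi(ek)
chi0 = np.zeros((Nk, Nk))
for iqx in range(Nk):
    for iqy in range(Nk):
        ekq = np.roll(np.roll(ek, -iqx, axis=0), -iqy, axis=1)
        de = ekq - ek
        safe = np.where(np.abs(de) > 1e-10, de, 1.0)
        lind = np.where(np.abs(de) > 1e-10,
                        (fermi(ekq) - fk) / safe,              # (f' - f)/(e' - e)
                        -0.25 / T / np.cosh(0.5 * ek / T)**2)  # df/de limit
        chi0[iqx, iqy] = -lind.mean()      # Lindhard bubble (positive)

alpha = U * chi0.max()                      # Stoner factor of the SU(4) channels
assert alpha < 1.0, "RPA unstable here: reduce U or raise T in this toy model"
chi = chi0 / (1.0 - U * chi0)               # RPA-enhanced diagonal channel
print(f"alpha = {alpha:.3f},  max chi_av = {chi.max():.3f}")
```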
We explain the correspondence with the conventional multiorbital FLEX formalism in Appendix A.

We obtain the resistivity ρ=1/σ_xx based on the Kubo formula. σ_xx is given by

σ_xx = e^2 ∑_{k,α,ξ} ∫ dω/π (−∂f/∂ω) |G^α_ξ(k,ω)|^2 (v^α_{ξ;x}(k,ω))^2,

where v^α_{ξ;x}(k,ω)=∂(ε^α_{k,ξ} + ReΣ^α_ξ(k,ω))/∂k_x is the quasiparticle velocity and f=1/(1+e^{(ω−μ)/T}). Here, α and ξ denote the sublattice and valley, respectively. The self-energy Σ^α_ξ(k,ω) is obtained by the analytic continuation of Eq. (<ref>) using the Padé approximation.

§ NUMERICAL RESULT

Hereafter, we mainly study the case of n=2.0, where the Fermi level is close to the vHS energy. We consider only the on-site Coulomb interaction unless otherwise noted. Figures <ref> (a) and <ref> (b) show the SU(4) susceptibility χ_{μ,ν}^{AA}(q), μ,ν=0∼3 [(μ,ν) ≠ (0,0)]. In the present calculation, χ_{μ,ν}^{AA}(q) ≃ χ_{μ,ν}^{BB}(q) > χ_{μ,ν}^{AB}(q) ≃ χ_{μ,ν}^{BA}(q) is satisfied. χ̂_{μ,ν}(q) includes not only the spin fluctuations but also the valley and valley+spin composite fluctuations. The fifteen components of χ̂_{μ,ν}(q) take very similar values, reflecting the SU(4)-symmetric Coulomb interaction in Eq. (<ref>). As shown in Fig. <ref> (b), the seven components with (μ,ν) = (m,0), (μ,3) are exactly equivalent, and the eight components with (μ,ν) = (μ,1), (μ,2) are also equivalent, where m=1∼3. In the present MATBG model given in Eq. (<ref>), the FSs are different with respect to the valley index, as shown in Fig. <ref>(b), but the difference is very small. Therefore, the system possesses an approximate SU(4) symmetry and the fifteen channels of χ̂_{μ,ν} develop equally. Note that χ̂_{0,0} is much smaller than the other channels (χ̂_{0,0} ≲ (1/10)χ̂_{μ,ν}). χ_{μ,ν}^{AA}(q) develops around the nesting vector that connects two vHS points.

The Stoner factor α is defined as the largest eigenvalue of U χ̂^0_{μ,ν}(q,0)/4 ≈ U χ̂^0_av(q,0). It represents the SU(4) fluctuation strength. Figure <ref>(c) shows the T-dependence of the Stoner enhancement factor. According to the spin fluctuation theory <cit.>, the relation 1/(1−α) ∝ 1/T is satisfied, due to the development of the fluctuations at low temperatures, and this relation gives rise to the T-linear resistivity. On the other hand, in the present calculations, α≲0.8 and 1/(1−α) ∝ (1/T+2)^{1/3.5} indicate an interesting deviation from the conventional spin fluctuation theory in MATBG.

Here, we show the self-energy Σ^α_ξ(k,ω) obtained by the FLEX approximation. The self-energy gives the quasiparticle damping rate and the mass-enhancement factor. The quasiparticle damping rate γ_k is defined as γ_k=−ImΣ^A_+(k,0)≃−ImΣ^B_+(k,0). Figure <ref>(a) shows the k-dependence of γ_k due to the SU(4) fluctuations. There are hot (cold) spots, where γ_k takes its maximum (minimum) value. The hot spots exist near the vHS points. Fig. <ref>(b) shows the T-dependence of γ_k at the hot and cold spots (γ_hot, γ_cold). The T-dependence of ρ roughly follows that of γ_cold. In our calculations, although the fluctuation in each channel is weak (α ≲ 0.8) away from the SU(4) QCP, γ_cold∝ T is realized at low temperatures owing to the fifteen-channel SU(4) fluctuations.

The mass-enhancement factor Z_k and the mean free path l_k are given as

Z_k = 1 − [∂ReΣ_+^A(k,ω)/∂ω]_{ω=0},

l_k = |v^α_ξ(k,0)|/γ_k,

where v^α_ξ(k,0) is the quasiparticle velocity. Fig. <ref>(c) shows the mass-enhancement factor Z_k = m^*/m along the k-path on the FS shown in Fig. <ref>(a), where m and m^* are the bare electron mass and the effective mass, respectively. The obtained Z_k>5 indicates that this system is in the strongly correlated region. Fig.
<ref>(d) shows the obtained l_k divided by the moiré superlattice constant L_M. l_k on the FS is longer than L_M, particularly near the cold spot: l_cold∼ 20 L_M at T≈ 3K. Such a long l_k indicates that the Fermi-liquid picture holds well. Thus, a Fermi-liquid state with strong correlation is realized in this system.

Figure <ref>(a) shows the resistivity ρ due to the SU(4) fluctuations obtained by the FLEX approximation (blue line). ρ∝ T is satisfied at low temperatures, which is quantitatively consistent with the experimental results in Refs. <cit.>. The green line in Fig. <ref>(a) shows ρ given by the FLEX approximation including only the spin fluctuations (SU(2) fluctuations). The T-linear coefficient a = ρ/T due to the SU(4) fluctuations and that due to only the SU(2) fluctuations are a ∼ 0.2 and a ∼ 0.06, respectively. In the experimental results <cit.>, the observed T-linear coefficient is larger than 0.1; thus, our result considering the SU(4) fluctuations is consistent with the observations. On the other hand, the T-linear coefficient a due to only the SU(2) fluctuations is very small. Therefore, the fifteen-channel SU(4) fluctuations are essential for the large a. We stress that the power m in ρ = aT^m decreases below 1 at high temperatures. This behavior is consistent with some experimental results <cit.>, and was also obtained in a previous theoretical study based on the FLEX approximation <cit.>.

Fig. <ref>(b) shows the obtained U-dependence of ρ. The power m increases as the Coulomb interaction becomes weaker. This behavior indicates that the system approaches the Fermi-liquid state (ρ∝ T^2) as U→0. Thus, the T-linear resistivity originates from the strong electron-electron correlation effect. Here, the power m is smaller than 1.5 even when U=12.5 meV. As we discuss in Appendix B, the power m is smaller than 2 when the vHS points are near the FS, even when U ≪ W_band.

Figure <ref> shows the filling dependence of ρ, and the FSs for n=1.0, 2.4 and 3.0. The relation ρ∝ T is satisfied for the various fillings. The fifteen-channel SU(4) fluctuations originate from the (approximate) SU(4) symmetry which the system possesses by nature in MATBG. Thus, the SU(4) fluctuations easily develop even away from the vHS filling, and therefore ρ∝ T is realized in a wide n range. Experimentally, T-linear resistivity is observed in a wide n range <cit.>. Thus, our results are consistent with experiments. The T-linear resistivity realized in a wide n range suggests that the SU(4) fluctuations universally develop and that the non-Fermi-liquid behavior in MATBG is mainly derived from the SU(4) fluctuations. The coefficient a=ρ/T for n=1.0 is the largest among n=1.0-3.0. This filling dependence of the coefficient a is similarly observed in experiments <cit.>. The n-dependence of γ_cold is shown in Appendix C. The obtained γ_cold is largest for n=1.0 due to the good nesting of the FS, as shown in Fig. <ref>(b).

Here, we discuss the effect of the off-site Coulomb interaction based on the Kang-Vafek model <cit.>. We introduce the nearest-neighbor (V_1), next-nearest-neighbor (V_2), and third-nearest-neighbor (V_3) off-site Coulomb interactions in addition to the on-site Coulomb interaction term in Eq. (<ref>). Here, we fix U=80 meV and V_1=2V_2=2V_3, and compare V_1=0 with V_1=2U/3. The results given by the FLEX approximation for V_1=0 (blue line) and V_1=2U/3 are shown in Fig. <ref>. Figure <ref>(a) shows the SU(4) susceptibility χ_{μ,ν}^{AA}(q) [(μ,ν)≠(0,0)]. Although χ̂_{μ,ν}(q) is slightly suppressed by the off-site Coulomb interaction, χ̂_{μ,ν}(q) for V_1=2U/3 remains fifteenfold degenerate and quantitatively unchanged.
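A scalar toy evaluation helps fix intuition for why only the (0,0) channel reacts to the off-site coupling, anticipating the modified charge-channel formula given in the next paragraph: v(q) enters only the charge denominator, and a negative v(q), as can be realized near the zone boundary by the Fourier transform of a repulsive off-site interaction, can enhance χ̂_{0,0}(q) up to the order of the SU(4) channels. All numbers below, in particular the bubble value χ^0, are assumed placeholders.

```python
# Scalar toy: the off-site Coulomb v(q) enters only the local charge channel,
# chi_00 = chi0 / (1 + (3U/4 + 2 v(q)) chi0), while the fifteen SU(4)
# channels chi = chi0 / (1 - (U/4) chi0) are untouched.  chi0 and the
# representative v(q) values are assumed placeholders (units: 1/meV, meV).
U = 80.0                    # on-site Coulomb interaction (meV), as in the text
V1 = 2.0 * U / 3.0          # nearest-neighbor interaction strength
chi0 = 0.01                 # assumed representative bubble value (1/meV)

for v_q in (0.0, -V1):      # v(q) ~ -V1 mimics a zone-boundary value
    chi_00 = chi0 / (1.0 + (3.0 * U / 4.0 + 2.0 * v_q) * chi0)
    chi_su4 = chi0 / (1.0 - (U / 4.0) * chi0)
    print(f"v(q) = {v_q:6.1f} meV:  chi_00 = {chi_00:.5f},  chi_SU4 = {chi_su4:.5f}")
```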
In contrast, χ_{0,0}^{AA}(q), shown in Fig. <ref>(b), changes drastically depending on whether V_1 is zero or nonzero, and the obtained χ̂_{0,0}(q) for V_1=2U/3 is of the same order as χ_{μ,ν}(q). By introducing the off-site Coulomb interactions, the local charge susceptibility is modified as

χ̂_{0,0}(q) = χ̂_{0,0}^0(q)[1̂+(3U/4+2v̂(q)) χ̂^0_{0,0}(q)]^{-1}.

Here, the formulation of χ̂_{μ,ν}(q) [(μ,ν)≠(0,0)] in Eq. (<ref>) is unchanged, because the susceptibility χ̂_{μ,ν;μ',ν'} [(μ',ν')≠(μ,ν)] is negligible. Therefore, only χ̂_{0,0}(q) is enhanced by v_{αα'}, and the other channels of the susceptibility take almost the same values. The obtained damping rate γ_k is shown in Fig. <ref>(c). Although χ̂_{0,0}(q) ∼ χ̂_{μ,ν}(q) for V_1=2U/3, γ_k is almost equivalent to that for V_1=0. This is because the contribution of χ̂_{0,0}(q) to γ_k is just 1/16 of that of all the other channels, and χ̂_{μ,ν}(q) [(μ,ν)≠(0,0)] is essentially independent of V. Consequently, the resistivity ρ obtained for V_1=2U/3 is almost the same as that for V_1=0. Therefore, the present analysis based on the on-site Coulomb interaction U is justified.

§ SUMMARY

In this study, we demonstrated that the T-linear resistivity is realized by the electron-electron correlation in MATBG in the presence of the SU(4) valley+spin composite fluctuations. We calculated the self-energy by employing the FLEX approximation. The obtained self-energy takes a large value due to the fifteenfold-degenerate SU(4) fluctuations. Robust T-linear resistivity is realized for a wide range of n at low temperatures, driven by the SU(4) fluctuations. Importantly, the T-linear resistivity is realized even when the system is far from the SU(4) QCP (α ≲ 0.8 in our calculations). Then, a large T-linear coefficient a≡ρ/T is obtained in the present mechanism. The T-linear coefficient a due to only the spin fluctuations is small, less than 1/10 of the coefficient observed in Ref. <cit.>. Thanks to the SU(4) fluctuations, a robust and large T-linear resistivity is obtained for a wide n range, even away from n_vHS=2.0, consistent with experiments. This result is strong evidence that the SU(4) fluctuations universally develop in MATBG.

As in MATBG, exotic electronic states appear in other twisted multilayer graphenes. For example, non-Fermi-liquid-type transport phenomena <cit.>, unconventional superconductivity <cit.>, and nematic order <cit.> have been observed in twisted double bilayer graphene (TDBG). Furthermore, in trilayer graphene, an unconventional superconducting state appears <cit.>. The present Green's function formalism in the SU(4) symmetric limit will be useful in analyzing the above-mentioned problems.

§ ACKNOWLEDGEMENTS

This study has been supported by Grants-in-Aid for Scientific Research from MEXT of Japan (JP18H01175, JP20K03858, JP20K22328, JP22K14003), and by the Quantum Liquid Crystal No. JP19H05825 KAKENHI on Innovative Areas from JSPS of Japan.

§ APPENDIX A: FLEX APPROXIMATION FOR MULTIORBITAL HUBBARD MODELS

In this Appendix, we explain another formulation of the multiorbital FLEX approximation based on the matrix expressions of the Coulomb interaction. This method has been widely used for ruthenates <cit.>, cobaltates <cit.>, Fe-based superconductors <cit.>, and heavy fermions <cit.>. It is confirmed that the formulation using the SU(4) operators developed in the main text is equivalent to the following formulation. The Coulomb interaction H_U in Eq. (<ref>) is decomposed into the spin and charge channels as <cit.>

H_U = U/8 ∑_{i,α} ∑_{{ρ},{ξ}} [ −Γ̂^s_{ξ_1ξ_2,ξ_3ξ_4} (σ̂⊗σ̂)_{ρ_1ρ_2,ρ_3ρ_4}
− Γ̂^c_{ξ_1ξ_2,ξ_3ξ_4} (σ̂^0⊗σ̂^0)_{ρ_1ρ_2,ρ_3ρ_4} ] × c_{i,αρ_1ξ_1}^† c_{i,αρ_2ξ_2} c_{i,αρ_4ξ_4}^† c_{i,αρ_3ξ_3},

where σ̂ and σ̂^0 are the Pauli matrices and the identity matrix, respectively, and ξ_i is the valley index. Here, Γ^s_{ξ_1ξ_2,ξ_3ξ_4}=U for ξ_1=ξ_2=ξ_3=ξ_4 and for ξ_1=ξ_3=−ξ_2=−ξ_4, and Γ^s=0 otherwise. Also, Γ^c_{ξ_1ξ_2,ξ_3ξ_4}=−U for ξ_1=ξ_2=ξ_3=ξ_4, Γ^c=−2U for ξ_1=ξ_2=−ξ_3=−ξ_4, Γ^c=U for ξ_1=ξ_3=−ξ_2=−ξ_4, and Γ^c=0 otherwise.

The self-energy in the FLEX calculation is given as

Σ_{αα'ξ}(k) = T/N ∑_{q,ξ'} G_{αα'ξ'}(k−q) V^{αα'}_{ξξ',ξ'ξ}(q),

V^{αα'}_{ξξ',ξ'ξ}(q) = 1/2 (3Γ̂^s χ̂^s_{αα'}(q) Γ̂^s + Γ̂^c χ̂^c_{αα'}(q) Γ̂^c)_{ξξ',ξ'ξ},

χ^{0;αα'}_{ξ_1ξ_2,ξ_3ξ_4}(q) = −T/N ∑_k G_{αα'ξ_1}(k+q) G_{α'αξ_2}(k) δ_{ξ_1,ξ_3} δ_{ξ_2,ξ_4},

χ̂^{s(c)}(q) = χ̂^0(q)(1̂−Γ̂^{s(c)}χ̂^0(q))^{-1},

where χ̂^{s(c)} is the spin (charge) susceptibility. The self-energy in the FLEX approximation is given by solving Eqs. (<ref>)-(<ref>) self-consistently. The coefficients of the self-energy originating from the spin fluctuations and the charge fluctuations are 3/2 and 1/2, respectively. The spin (charge) Stoner factor α^{s(c)} is defined as the maximum eigenvalue of Γ̂^{s(c)}χ̂^{s(c)}_{αα'}. α^s and α^c are exactly equivalent due to the relation U=U'. In the presence of the off-site Coulomb interaction between (i,α) and (j,α'), v_{iα,jα'}, given in Eq. (<ref>), the effect of the off-site Coulomb interaction in the FLEX approximation is simply given by replacing Γ̂^c with Γ̂^c + v_{αα'}(q) δ_{ξ_1,ξ_2}δ_{ξ_3,ξ_4} in Eqs. (<ref>) and (<ref>). Here, v_{αα'}(q) is the Fourier transform of v_{iα,jα'}.

The SU(4) susceptibility in Eq. (<ref>) can be expanded in terms of the spin and charge susceptibilities in Eq. (<ref>) as

χ_{μ,ν}^{αα'}(q,iω_l) = ∫_0^β dτ ⟨O_{μ,ν}^α(q,τ) O_{μ,ν}^{α'}(−q,0)⟩ e^{iω_lτ} = ∑_{l_1l_2l_3l_4} Q^{μ,ν}_{l_1l_2} χ_{αl_1l_2,α'l_3l_4}(q) Q^{μ,ν}_{l_3l_4},

where Q^{μ,ν}_{ll'}=(σ̂_μ⊗τ̂_ν)_{ll'} and l_i=(ρ_i,ξ_i). The general susceptibility on the right-hand side of Eq. (<ref>) is given as

χ_{αl_1l_2,α'l_3l_4}(q) = 1/2 χ^s_{ξ_1ξ_2,ξ_3ξ_4;αα'}(q) (σ̂⊗σ̂)_{ρ_1ρ_2,ρ_3ρ_4} + 1/2 χ^c_{ξ_1ξ_2,ξ_3ξ_4;αα'}(q) (σ̂^0⊗σ̂^0)_{ρ_1ρ_2,ρ_3ρ_4}.

This conventional formalism used in Refs. <cit.> is exactly equivalent to the SU(4) operator formalism explained in the main text.

§ APPENDIX B: RESISTIVITY WITHIN THE SECOND-ORDER PERTURBATION THEORY

Here, we discuss the important effect of the vHS points on the resistivity ρ in the weak-coupling region. In the main text, the obtained power m in ρ=aT^m is smaller than about 1.5 for n=2.0, even in the case of a very weak on-site Coulomb interaction U. This result is inconsistent with the expected behavior that the Fermi-liquid behavior ρ∝ T^2 is recovered in the limit U→0. To understand this inconsistency, we calculate the resistivity ρ^(2), which is given by the self-consistent second-order perturbation theory with respect to U. Figure <ref> (a) shows ρ calculated to full order (black line) and within the second-order perturbation theory (green line) with respect to U, and (b) shows ρ^(2) for n=1.0 (blue line) and n=3.0 (orange line). We set U=50 meV in Fig. <ref>. The obtained power m in ρ^(2)=aT^m for n=2.0 is m=1.45, which is almost the same as m for U=12.5 meV in Fig. <ref>. In contrast, the powers m in ρ^(2) for n=1.0, 3.0 are close to 2. These results suggest that the power m is reduced by the effect of the vHS points, and the T-linear resistivity is easily realized near the vHS points.

§ APPENDIX C: FILLING DEPENDENCE OF Γ_COLD

Figure <ref> shows the filling dependence of γ_cold for n=1.0-3.0. γ_cold for n=2.0-3.0 becomes small as the filling moves away from n_vHS≃ 2.0. Unexpectedly, the obtained γ_cold for n=1.0 at T≳ 10K takes a larger value than that for n=2.0 in our calculation. The reason is that the nesting condition on the FS for n=1.0 in Fig.
<ref> (b) is better than that for n=2.0 in Fig. <ref> (b). Consequently, the SU(4) susceptibilities for n=1.0 are higher than those for n=2.0, reflecting the good nesting condition of the FS. (The FS for n=1.0 is shown in Fig. <ref> (b).) Thus, γ_cold for n=1.0 takes the largest value due to the stronger nesting effect, which exceeds the effect of the reduced DOS.

§ REFERENCES

Cao1 Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Unconventional superconductivity in magic-angle graphene superlattices, Nature 556, 43 (2018).
Cao2 Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, R. C. Ashoori, and P. Jarillo-Herrero, Correlated insulator behaviour at half-filling in magic-angle graphene superlattices, Nature 556, 80 (2018).
Yankowitz M. Yankowitz, S. Chen, H. Polshyn, Y. Zhang, K. Watanabe, T. Taniguchi, D. Graf, A. F. Young, and C. R. Dean, Tuning superconductivity in twisted bilayer graphene, Science 363, 1059 (2019).
Lu X. Lu, P. Stepanov, W. Yang, M. Xie, M. A. Aamir, I. Das, C. Urgell, K. Watanabe, T. Taniguchi, G. Zhang, A. Bachtold, A. H. MacDonald, and D. K. Efetov, Superconductors, orbital magnets and correlated states in magic-angle bilayer graphene, Nature 574, 653 (2019).
Sharpe A. L. Sharpe, C. L. Tschirhart, H. Polshyn, Y. Zhang, J. Zhu, K. Watanabe, T. Taniguchi, L. Balents, and A. F. Young, Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene, Science 365, 605 (2019).
Serlin M. Serlin, C. L. Tschirhart, H. Polshyn, Y. Zhang, J. Zhu, K. Watanabe, T. Taniguchi, L. Balents, and A. F. Young, Intrinsic quantized anomalous Hall effect in a moiré heterostructure, Science 367, 900 (2020).
Kerelsky A. Kerelsky, L. McGilly, D. M. Kennes, L. Xian, M. Yankowitz, S. Chen, K. Watanabe, T. Taniguchi, J. Hone, C. Dean, A. Rubio, and A. N. Pasupathy, Maximized electron interactions at the magic angle in twisted bilayer graphene, Nature 572, 95 (2019).
Choi Y. Choi, J. Kemmer, Y. Peng, A. Thomson, H. Arora, R. Polski, Y. Zhang, H. Ren, J. Alicea, G. Refael, F. von Oppen, K. Watanabe, T. Taniguchi, and S. Nadj-Perge, Electronic correlations in twisted bilayer graphene near the magic angle, Nat. Phys. 15, 1174 (2019).
Jiang Y. Jiang, X. Lai, K. Watanabe, T. Taniguchi, K. Haule, J. Mao, and E. Y. Andrei, Charge order and broken rotational symmetry in magic-angle twisted bilayer graphene, Nature 573, 91 (2019).
Cao3 Y. Cao, D. R. Legrain, J. M. Park, F. N. Yuan, K. Watanabe, T. Taniguchi, R. M. Fernandes, L. Fu, and P. Jarillo-Herrero, Nematicity and competing orders in superconducting magic-angle graphene, Science 372, 264 (2021).
Nuckolls K. P. Nuckolls, R. L. Lee, M. Oh, D. Wong, T. Soejima, J. P. Hong, D. Călugăru, J. Herzog-Arbeitman, B. A. Bernevig, K. Watanabe, T. Taniguchi, N. Regnault, M. P. Zaletel, and A. Yazdani, Quantum textures of the many-body wavefunctions in magic-angle graphene, Nature 620, 525 (2023).
Kim H. Kim, Y. Choi, É. Lantagne-Hurtubise, C. Lewandowski, A. Thomson, L. Kong, H. Zhou, E. Baum, Y. Zhang, L. Holleis, K. Watanabe, T. Taniguchi, A. F. Young, J. Alicea, and S. Nadj-Perge, Imaging inter-valley coherent order in magic-angle twisted trilayer graphene, Nature 623, 942 (2023).
Isobe-super H. Isobe, N. F. Q. Yuan, and L. Fu, Unconventional Superconductivity and Density Waves in Twisted Bilayer Graphene, Phys. Rev. X 8, 041041 (2018).
Chubukov-nematic D. V. Chichinadze, L. Classen, and A. V. Chubukov, Nematic superconductivity in twisted bilayer graphene, Phys. Rev.
B 101, 224513 (2020).
Onari-TBG S. Onari and H. Kontani, SU(4) Valley + Spin Fluctuation Interference Mechanism for Nematic Order in Magic-Angle Twisted Bilayer Graphene: The Impact of Vertex Corrections, Phys. Rev. Lett. 128, 066401 (2022).
Kontani-rev2 H. Kontani, R. Tazai, Y. Yamakawa, and S. Onari, Unconventional density waves and superconductivities in Fe-based superconductors and other strongly correlated electron systems, Adv. Phys. 70, 355 (2021).
Tazai-LW R. Tazai, S. Matsubara, Y. Yamakawa, S. Onari, and H. Kontani, Rigorous formalism for unconventional symmetry breaking in Fermi liquid theory and its application to nematicity in FeSe, Phys. Rev. B 107, 035137 (2023).
Onari-SCVC S. Onari and H. Kontani, Self-consistent Vertex Correction Analysis for Iron-based Superconductors: Mechanism of Coulomb Interaction-Driven Orbital Fluctuations, Phys. Rev. Lett. 109, 137001 (2012).
Onari-form S. Onari, Y. Yamakawa, and H. Kontani, Sign-Reversing Orbital Polarization in the Nematic Phase of FeSe due to the C_2 Symmetry Breaking in the Self-Energy, Phys. Rev. Lett. 116, 227001 (2016).
Yamakawa-PRX Y. Yamakawa, S. Onari, and H. Kontani, Nematicity and Magnetism in FeSe and Other Families of Fe-Based Superconductors, Phys. Rev. X 6, 021032 (2016).
Onari-B2g S. Onari and H. Kontani, Origin of diverse nematic orders in Fe-based superconductors: 45° rotated nematicity in AFe_2As_2 (A=Cs,Rb), Phys. Rev. B 100, 020507(R) (2019).
Chubukov-FeSe R. Q. Xing, L. Classen, and A. V. Chubukov, Orbital order in FeSe: The case for vertex renormalization, Phys. Rev. B 98, 041108(R) (2018).
Chubukov-RG A. V. Chubukov, M. Khodas, and R. M. Fernandes, Magnetism, Superconductivity, and Spontaneous Orbital Order in Iron-Based Superconductors: Which Comes First and Why?, Phys. Rev. X 6, 041045 (2016).
Tsuchiizu-Cu M. Tsuchiizu, K. Kawaguchi, Y. Yamakawa, and H. Kontani, Multistage electronic nematic transitions in cuprate superconductors: A functional-renormalization-group analysis, Phys. Rev. B 97, 165131 (2018).
Onari-Ni S. Onari and H. Kontani, Strong Bond-Order Instability with Three-Dimensional Nature in Infinite-Layer Nickelates due to Non-Local Quantum Interference Mechanism, arXiv:2212.13784 (2022).
Tazai-kagome1 R. Tazai, Y. Yamakawa, S. Onari, and H. Kontani, Mechanism of exotic density-wave and beyond-Migdal unconventional superconductivity in kagome metal AV_3Sb_5 (A = K, Rb, Cs), Sci. Adv. 8, eabl4108 (2022).
Tazai-kagome2 R. Tazai, Y. Yamakawa, and H. Kontani, Charge-loop current order and Z_3 nematicity mediated by bond-order fluctuations in kagome metals, Nat. Commun. 14, 7845 (2023).
Tazai-kagome3 R. Tazai, Y. Yamakawa, and H. Kontani, Drastic magnetic-field-induced chiral current order and emergent current-bond-field interplay in kagome metals, accepted for publication in Proceedings of the National Academy of Sciences (PNAS) (available at https://arxiv.org/abs/2303.00623).
Tsuchiizu-Ru1 M. Tsuchiizu, Y. Ohno, S. Onari, and H. Kontani, Orbital Nematic Instability in the Two-Orbital Hubbard Model: Renormalization-Group + Constrained RPA Analysis, Phys. Rev. Lett. 111, 057003 (2013).
Sarma1 E. H. Hwang and S. D. Sarma, Acoustic phonon scattering limited carrier mobility in two-dimensional extrinsic graphene, Phys. Rev. B 77, 115449 (2008).
Sarma2 F. Wu, E. Hwang, and S. D. Sarma, Phonon-induced giant linear-in-T resistivity in magic angle twisted bilayer graphene: Ordinary strangeness and exotic superconductivity, Phys. Rev. B 99, 165112 (2019).
Fernandes-TBG R. M. Fernandes and J. W. F.
Venderbos, Nematicity with a twist: Rotational symmetry breaking in a moiré superlattice, Sci. Adv. 6, 8834 (2020).
Kontani-RH H. Kontani, K. Kanki, and K. Ueda, Hall effect and resistivity in high-T_c superconductors: The conserving approximation, Phys. Rev. B 59, 14723 (1999).
Kontani-MR H. Kontani, General formula for the magnetoresistance on the basis of Fermi liquid theory, Phys. Rev. B 64, 054413 (2001).
Kontani-MR2 H. Kontani, Magnetoresistance in High-T_c Superconductors: The Role of Vertex Corrections, J. Phys. Soc. Jpn. 70, 1873 (2001).
Kontani-Nernst H. Kontani, Nernst Coefficient and Magnetoresistance in High-T_c Superconductors: The Role of Superconducting Fluctuations, Phys. Rev. Lett. 89, 237003 (2002).
Kontani-rev1 H. Kontani, Anomalous transport phenomena in Fermi liquids with strong magnetic fluctuations, Rep. Prog. Phys. 71, 026501 (2008).
Jaoui A. Jaoui, I. Das, G. D. Battista, J. Díez-Mérida, X. Lu, K. Watanabe, T. Taniguchi, H. Ishizuka, L. Levitov, and D. K. Efetov, Quantum critical behaviour in magic-angle twisted bilayer graphene, Nat. Phys. 18, 633 (2022).
Polshyn H. Polshyn, M. Yankowitz, S. Chen, Y. Zhang, K. Watanabe, T. Taniguchi, C. R. Dean, and A. F. Young, Large linear-in-temperature resistivity in twisted bilayer graphene, Nat. Phys. 15, 1011 (2019).
Park J. M. Park, Y. Cao, K. Watanabe, T. Taniguchi, and P. Jarillo-Herrero, Flavour Hund's coupling, Chern gaps and charge diffusivity in moiré graphene, Nature 592, 43 (2021).
Lyu R. Lyu, Z. Tuchfeld, N. Verma, H. Tian, K. Watanabe, T. Taniguchi, C. N. Lau, M. Randeria, and M. Bockrath, Strange metal behavior of the Hall angle in twisted bilayer graphene, Phys. Rev. B 103, 245424 (2021).
Nakajima1 T. Nakajima, H. Yoshizawa, and Y. Ueda, A-site Randomness Effect on Structural and Physical Properties of Ba-based Perovskite Manganites, J. Phys. Soc. Jpn. 73, 5 (2004).
Nakajima2 Y. Nakajima, H. Shishido, H. Nakai, T. Shibauchi, K. Behnia, K. Izawa, M. Hedo, Y. Uwatoko, T. Matsumoto, R. Settai, Y. Onuki, H. Kontani, and Y. Matsuda, Non-Fermi Liquid Behavior in the Magnetotransport of CeMIn_5 (M: Co and Rh): Striking Similarity between Quasi Two-Dimensional Heavy Fermion and High-T_c Cuprates, J. Phys. Soc. Jpn. 76, 024703 (2007).
FeSe1 J. P. Sun, G. Z. Ye, P. Shahi, J.-Q. Yan, K. Matsuura, H. Kontani, G. M. Zhang, Q. Zhou, B. C. Sales, T. Shibauchi, Y. Uwatoko, D. J. Singh, and J.-G. Cheng, High-T_c Superconductivity in FeSe at High Pressure: Dominant Hole Carriers and Enhanced Spin Fluctuations, Phys. Rev. Lett. 118, 147004 (2017).
FeSe2 W. K. Huang, S. Hosoi, M. Culo, S. Kasahara, Y. Sato, K. Matsuura, Y. Mizukami, M. Berben, N. E. Hussey, H. Kontani, T. Shibauchi, and Y. Matsuda, Non-Fermi liquid transport in the vicinity of the nematic quantum critical point of superconducting FeSe_1-xS_x, Phys. Rev. Res. 2, 033367 (2020).
Ni1 D. Li, B. Y. Wang, K. Lee, S. P. Harvey, M. Osada, B. H. Goodge, L. F. Kourkoutis, and H. Y. Hwang, Superconducting Dome in Nd_1-xSr_xNiO_2 Infinite Layer Films, Phys. Rev. Lett. 125, 027001 (2020).
Ni2 K. Lee, B. Y. Wang, M. Osada, B. H. Goodge, T. C. Wang, Y. Lee, S. Harvey, W. J. Kim, Y. Yu, C. Murthy, S. Raghu, L. F. Kourkoutis, and H. Y. Hwang, Character of the "normal state" of the nickelate superconductors, arXiv:2203.02580 (2022).
YBCO H. Takagi, T. Ido, S. Ishibashi, M. Uota, S. Uchida, and Y. Tokura, Superconductor-to-nonsuperconductor transition in (La_1-xSr_x)_2CuO_4 as investigated by transport and magnetic measurements, Phys. Rev. B 40, 2254 (1989).
Taillefer R. Daou, N.
Doiron-Leyraud, D. LeBoeuf, S. Y. Li, F. Laliberté, O. Cyr-Choinière, Y. J. Jo, L. Balicas, J.-Q. Yan, J.-S. Zhou, J. B. Goodenough, and L. Taillefer, Linear temperature dependence of resistivity and change in the Fermi surface at the pseudogap critical point of a high-T_c superconductor, Nat. Phys. 5, 31 (2009).
Moriya-rev1 T. Moriya and K. Ueda, Spin fluctuations and high temperature superconductivity, Adv. Phys. 49, 555 (2000).
Hertz J. A. Hertz, Quantum critical phenomena, Phys. Rev. B 14, 1165 (1976).
Millis A. J. Millis, Effect of a nonzero temperature on quantum critical points in itinerant fermion systems, Phys. Rev. B 48, 7183 (1993).
Rice R. Hlubina and T. M. Rice, Resistivity as a function of temperature for models with hot spots on the Fermi surface, Phys. Rev. B 51, 9253 (1995).
Pines B. P. Stojkovic and D. Pines, Theory of the longitudinal and Hall conductivities of the cuprate superconductors, Phys. Rev. B 55, 8576 (1997).
Abanov A. Abanov, A. V. Chubukov, and J. Schmalian, Quantum critical theory of the spin-fermion model and its application to cuprates: Normal state analysis, Adv. Phys. 52, 119 (2003).
FLEX1 N. E. Bickers and S. R. White, Conserving approximations for strongly fluctuating electron systems. II. Numerical results and parquet extension, Phys. Rev. B 43, 8044 (1991).
FLEX2 T. Dahm and L. Tewordt, Physical quantities in nearly antiferromagnetic and superconducting states of the two-dimensional Hubbard model and comparison with cuprate superconductors, Phys. Rev. B 52, 1297 (1995).
FLEX3 T. Takimoto and T. Moriya, Theory of Spin Fluctuation-Induced Superconductivity Based on a d-p Model. II. -Superconducting State-, J. Phys. Soc. Jpn. 67, 3570 (1994).
MW H. Kontani and M. Ohno, Effect of a nonmagnetic impurity in a nearly antiferromagnetic Fermi liquid: Magnetic correlations and transport phenomena, Phys. Rev. B 74, 014406 (2006).
Koshino M. Koshino, N. F. Q. Yuan, T. Koretsune, M. Ochi, K. Kuroki, and L. Fu, Maximally Localized Wannier Orbitals and the Extended Hubbard Model for Twisted Bilayer Graphene, Phys. Rev. X 8, 031087 (2018).
Klug M. J. Klug, Charge order and Mott insulating ground states in small-angle twisted bilayer graphene, New J. Phys. 22, 073016 (2020).
Tazai-multipole1 R. Tazai and H. Kontani, Fully gapped s-wave superconductivity enhanced by magnetic criticality in heavy-fermion systems, Phys. Rev. B 98, 205107 (2018).
Tazai-multipole2 R. Tazai and H. Kontani, Multipole fluctuation theory for heavy fermion systems: Application to multipole orders in CeB_6, Phys. Rev. B 100, 241103(R) (2019).
Tazai-multipole3 R. Tazai and H. Kontani, Hexadecapole Fluctuation Mechanism for s-wave Heavy Fermion Superconductor CeCu_2Si_2: Interplay between Intra- and Inter-Orbital Cooper Pairs, J. Phys. Soc. Jpn. 88, 063701 (2019).
Vafec J. Kang and O. Vafek, Strong Coupling Phases of Partially Filled Twisted Bilayer Graphene Narrow Bands, Phys. Rev. Lett. 122, 246401 (2019).
Burg G. W. Burg, J. Zhu, T. Taniguchi, K. Watanabe, A. H. MacDonald, and E. Tutuc, Correlated Insulating States in Twisted Double Bilayer Graphene, Phys. Rev. Lett. 123, 197702 (2019).
Liu X. Liu, Z. Hao, E. Khalaf, J. Y. Lee, Y. Ronen, H. Yoo, D. H. Najafabadi, K. Watanabe, T. Taniguchi, A. Vishwanath, and P. Kim, Tunable spin-polarized correlated states in twisted double bilayer graphene, Nature 583, 221 (2019).
Shen C. Shen, Y. Chu, Q. Wu, N. Li, S. Wang, Y. Zhao, J. Tang, J. Liu, J. Tian, K. Watanabe, T. Taniguchi, R. Yang, Z. Y. Meng, D. Shi, O. V. Yazyev, and G.
Zhang, Correlated states in twisted double bilayer graphene, Nat. Phys. 16, 520 (2020).
He M. He, Y. Li, J. Cai, Y. Liu, K. Watanabe, T. Taniguchi, X. Xu, and M. Yankowitz, Symmetry breaking in twisted double bilayer graphene, Nat. Phys. 17, 26 (2021).
Samajdar R. Samajdar, M. S. Scheurer, S. Turkel, C. Rubio-Verdú, A. N. Pasupathy, J. W. F. Venderbos, and R. M. Fernandes, Electric-field-tunable electronic nematic order in twisted double-bilayer graphene, 2D Mater. 8, 034005 (2021).
Zhou H. Zhou, T. Xie, T. Taniguchi, K. Watanabe, and A. F. Young, Superconductivity in rhombohedral trilayer graphene, Nature 598, 434 (2021).
Takimoto-FLEX T. Takimoto, Orbital fluctuation-induced triplet superconductivity: Mechanism of superconductivity in Sr_2RuO_4, Phys. Rev. B 62, R14641(R) (2000).
Yada K. Yada and H. Kontani, Origin of Weak Pseudogap Behaviors in Na_0.35CoO_2: Absence of Small Hole Pockets, J. Phys. Soc. Jpn. 74, 2161 (2005).
Kuroki K. Kuroki, S. Onari, R. Arita, H. Usui, Y. Tanaka, H. Kontani, and H. Aoki, Unconventional Pairing Originating from the Disconnected Fermi Surfaces of Superconducting LaFeAsO_1-xF_x, Phys. Rev. Lett. 101, 087004 (2008).
Kontani-FeSC H. Kontani and S. Onari, Orbital-Fluctuation-Mediated Superconductivity in Iron Pnictides: Analysis of the Five-Orbital Hubbard-Holstein Model, Phys. Rev. Lett. 104, 157001 (2010).
[email protected] Centro Ricerche Enrico Fermi (CREF), Via Panisperna 89a, 00184 Rome, ItalyPhysics Department, Sapienza University of Rome, 00185 Rome, Italy Centro Ricerche Enrico Fermi (CREF), Via Panisperna 89a, 00184 Rome, ItalySystems of coupled optical parametric oscillators (OPOs) forming an Ising machine are emerging as large-scale simulators of the Ising model. The advances in computer science and nonlinear optics have triggered not only the physical realization of hybrid (electro-optical) or all-optical Ising machines, but also the demonstration of quantum-inspired algorithms boosting their performances. To date, the use of the quantum nature of parametrically generated light as a further resource for computation represents a major open issue. A key quantum feature is the non-Gaussian character of the system state across the oscillation threshold. In this paper, we perform an extensive analysis of the emergence of non-Gaussianity in the single quantum OPO with an applied external field. We model the OPO by a Lindblad master equation, which is numerically solved by an ab initio method based on exact diagonalization. Non-Gaussianity is quantified by means of three different metrics: Hilbert-Schmidt distance, quantum relative entropy, and photon distribution. Our findings reveal a nontrivial interplay between parametric drive and applied field: (i) Increasing pump monotonously enhances non-Gaussianity, and (ii) Increasing field first sharpens non-Gaussianity, and then restores the Gaussian character of the state when above a threshold value. Dawn and fall of non-Gaussianity in the quantum parametric oscillator Claudio Conti January 14, 2024 =====================================================================§ INTRODUCTIONHard optimization problems are permeating several areas of modern science and society. Their rapidly-increasing computational complexity nowadays pairs with the evident limits of conventional computer architectures, fostering the investigation of innovative specialized paradigms and devices. In this respect, optical systems are emerging as promising alternative computing platforms <cit.>. Leveraging the mapping of complex optimization to Ising Hamiltonians <cit.>, the quest of solving a large class of problems translates into building a system capable of simulating the classical Ising model and efficiently finding its lowest-energy configuration.Specifically, systems of optical parametric oscillators (OPOs) have emerged as a valuable platform to solve the Ising model. When pumped by an external drive above the oscillation threshold, an OPO undergoes phase-dependent amplification forcing the phase of the optically amplified signal to be either 0 or π with respect to the phase of the pump. These two states simulate the “spin-up” and “spin-down” configurations of a classical Ising spin. This circumstance is behind the use of networks of coupled OPOs as computing machines (called Ising machines) to find the ground state of the classical Ising model <cit.>. Recently, a significant effort has been put in exploiting classical properties of OPOs to enhance computational speed and efficiency <cit.>.The question on whether quantum features of the parametrically generated light can be employed to further boost OPO-based computing machines has also been raised <cit.>. However, no clear answer is available to date because of the difficulty in the analytical and numerical description of quantum OPO networks compared to their classical counterparts. 
A major issue is the identification of specific quantum properties that can enhance the computational performance. One such quantum feature is the non-Gaussian nature of the state <cit.>, which is the focus of this work. Previous work discussed the presence of non-Gaussianity in OPOs close to the threshold <cit.> by observing non-Gaussian statistics in the photon distribution. The emergence of non-Gaussian correlations as the system is driven above the oscillation threshold is one of the key features that is envisioned to improve quantum tunneling during the quantum parallel search <cit.> and thus enhance the OPO-based Ising machines. However, a systematic study of the way non-Gaussianity emerges is missing.

In this work, we report on an extensive analysis of non-Gaussianity in the quantum OPO in different parameter regimes. We model the OPO by a driven-dissipative open quantum system described by a Lindblad master equation accounting for two-photon gain (pump) and subject to one- and two-photon dissipation (intrinsic loss and pump saturation, respectively). We numerically obtain the full density matrix of the system by resorting to an ab initio method, by projection of the master equation onto the Fock (number) basis and subsequent exact diagonalization of the Liouvillian superoperator <cit.>. Non-Gaussianity is first quantified as a function of the pump amplitude by comparing three different metrics: the Hilbert-Schmidt distance <cit.>, the quantum relative entropy s(ρ̂) <cit.>, and the photon distribution <cit.>. Then, non-Gaussianity is studied in the presence of a one-photon drive (additive field) by computing the quantum relative entropy as a function of both pump amplitude and applied field strength.

We find that, while the quantum state is well described by a Gaussian state for sufficiently low pump, non-Gaussianity dominates above threshold. Specifically, increasing the pump causes a monotonic growth of non-Gaussianity, while increasing the additive field first makes non-Gaussianity grow and then causes a steep decrease, suggesting a restoration of the Gaussian nature of the state for a large additive field.

This paper is organized as follows: In Sec. <ref> we introduce the quantum model of the OPO and review the corresponding classical model. In Sec. <ref>, we discuss our numerical procedure, first addressing the case of zero additive field. We present our numerical results on the Wigner function in Sec. <ref>. The measurements of non-Gaussianity are discussed in Sec. <ref>, and their analysis is extended to the case of a nonzero additive field in Sec. <ref>. We draw our conclusions in Sec. <ref>, and report additional analytical and numerical details in the appendices.

§ THE MODEL

In this section, we introduce our model of the quantum optical parametric oscillator and review, for the sake of completeness, its main properties in the classical (mean-field) limit.

§.§ Quantum master equation

We model the OPO as a driven-dissipative open quantum system described by a density operator ρ̂, obeying the following master equation (ħ=1) <cit.>

d/dt ρ̂ = ℒρ̂(t) = 1/i [Ĥ_0,ρ̂] + 𝒟_1ph(ρ̂) + 𝒟_2ph(ρ̂),

where ℒ is the Liouvillian superoperator. In Eq. (<ref>), we define

Ĥ_0 = i h/8 ((â^†)^2−â^2),

as the Hamiltonian describing two-photon gain (parametric amplification) by a real field of amplitude h>0, and

𝒟_1ph(ρ̂) = g(â ρ̂ â^† − 1/2{â^†â,ρ̂}),

𝒟_2ph(ρ̂) = β/2(â^2 ρ̂ (â^†)^2 − 1/2{(â^†)^2â^2,ρ̂}),

are the dissipators representing one- and two-photon losses.
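Before discussing the loss rates in more detail, we note that the model just defined can be cross-checked numerically with a high-level library. The following minimal sketch (assuming QuTiP is available; parameter values are illustrative and do not reproduce any specific figure of this paper) builds Ĥ_0 and the two dissipators and computes the steady state.

```python
import numpy as np
import qutip as qt

# Minimal sketch of the OPO Lindblad equation: two-photon gain h,
# one-photon loss g, two-photon loss beta/2.  Illustrative parameters.
n_max = 40                                   # Fock-space truncation
g, beta = 1.0, 0.1
h = 1.5 * (2.0 * g)                          # pump at 1.5 x h_th = 2g
a = qt.destroy(n_max)

H0 = 1j * (h / 8.0) * (a.dag()**2 - a**2)    # Hermitian despite the factor i
c_ops = [np.sqrt(g) * a,                     # one-photon dissipator, rate g
         np.sqrt(beta / 2.0) * a**2]         # two-photon dissipator, rate beta/2
rho_ss = qt.steadystate(H0, c_ops)

print("<a† a> =", qt.expect(a.dag() * a, rho_ss))
print("(h/2 - g)/beta =", (h / 2.0 - g) / beta)   # classical estimate, see below
```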
The two dissipative processes describe the intrinsic cavity loss (quantified by g>0) and the nonlinear saturation (quantified by β>0), respectively. In Eqs. (<ref>) and (<ref>), â (â^†) is the photon annihilation (creation) operator, obeying the bosonic commutation relations [â,â^†]=1 and [â,â]=0.

§.§ Classical limit

From Eq. (<ref>), we obtain the equation of motion for â from the adjoint master equation

dâ/dt = h/4 â^† − g/2 â − β/2 â^†â^2.

By taking the mean-field approximation â→⟨â⟩≡ A in Eq. (<ref>), the classical equations of motion describing the dynamics of the complex OPO amplitude A are obtained <cit.>

dA/dt = h/4 A^* − 1/2(g+β|A|^2)A.

When the pump amplitude h is below the classical oscillation threshold value h_th=2g, the dynamics in Eq. (<ref>) suppresses both the real and imaginary parts of A (respectively Re[A] and Im[A]). The only fixed point of the dynamics (defined by the condition dA̅/dt=0, where the overline denotes the steady-state value) is the origin of the complex plane, i.e., Re[A̅]=Im[A̅]=0. When the pump amplitude is driven above threshold (h>h_th), the origin becomes a saddle point, giving rise to two symmetric stable fixed points on the real axis through a pitchfork bifurcation <cit.>. The amplitude at these nontrivial fixed points from Eq. (<ref>) is readily found:

Re[A̅]=±√(1/β(h/2−g)), Im[A̅]=0.

Above threshold, the system converges to the fixed point in Eq. (<ref>) with the sign determined by the initial condition A(t=0), a phenomenology that is reminiscent of spontaneous ℤ_2 (Ising) symmetry breaking.

§ MASTER EQUATION IN THE FOCK BASIS

We now discuss the numerical solution of the quantum master equation in Eq. (<ref>). Our goal is to find the exact density operator ρ̂, from which any observable can be measured. To this end, we proceed by using an ab initio method as follows. We choose the basis of Fock (number) states for the bosonic Hilbert space ℋ=span{|n⟩}_{n=0}^∞ to represent ρ̂ as an (infinite) real positive-definite matrix with elements ρ_mn≡⟨m|ρ̂|n⟩, so that

ρ̂ = ∑_{m,n=0}^∞ ρ_mn |m⟩⟨n|.

The projection of Eq. (<ref>) onto the Fock states allows us to obtain the equations of motion for all the elements ρ_mn in the following tensor form:

d/dt ρ_mn = ∑_{r,s=0}^∞ ℒ^rs_mn ρ_rs,

where the nonzero elements of the Liouvillian tensor ℒ^rs_mn are the projected right-hand side of Eq. (<ref>) and are reported in Appendix <ref>.

While in general the Fock states are unbounded from above, in our numerics we truncate the Hilbert space up to n_max−1 particles, i.e., ℋ=span{|n⟩}_{n=0}^{n_max−1}, in order to represent operators (superoperators) as matrices (tensors) of finite size <cit.>. In particular, ρ_mn and ℒ^rs_mn are a n_max×n_max matrix and a n_max×n_max×n_max×n_max tensor, respectively. The steady-state density matrix ρ̅_mn, found from Eq. (<ref>) as customary by imposing dρ̅/dt=ℒρ̅=0, is obtained by the exact diagonalization of ℒ^rs_mn, reshaped as a matrix, as the eigenvector of the Liouvillian associated with the zero eigenvalue <cit.>.

Physically, the truncation of the Hilbert space is possible thanks to the presence of the nonlinear saturation dissipator 𝒟_2ph(ρ̂) in Eq. (<ref>), which naturally sets an upper bound for the average number of photons ⟨â^†â⟩ in the system, approximately given by the squared classical fixed-point amplitude in Eq. (<ref>): ⟨â^†â⟩≃(h/2−g)/β. Therefore, to have a faithful representation of ρ̂ on the truncated Hilbert space, it is sufficient to choose n_max such that |ρ_mn|<ϵ with ϵ vanishingly small, for all m,n>n_max.
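The exact-diagonalization procedure just outlined can be condensed into a few lines of NumPy. The sketch below is a minimal illustration under stated assumptions, not the production code behind the figures: the vectorization uses the standard column-stacking identity vec(AXB)=(B^T⊗A)vec(X), the Liouvillian is assembled as an n_max^2×n_max^2 matrix, and the steady state is the eigenvector with eigenvalue closest to zero. The truncation criterion above can then be checked directly on the resulting ρ_mn.

```python
import numpy as np

# Ab-initio sketch: Fock-basis Liouvillian and its zero-eigenvalue
# eigenvector (the steady state).  Illustrative parameters.
n_max, g, beta = 30, 1.0, 0.1
h = 1.5 * (2.0 * g)

a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)    # annihilation operator
ad = a.T                                          # creation operator
I = np.eye(n_max)
H = 1j * (h / 8.0) * (ad @ ad - a @ a)

def dissipator(L):
    """Superoperator of L rho L† - (1/2){L†L, rho} (column stacking)."""
    LdL = L.conj().T @ L
    return (np.kron(L.conj(), L)
            - 0.5 * np.kron(I, LdL)
            - 0.5 * np.kron(LdL.T, I))

Liou = -1j * (np.kron(I, H) - np.kron(H.T, I))    # (1/i)[H, rho] part
Liou = Liou + dissipator(np.sqrt(g) * a) + dissipator(np.sqrt(beta / 2.0) * (a @ a))

vals, vecs = np.linalg.eig(Liou)
rho = vecs[:, np.argmin(np.abs(vals))].reshape((n_max, n_max), order="F")
rho = rho / np.trace(rho)                         # enforce Tr[rho] = 1
print("<a† a> =", np.real(np.trace(ad @ a @ rho)))
print("tail check:", np.max(np.abs(rho[-5:, -5:])))   # |rho_mn| small at cutoff
```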
We checked that this condition is ensured for ρ̂ in all our numerical simulations.

§ WIGNER FUNCTION

A useful observable that can be measured from the numerically obtained density matrix ρ_mn is the Wigner quasi-probability distribution function W(z), which provides a representation of the quantum state in the complex quadrature space z=(X+iP)/√2, where X and P are the position and momentum coordinates, respectively. The Wigner function is defined as the complex Fourier transform of the characteristic function χ(ξ)=Tr[D̂_ξ ρ̂], where D̂_ξ=e^{ξâ^†−ξ^*â} is the displacement operator, i.e., W(z)=1/π^2 ∫_ℂ d^2ξ e^{zξ^*−z^*ξ} χ(ξ), where the integral extends over the complex plane. Using a series of identities, one can show that the Wigner function is equivalently rewritten as <cit.>

W(z) = 2/π Tr[D̂_2z e^{iπ â^†â} ρ̂].

Equation (<ref>) is particularly useful when ρ̂ is represented in the Fock basis. Indeed, by using the resolution of the identity ∑_{n=0}^∞ |n⟩⟨n|=1̂, one has

W(z) = 2/π ∑_{m,n} (−1)^m ⟨n|D̂_2z|m⟩ ρ_mn.

The matrix representation of the displacement operator in the Fock basis, ⟨n|D̂_z|m⟩, is reported in Appendix <ref>.

The numerical results on the Wigner function are shown in Fig. <ref>, where we plot W(z) as a colormap in the Im[z] vs. Re[z] plane for different values of the pump amplitude h, relative to the classical oscillation threshold h_th=2g, on which we overlap the classical fixed points from Eq. (<ref>) as green and black dots for stable and unstable points, respectively. We see that below threshold, the Wigner function consists of one lobe centered about the stable origin, and it develops a symmetric two-lobe structure on the real axis around the origin (which becomes a saddle after the pitchfork bifurcation) as the pump amplitude is driven above the classical threshold. From the quantum point of view, such a symmetric Wigner function signifies that the state is found on each lobe with equal probability. The corresponding classical behavior is explained by the fact that the two stable fixed points are equally attractive, i.e., their basins of attraction are of equal size, so that the probability of converging to either fixed point is the same for random initial conditions close to the origin.

It is known that the quantum state of a sub-threshold OPO is a squeezed state <cit.>, which is a Gaussian state (in particular, for zero pump the system is in the vacuum state). Instead, the two-lobe structure of W(z) is a clear indication of the non-Gaussian nature of the state above threshold. Specifically, far above threshold W(z) resembles two symmetric Gaussian lobes, suggesting that the quantum state tends to a mixture of coherent states: ρ̂≃(|α⟩⟨α|+|−α⟩⟨−α|)/2 with |α|≃A̅ in Eq. (<ref>), which is indeed non-Gaussian <cit.>. A natural question therefore arises: How does non-Gaussianity emerge from the Gaussian state as the pump amplitude is driven from below to above threshold?

§ MEASUREMENTS OF NON-GAUSSIANITY

In this section, we quantify the deviation from Gaussianity of the quantum state ρ̂ as the system crosses the oscillation threshold by comparing three different metrics: the degree of non-Gaussianity δ(ρ̂) based on the Hilbert-Schmidt distance <cit.>, the quantum relative entropy s(ρ̂) <cit.>, and the non-Gaussianity Q(ρ̂) extracted from the photon distribution of ρ̂. All these metrics quantify the deviation of the actual quantum state ρ̂ from a reference state τ̂, defined as the Gaussian state having the same first and second moments (covariance matrix) as ρ̂.
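Before constructing τ̂, we note that the displaced-parity form of W(z) given above can be evaluated verbatim. The following sketch (again assuming QuTiP; the grid and parameters are illustrative, and n_max must comfortably exceed |2z|^2 at the largest displacement for the truncation to remain faithful) computes a cut of W(z) along the real axis, where the two lobes develop.

```python
import numpy as np
import qutip as qt

# Sketch of W(z) = (2/pi) Tr[D_{2z} exp(i pi a†a) rho] along Im[z] = 0.
# n_max is chosen large so that the displaced operators stay faithful
# on the truncated space; all parameter values are illustrative.
n_max, g, beta = 80, 1.0, 0.1
h = 1.5 * (2.0 * g)
a = qt.destroy(n_max)
rho = qt.steadystate(1j * (h / 8.0) * (a.dag()**2 - a**2),
                     [np.sqrt(g) * a, np.sqrt(beta / 2.0) * a**2])

parity = (1j * np.pi * a.dag() * a).expm()        # exp(i pi a†a)

def W(z):
    return (2.0 / np.pi) * np.real((qt.displace(n_max, 2.0 * z) * parity * rho).tr())

xs = np.linspace(-3.0, 3.0, 61)
W_cut = np.array([W(x) for x in xs])              # two symmetric lobes appear
print("a lobe peaks near Re z =", xs[np.argmax(W_cut)])
# Note: qutip.wigner uses quadrature variables; its convention differs from
# W(z) here by the scaling x = sqrt(2) Re[z] and a Jacobian factor.
```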
Since τ̂ is Gaussian, the determination of the first moments and covariance matrix of ρ̂ fully determines τ̂.

§.§ Determination of the Gaussian reference state

We first discuss how the state τ̂ is defined. Let us denote by 𝐑̂=(X̂,P̂) the vector of the two quadratures R̂_1≡X̂=(â+â^†)/√2 and R̂_2≡P̂=(â−â^†)/i√2. From the state ρ̂, the first moments ⟨𝐑̂⟩ and the covariance matrix Σ (which is a 2×2 real and symmetric matrix) are found as customary as

⟨R̂_j⟩ = Tr[R̂_j ρ̂], Σ_jk = 1/2 Tr[{ΔR̂_j,ΔR̂_k} ρ̂],

where ΔR̂_j=R̂_j−⟨R̂_j⟩. From our numerical simulations, the first moments and covariance matrix of ρ̂ are readily computed by plugging into Eq. (<ref>) the Fock-state expansion in Eq. (<ref>), with ρ_mn computed as explained before, and by recalling that â|n⟩=√n |n−1⟩ and â^†|n⟩=√(n+1) |n+1⟩. Let us observe that, due to the ℤ_2 symmetry, which translates in phase space into invariance under the inversion 𝐑̂→−𝐑̂, the first moments are in our case zero, and thus the computation of the covariance matrix simplifies to Σ_jk=1/2 Tr[{R̂_j,R̂_k} ρ̂].

As said before, the computed first moments and covariance matrix of ρ̂ are by construction the same as those of τ̂. Since the generic single-mode Gaussian state is given by the displaced squeezed thermal state <cit.>

τ̂ = D̂_α Ŝ(ξ) ρ̂_th(n̄) Ŝ^†(ξ) D̂^†_α,

with complex α and ξ, and n̄≥0, where the squeezing operator is Ŝ(ξ)=e^{(ξ^* ââ−ξ â^†â^†)/2} and the thermal state with average number of thermal particles n̄ is

ρ̂_th(n̄) = ∑_{n=0}^∞ f_n |n⟩⟨n|, f_n = n̄^n/(n̄+1)^{n+1},

the state τ̂ is determined by finding α, ξ, and n̄ from ⟨X̂⟩, ⟨P̂⟩, and Σ of ρ̂.

The displacement α affects the first moments only, and one has Re[α]=⟨X̂⟩/√2 and Im[α]=⟨P̂⟩/√2. In our case, since the first moments are zero, one readily has α=0 and thus D̂_α=1̂. Instead, the squeezing ξ and the thermal number of photons n̄ affect the covariance matrix only, whose form is reviewed in Appendix <ref>. From our numerical simulations, we observe that the covariance matrix Σ of ρ̂ (and thus of τ̂) is a diagonal matrix with Σ_11>Σ_22. From Appendix <ref>, it follows that τ̂ is defined with real ξ and n̄ given by

ξ = −1/4 log(Σ_11/Σ_22), n̄ = √(Σ_11Σ_22)−1/2.

We recall that n̄ is related to the symplectic eigenvalue ν of Σ by ν=n̄+1/2=√(Σ_11Σ_22)=√(det[Σ]) <cit.>. The Fock representation of τ̂ in Eq. (<ref>) with α=0 is

τ_mn = ∑_{v=0}^∞ f_v ⟨m|Ŝ(ξ)|v⟩⟨v|Ŝ^†(ξ)|n⟩,

which, with ξ and n̄ in Eq. (<ref>), is a real and symmetric matrix, where f_v is as in Eq. (<ref>) and the expression of ⟨v|Ŝ^†(ξ)|n⟩=(⟨n|Ŝ(ξ)|v⟩)^* is reported in Appendix <ref>.

§.§ Non-Gaussianity by the Hilbert-Schmidt distance

A natural way to quantify the deviation of ρ̂ from Gaussianity is via the operator distance between ρ̂ and τ̂ in the Hilbert-Schmidt metric <cit.>

D_HS(ρ̂,τ̂) = √(Tr[(ρ̂−τ̂)^2]) = √(Tr[ρ̂^2]+Tr[τ̂^2]−2Tr[ρ̂ τ̂]),

where the purities of ρ̂ in Eq. (<ref>) and of τ̂ in Eq. (<ref>) are (see also Appendix <ref>)

Tr[ρ̂^2] = ∑_{m,n=0}^∞ ρ_mn^2, Tr[τ̂^2] = 1/(2n̄+1).

Moreover,

Tr[ρ̂ τ̂] = ∑_{m,n=0}^∞ ρ_mn τ_mn

denotes the scalar product (overlap) between ρ̂ and τ̂ (recall that both ρ_mn and τ_mn are real and symmetric matrices). From Eq. (<ref>), the degree of non-Gaussianity is defined as <cit.>

δ(ρ̂) ≡ D^2_HS(ρ̂,τ̂)/(2Tr[ρ̂^2]).

Notice that, in order to numerically compute the purities of ρ̂ and τ̂, it is sufficient to determine ρ̂ in the Fock basis, from which Σ and thus n̄ in Eq. (<ref>) are computed. Instead, the numerical computation of the overlap Tr[ρ̂ τ̂] also requires the Fock representation of τ̂ in Eq. (<ref>).
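The construction of τ̂ and the evaluation of δ(ρ̂) translate directly into code. In the sketch below (QuTiP-based and illustrative; note that QuTiP's squeeze operator uses the same sign convention as Ŝ(ξ) above), the covariance matrix of the numerically obtained steady state fixes ξ and n̄ through Eq. (<ref>), after which the purities and the overlap give δ(ρ̂).

```python
import numpy as np
import qutip as qt

# Sketch of the Gaussian reference tau = S(xi) rho_th(nbar) S†(xi) and of
# delta(rho) = Tr[(rho - tau)^2] / (2 Tr[rho^2]).  Illustrative parameters.
n_max, g, beta = 40, 1.0, 0.1
h = 1.5 * (2.0 * g)
a = qt.destroy(n_max)
rho = qt.steadystate(1j * (h / 8.0) * (a.dag()**2 - a**2),
                     [np.sqrt(g) * a, np.sqrt(beta / 2.0) * a**2])

X = (a + a.dag()) / np.sqrt(2.0)
P = (a - a.dag()) / (1j * np.sqrt(2.0))
S11 = np.real(qt.expect(X * X, rho))   # first moments vanish by Z2 symmetry,
S22 = np.real(qt.expect(P * P, rho))   # and Sigma is diagonal here

xi = -0.25 * np.log(S11 / S22)              # squeezing parameter
nbar = np.sqrt(S11 * S22) - 0.5             # thermal occupation
Sq = qt.squeeze(n_max, xi)                  # same convention as S(xi) above
tau = Sq * qt.thermal_dm(n_max, nbar) * Sq.dag()

delta = np.real(((rho - tau)**2).tr()) / (2.0 * np.real((rho * rho).tr()))
print(f"xi = {xi:.3f},  nbar = {nbar:.3f},  delta(rho) = {delta:.4f}")
```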
§.§ Quantum relative entropy

Another observable that quantifies the non-Gaussian nature of the quantum state is provided by the quantum relative entropy between the actual state ρ̂ and its Gaussian reference state τ̂ <cit.>

s(ρ̂) ≡ S(τ̂)−S(ρ̂),

where S(ρ̂)=−Tr[ρ̂ log(ρ̂)] is the von Neumann entropy. For the state ρ̂ in Eq. (<ref>), the von Neumann entropy is defined in terms of the eigenvalues λ_k≥0 of ρ_mn as

S(ρ̂) = −∑_{k=0}^∞ λ_k log(λ_k).

Instead, the von Neumann entropy of τ̂ readily follows from the diagonal representation of the thermal state in Eq. (<ref>), i.e., S(τ̂)=−∑_{n=0}^∞ f_n log(f_n), which explicitly reads <cit.>

S(τ̂) = (n̄+1)log(n̄+1)−n̄ log(n̄).

The fact that Eq. (<ref>) defines an exact distance-type measure of non-Gaussianity was shown in Ref. <cit.>.

§.§ Euclidean distance between photon distributions

While the degree of non-Gaussianity and the quantum relative entropy in Eqs. (<ref>) and (<ref>) provide exact measurements to quantify the non-Gaussian nature of the state, they require the full knowledge of the density matrix ρ̂. However, reconstructing ρ̂ requires complex state-tomography techniques that are often unfeasible for large-dimensional systems <cit.>, hampering the experimental measurement of δ(ρ̂) and s(ρ̂). To overcome this problem, we show that it is possible to obtain results similar to those of Eq. (<ref>) from solely the knowledge of the first moments, Σ, and the photon distribution p_n≃ρ_nn. This fact has notable advantages in experiments, since the first and second moments are measured by homodyne detection <cit.>, while ρ_nn is measured by photon counting <cit.>.

The measured Σ of the full quantum state ρ̂ is used to determine the Gaussian target τ̂ in Eq. (<ref>) by determining the squeezing parameter and the average number of thermal particles from Eq. (<ref>). Then, the measured p_n is compared to the photon distribution q_n=τ_nn obtained from the Fock expansion of τ̂ in Eq. (<ref>). We define the deviation from Gaussianity as the squared Euclidean distance between p_n and q_n, i.e.,

Q(ρ̂) ≡ ∑_{n=0}^∞ (p_n−q_n)^2.

As discussed in Ref. <cit.>, p_n is expected to be close to q_n below the oscillation threshold, while deviations from q_n are observed as the threshold is approached, which motivates the choice of Eq. (<ref>) as a measure of non-Gaussianity of the quantum state.

§.§ Numerical results

Figure <ref> shows the degree of non-Gaussianity from our numerical simulations, comparing δ(ρ̂) from Eq. (<ref>) in panel (a), s(ρ̂) from Eq. (<ref>) in panel (b), and Q(ρ̂) from Eq. (<ref>) in panel (c). Data are shown as a function of the pump amplitude h relative to the classical threshold h_th, which is marked in the plot as the vertical dashed black line. Different colors refer to different values of g, as in the legend. The other numerical parameters are β=0.1 and n_max=40. Clearly, when truncating the Hilbert space, all quantities where a summation over the Fock states appears are evaluated by summing up to the Fock state with n_max−1 particles.

As evident from the figure, all measured quantities show the same qualitative picture: They increase monotonically, being very close to zero below threshold and rapidly deviating from zero above threshold. In other words, the quantum state ρ̂ is well approximated by a Gaussian state below threshold, while it becomes highly non-Gaussian above threshold.
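The two remaining metrics follow with a few extra lines. The sketch below repeats the setup of the previous one for self-containment and assumes the above-threshold regime, so that n̄>0 and the logarithms are well defined; as before, all parameter values are illustrative.

```python
import numpy as np
import qutip as qt

# Sketch of s(rho) = S(tau) - S(rho), with S(tau) analytic in nbar, and of
# Q(rho) = sum_n (p_n - q_n)^2 between photon distributions.
n_max, g, beta = 40, 1.0, 0.1
h = 1.5 * (2.0 * g)
a = qt.destroy(n_max)
rho = qt.steadystate(1j * (h / 8.0) * (a.dag()**2 - a**2),
                     [np.sqrt(g) * a, np.sqrt(beta / 2.0) * a**2])

X = (a + a.dag()) / np.sqrt(2.0)
P = (a - a.dag()) / (1j * np.sqrt(2.0))
S11 = np.real(qt.expect(X * X, rho))        # first moments vanish here
S22 = np.real(qt.expect(P * P, rho))
nbar = np.sqrt(S11 * S22) - 0.5             # nbar > 0 above threshold
xi = -0.25 * np.log(S11 / S22)
Sq = qt.squeeze(n_max, xi)
tau = Sq * qt.thermal_dm(n_max, nbar) * Sq.dag()

S_tau = (nbar + 1.0) * np.log(nbar + 1.0) - nbar * np.log(nbar)
s = S_tau - qt.entropy_vn(rho)              # quantum relative entropy
p = np.real(rho.diag())                     # p_n = rho_nn
q = np.real(tau.diag())                     # q_n = tau_nn
Q = float(np.sum((p - q)**2))
print(f"s(rho) = {s:.4f},  Q(rho) = {Q:.6f}")
```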
The fact that the curves at lower g lie below those at higher g is a consequence of the data being plotted as a function of h/h_th.

We remark that, in our numerical simulations, the computation of s(ρ̂) in Eq. (<ref>) is significantly less demanding than that of δ(ρ̂) and Q(ρ̂) in Eqs. (<ref>) and (<ref>), respectively. This is because δ(ρ̂) and Q(ρ̂) require the computation of both ρ̂ and τ̂ in the Fock basis. In fact, determining τ_mn as in Eq. (<ref>) requires at least n_max^3 numerical operations (which reduce to n_max^2 when only the diagonal elements of τ̂ are needed) when α=0 in Eq. (<ref>). In the general case, when α≠0, the number of operations to determine τ_mn increases to n_max^5 (the additional n_max^2 operations come from the displacement operator).

In addition, the calculation of τ_mn is strongly affected by the truncation of the Hilbert space (because the unitarity of the displacement and squeezing operators, as well as the proper normalization of the thermal state, strictly hold only when the Hilbert space has infinite dimension), and therefore the numerical calculation of δ(ρ̂) and Q(ρ̂) intrinsically carries an additional source of truncation error. This additional truncation error is reduced by increasing n_max until no appreciable change in the numerical results is observed. In our numerics, we indeed checked that no appreciable change of the data occurred for n_max>40. Computing s(ρ̂) instead needs only ρ̂, since $\bar n$ in Eq. (<ref>) is also found from the covariance matrix of ρ̂ as in Eq. (<ref>), which makes its computation not only less demanding but also more accurate than the other two metrics shown in Fig. <ref>.

§ INCLUSION OF AN ADDITIVE FIELD

In this section, we analyze the non-Gaussianity of the quantum state in the presence of an additive field. This is done by adding to the parametric gain Hamiltonian in Eq. (<ref>) the one-photon field
\[
\hat H_F = iF(\hat a^\dagger-\hat a),
\]
where F∈ℝ quantifies the external field strength. The additional terms in the Lindbladian tensor in Eq. (<ref>) due to the presence of Ĥ_F are reported in Eq. (<ref>) of Appendix <ref>.

In the adjoint master equation in Eq. (<ref>) and in its classical limit in Eq. (<ref>), the applied field Ĥ_F in Eq. (<ref>) adds the extra term F to the right-hand sides, i.e., a term that is not multiplied by â or A, respectively. This kind of additive field is relevant to Ising machines because it is envisioned to simulate applied fields in the simulated Ising model fully optically (see Ref. <cit.> for a recent work in an electronic oscillator network), without the need for electronic feedback mechanisms <cit.>, therefore preserving the quantum nature of the state.

The presence of the applied field breaks the inversion symmetry 𝐑→-𝐑 in phase space (i.e., it polarizes the system), which is manifest in the Wigner function and the classical fixed points of the equations of motion in Fig. <ref>. As evident from the figure, the Wigner function loses the symmetric two-lobe structure found in Fig. <ref>, which is for F=0. In particular, a positive (negative) F enhances the lobe at Re[z]>0 (Re[z]<0) in the complex plane, while suppressing the opposite one. In terms of the classical fixed points, the saddle (black, which corresponds to the origin for F=0) gradually approaches the attractor (green) at Re[z]<0 for F>0 (or Re[z]>0 for F<0) as |F| increases, until the two fixed points eventually collide and annihilate via a saddle-node bifurcation.
After this bifurcation, only the attractor at Re[z]>0 for F>0 (or Re[z]<0 for F<0) remains. In this parameter regime, the system is fully polarized, deterministically converging to the only remaining attractor.

We therefore see that the pump amplitude h and the applied field F play two antagonistic roles: the former tends to stabilize a state with two (possibly symmetric) configurations, while the latter induces an imbalance, eventually polarizing (phase locking) the system into a single configuration. A natural question is how the combined effect of h and F influences the non-Gaussian character of the quantum state.

To answer this question, we extend the analysis of the non-Gaussianity measures of Sec. <ref> to the case F≠0. Since a nonzero F induces a displacement in phase space, the first moments ⟨X̂⟩ and ⟨P̂⟩ are now nonzero, which in turn implies that α≠0 in the target Gaussian state τ̂ in Eq. (<ref>). Following the discussion in Sec. <ref>, to keep the numerical complexity of the problem reasonable, we here quantify non-Gaussianity solely by the quantum relative entropy s(ρ̂) in Eq. (<ref>). This choice is further supported by Fig. <ref>, which shows that the relative entropy provides qualitatively the same information as the other two metrics.

The numerical result for s(ρ̂) at different values of h and F is shown as a colormap in panel (a) of Fig. <ref>. Our numerical results highlight a nontrivial interplay between h and F. Indeed, while increasing h causes a monotonic growth of s(ρ̂) at any F, generalizing the result in Fig. <ref> for F=0, increasing |F| causes the quantum relative entropy to vary non-monotonically: starting from F=0 (green dashed line in the figure), it first increases, reaching a maximum at nonzero |F|, and then rapidly decreases. This behaviour is exemplified in panel (c), where a vertical cut of s(ρ̂) at fixed h is shown. From this analysis, we conclude that the parametric gain tends to drive the system into a regime of emerging non-Gaussianity. On the contrary, increasing F above a certain value restores the Gaussian nature of the state.

§ CONCLUSIONS

In this paper, we provided a detailed ab initio numerical analysis of the emergence of non-Gaussianity in the steady state of the single quantum optical parametric oscillator (OPO). We modeled the dynamical evolution of the system by a Lindblad master equation, where the Hermitian part described two-photon gain (parametric amplification), and the dissipation accounted for one- and two-photon losses, quantifying the intrinsic loss and the amplitude-saturation nonlinearity, respectively. The full steady-state density matrix of the system was found by exact diagonalization of the Liouvillian tensor resulting from the projection of the master equation onto the Fock (number) basis.

We first showed the Wigner function for different values of the pump amplitude, and then discussed the measurement of non-Gaussianity from the density matrix, comparing three different quantities: the degree of non-Gaussianity from the Hilbert-Schmidt distance, the quantum relative entropy, and the non-Gaussianity from the covariance matrix and photon number distribution. By scanning the pump amplitude from zero to twice the classical oscillation threshold, we revealed that all measured quantities increase monotonically with the pump amplitude, being close to zero below threshold and rapidly increasing above threshold.
This result provides clear quantitative evidence of how the steady state of the quantum OPO deviates from Gaussianity close to threshold and becomes highly non-Gaussian for large gain.

We then extended the calculation of the Wigner function and quantum relative entropy to the quantum OPO in the presence of an additive field (one-photon drive). Our numerics pointed out a nontrivial interplay between the parametric pump and the additive field. Specifically, raising the pump amplitude generates a monotonic growth of non-Gaussianity, while a nonzero field first causes the non-Gaussianity to grow and then gives rise to a steep decrease for increasing field strength, suggesting the restoration of the Gaussian nature of the state.

Our work opens the future perspective of studying, without approximation, how the quantum properties of small OPO networks, such as non-Gaussianity and quantum entanglement, evolve in different parameter regimes. Indeed, even if the ab initio method used here becomes exponentially more demanding as the number of OPOs increases, it remains usable for systems of only a few OPOs. Previous studies reported on the presence of quantum correlations in OPO networks using phase-space methods like the positive P-representation <cit.>. An interesting perspective is to compare previous results with those obtainable from our ab initio method, as well as from lattice approaches similar to matrix-product-state or density-matrix-renormalization-group methods <cit.>.

We thank Cristiano Ciuti, Simone Felicetti, and Jacopo Tosca for useful discussions. C.C. acknowledges support from CN1 Quantum PNRR MUR CN 0000013 HPC.

§ LIOUVILLIAN TENSOR IN THE FOCK BASIS

In this appendix, we explicitly report the expression of the nonzero elements of the Liouvillian superoperator ℒ in Eq. (<ref>) projected onto the Fock basis. By recalling that the action of the annihilation and creation operators on the Fock states is $\hat a|n\rangle=\sqrt{n}\,|n-1\rangle$ and $\hat a^\dagger|n\rangle=\sqrt{n+1}\,|n+1\rangle$, and the definition $\rho_{mn}=\langle m|\hat\rho|n\rangle$, one has the projected Hermitian term
\[
\frac{1}{i}\,\langle m|[\hat H_0,\hat\rho]|n\rangle
= \frac{h}{8}\Big(\sqrt{m(m-1)}\,\rho_{m-2,n}-\sqrt{(m+1)(m+2)}\,\rho_{m+2,n}
+\sqrt{n(n-1)}\,\rho_{m,n-2}-\sqrt{(n+1)(n+2)}\,\rho_{m,n+2}\Big).
\]
The projected one-photon dissipator in Eq. (<ref>) reads
\[
\langle m|\mathcal D_{\rm 1ph}(\hat\rho)|n\rangle
= g\left(\sqrt{(m+1)(n+1)}\,\rho_{m+1,n+1}-\frac{m+n}{2}\,\rho_{mn}\right),
\]
and the projected two-photon dissipator is
\[
\langle m|\mathcal D_{\rm 2ph}(\hat\rho)|n\rangle
= \frac{\beta}{2}\sqrt{(m+1)(m+2)(n+1)(n+2)}\,\rho_{m+2,n+2}
-\beta\,\frac{m(m-1)+n(n-1)}{4}\,\rho_{mn}.
\]
Without the additive field [i.e., F=0 in Ĥ_F in Eq. (<ref>)], the nonzero elements of ℒ^rs_mn are therefore at (r,s)=(m,n), (m±2,n), (m,n±2), (m+1,n+1), and (m+2,n+2), whose expressions are retrieved from Eqs. (<ref>)-(<ref>). The inclusion of F≠0 adds to the right-hand side of Eq. (<ref>), and therefore of Eq. (<ref>), the term
\[
\frac{1}{i}\,\langle m|[\hat H_F,\hat\rho]|n\rangle
= F\Big(\sqrt{m}\,\rho_{m-1,n}-\sqrt{m+1}\,\rho_{m+1,n}
+\sqrt{n}\,\rho_{m,n-1}-\sqrt{n+1}\,\rho_{m,n+1}\Big),
\]
therefore yielding additional nonzero elements of ℒ^rs_mn at (r,s)=(m±1,n) and (m,n±1). Before diagonalization, ℒ^rs_mn is reshaped into a matrix ℒ_pq, where p=m+n_max n and q=r+n_max s. It is seen from Eqs. (<ref>)-(<ref>) that ℒ_pq is a very sparse matrix, with a density of nonzero elements scaling as 1/n_max^2.
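For illustration, a minimal SciPy sketch (not the authors' code) of how the sparse matrix ℒ_pq could be assembled from the projected matrix elements above; the parameter names follow the main text, and the steady state would then be obtained from the zero eigenvector of this matrix:

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_liouvillian(n_max, h, g, beta, F=0.0):
    # Assemble L_pq with p = m + n_max*n, q = r + n_max*s
    L = lil_matrix((n_max**2, n_max**2), dtype=float)

    def add(m, n, r, s, val):
        # L[(m,n), (r,s)] += val, discarding indices outside the truncation
        if 0 <= r < n_max and 0 <= s < n_max:
            L[m + n_max * n, r + n_max * s] += val

    for m in range(n_max):
        for n in range(n_max):
            # Hermitian two-photon gain term, (1/i)<m|[H0, rho]|n>
            add(m, n, m - 2, n, (h / 8) * np.sqrt(m * (m - 1)))
            add(m, n, m + 2, n, -(h / 8) * np.sqrt((m + 1) * (m + 2)))
            add(m, n, m, n - 2, (h / 8) * np.sqrt(n * (n - 1)))
            add(m, n, m, n + 2, -(h / 8) * np.sqrt((n + 1) * (n + 2)))
            # One-photon dissipator
            add(m, n, m + 1, n + 1, g * np.sqrt((m + 1) * (n + 1)))
            add(m, n, m, n, -g * (m + n) / 2)
            # Two-photon dissipator
            add(m, n, m + 2, n + 2,
                (beta / 2) * np.sqrt((m + 1) * (m + 2) * (n + 1) * (n + 2)))
            add(m, n, m, n, -beta * (m * (m - 1) + n * (n - 1)) / 4)
            # Additive one-photon field, (1/i)<m|[H_F, rho]|n>
            if F != 0.0:
                add(m, n, m - 1, n, F * np.sqrt(m))
                add(m, n, m + 1, n, -F * np.sqrt(m + 1))
                add(m, n, m, n - 1, F * np.sqrt(n))
                add(m, n, m, n + 1, -F * np.sqrt(n + 1))
    return L.tocsr()
```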
§ DISPLACEMENT OPERATOR IN THE FOCK BASIS

The matrix representation of the displacement operator $\hat D_z=e^{z\hat a^\dagger-z^*\hat a}$ in the Fock basis follows from the relations $\hat a|n\rangle=\sqrt{n}\,|n-1\rangle$ and $\hat a^\dagger|n\rangle=\sqrt{n+1}\,|n+1\rangle$, and from the Baker-Campbell-Hausdorff theorem, which allows one to write $\hat D_z=e^{z\hat a^\dagger-z^*\hat a}=e^{-|z|^2/2}\,e^{z\hat a^\dagger}\,e^{-z^*\hat a}$. For m≥n, one can explicitly compute the matrix element
\[
\langle n|\hat D_z|m\rangle=\sqrt{\frac{n!}{m!}}\;e^{-|z|^2/2}\,(-z^*)^{m-n}\,L^{(m-n)}_n(|z|^2),
\]
where $L^{(\alpha)}_n(x)$ is the generalized Laguerre polynomial <cit.>. The element for m<n is found by using the fact that $\hat D_z^\dagger=\hat D_{-z}$, i.e., $\langle n|\hat D_z|m\rangle=(\langle m|\hat D^\dagger_z|n\rangle)^*=(\langle m|\hat D_{-z}|n\rangle)^*$, and therefore one has for m<n
\[
\langle n|\hat D_z|m\rangle=\sqrt{\frac{m!}{n!}}\;e^{-|z|^2/2}\,z^{n-m}\,L^{(n-m)}_m(|z|^2).
\]

§ COVARIANCE MATRIX AND PURITY OF THE SQUEEZED THERMAL STATE

In this appendix, we recall the expression of the covariance matrix Σ_G and purity of the displaced squeezed thermal state $\hat\tau=\hat D_\alpha\,\hat S(\xi)\,\hat\rho_{\rm th}(\bar n)\,\hat S^\dagger(\xi)\,\hat D^\dagger_\alpha$ in Eq. (<ref>), where $\hat D_\alpha=e^{\alpha\hat a^\dagger-\alpha^*\hat a}$ and $\hat S(\xi)=e^{(\xi^*\hat a\hat a-\xi\,\hat a^\dagger\hat a^\dagger)/2}$, and $\hat\rho_{\rm th}(\bar n)$ is as in Eq. (<ref>). As recalled in Sec. <ref>, the covariance matrix of τ̂ is unaffected by the displacement $\hat D_\alpha$. Let us define for simplicity $\xi=r\,e^{i\varphi}$ in terms of its absolute value $r=|\xi|$ and phase $\varphi=\arg(\xi)$. First, one recalls that the covariance matrix $\Sigma_{\rm sqv}(r,\varphi)$ of the squeezed vacuum state $\hat S(\xi)|0\rangle\langle 0|\hat S^\dagger(\xi)$ is given by $\Sigma_{\rm sqv}(r,\varphi)=\mathcal R(\varphi/2)\,\Sigma_{\rm sqv}(r,0)\,\mathcal R^T(\varphi/2)$, where
\[
\mathcal R(\phi)=\begin{pmatrix}\cos\phi & -\sin\phi\\ \sin\phi & \cos\phi\end{pmatrix}
\]
is the rotation matrix, $\Sigma_{\rm sqv}(r,0)=\frac{1}{2}\,\mathrm{diag}(e^{-2r},e^{2r})$, and T denotes transposition. The covariance matrix of the squeezed thermal state readily follows: $\Sigma_{\rm G}=(2\bar n+1)\,\Sigma_{\rm sqv}(r,\varphi)$.

Since the displacement and squeezing operators are unitary and the trace is cyclic, the purity of τ̂ reduces to the purity of the thermal state, i.e., $\mathrm{Tr}[\hat\tau^2]=\mathrm{Tr}[\hat\rho^2_{\rm th}(\bar n)]=\sum_{n=0}^{\infty} f_n^2=1/(2\bar n+1)$.

§ SQUEEZING OPERATOR IN THE FOCK BASIS

In this appendix, we report for the sake of completeness the explicit expression of the matrix representation of the squeezing operator $\hat S(\xi)=e^{(\xi^*\hat a\hat a-\xi\,\hat a^\dagger\hat a^\dagger)/2}$, with $\xi=r\,e^{i\varphi}$, in the Fock basis. One has $\langle n|\hat S(\xi)|m\rangle=0$ for m and n of opposite parity, while for m and n of the same parity
\[
\langle n|\hat S(\xi)|m\rangle=
\begin{cases}
\left(-\dfrac{\zeta}{2}\right)^{\!(n-m)/2} e^{-(\eta/2)(m+1/2)}\,\sqrt{n!\,m!}\,
\displaystyle\sum_{k=0}^{\lfloor m/2\rfloor}\left(-\dfrac{|\zeta|^2 e^{\eta}}{4}\right)^{\!k}\dfrac{1}{(m-2k)!\,k!\,[(n-m)/2+k]!} & (n\ge m),\\[2.5ex]
\left(\dfrac{\zeta^*}{2}\right)^{\!(m-n)/2} e^{-(\eta/2)(n+1/2)}\,\sqrt{m!\,n!}\,
\displaystyle\sum_{k=0}^{\lfloor n/2\rfloor}\left(-\dfrac{|\zeta|^2 e^{\eta}}{4}\right)^{\!k}\dfrac{1}{(n-2k)!\,k!\,[(m-n)/2+k]!} & (n<m),
\end{cases}
\]
where $\zeta=e^{i\varphi}\tanh r$ and $\eta=2\log[\cosh r]$, and ⌊·⌋ is the floor function. This result is derived through a chain of identities, first by using the operator ordering of $\hat S(\xi)$ <cit.>, and then by using $\hat a|n\rangle=\sqrt{n}\,|n-1\rangle$ and $\hat a^\dagger|n\rangle=\sqrt{n+1}\,|n+1\rangle$, similarly to Appendix <ref>. The explicit calculation can also be found in Ref. <cit.>.
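A small sketch, under the truncation caveat discussed in the main text (unitarity holds only as n_max→∞), of how the truncated Fock matrix of Ŝ(ξ) could be evaluated from the closed-form expression above; this would also supply the τ_mn helper assumed earlier (log-factorials via gammaln are used for numerical stability; function name illustrative):

```python
import numpy as np
from scipy.special import gammaln

def squeeze_fock(xi, n_max):
    # <n|S(xi)|m> for 0 <= n, m < n_max; zero for opposite-parity n, m
    r, phi = abs(xi), np.angle(xi)
    zeta = np.exp(1j * phi) * np.tanh(r)
    eta = 2.0 * np.log(np.cosh(r))
    S = np.zeros((n_max, n_max), dtype=complex)
    for n in range(n_max):
        for m in range(n_max):
            if (n - m) % 2:
                continue
            if n >= m:
                pref = (-zeta / 2) ** ((n - m) // 2) * np.exp(-(eta / 2) * (m + 0.5))
                lo = m   # sum runs to floor(m/2)
            else:
                pref = (np.conj(zeta) / 2) ** ((m - n) // 2) * np.exp(-(eta / 2) * (n + 0.5))
                lo = n   # sum runs to floor(n/2)
            pref *= np.exp(0.5 * (gammaln(n + 1) + gammaln(m + 1)))  # sqrt(n! m!)
            tot = 0.0
            for k in range(lo // 2 + 1):
                log_den = gammaln(lo - 2 * k + 1) + gammaln(k + 1) \
                          + gammaln(abs(n - m) // 2 + k + 1)
                tot += (-abs(zeta) ** 2 * np.exp(eta) / 4) ** k / np.exp(log_den)
            S[n, m] = pref * tot
    return S
```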
§ REFERENCES

[1] P. L. McMahon, "The physics of optical computing," Nat. Rev. Phys. 5, 717–734 (2023).
[2] A. Lucas, "Ising formulations of many NP problems," Frontiers in Physics 2, 5 (2014).
[3] Y. Yamamoto, T. Leleu, S. Ganguli, and H. Mabuchi, "Coherent Ising machines—quantum optics and neural network perspectives," Appl. Phys. Lett. 117, 160501 (2020).
[4] S. Kako, T. Leleu, Y. Inui, F. Khoyratee, S. Reifenstein, and Y. Yamamoto, "Coherent Ising machines with error correction feedback," Adv. Quantum Technol. 2020, 2000045 (2020).
[5] T. Leleu, F. Khoyratee, T. Levi, R. Hamerly, T. Kohno, and K. Aihara, "Scaling advantage of chaotic amplitude control for high-performance combinatorial optimization," Commun. Phys. 4, 266 (2021).
[6] K. Tatsumura, M. Yamasaki, and H. Goto, "Scaling out Ising machines using a multi-chip architecture for simulated bifurcation," Nat. Electron. 4, 208–217 (2021).
[7] M. Calvanese Strinati and C. Conti, "Multidimensional hyperspin machine," Nat. Commun. 13, 7248 (2022).
[8] M. Calvanese Strinati and C. Conti, "Hyperscaling in the coherent hyperspin machine," arXiv:2308.02329 (2023).
[9] Y. Inui and Y. Yamamoto, "Entanglement and quantum discord in optically coupled coherent Ising machines," Phys. Rev. A 102, 062419 (2020).
[10] Y. Inui and Y. Yamamoto, "Entanglement and photon anti-bunching in coupled non-degenerate parametric oscillators," Entropy 23, 624 (2021).
[11] S. Kiesewetter and P. D. Drummond, "Coherent Ising machine with quantum feedback: The total and conditional master equation methods," Phys. Rev. A 106, 022409 (2022).
[12] M. Walschaers, "Non-Gaussian quantum states and where to find them," PRX Quantum 2, 030204 (2021).
[13] V. D'Auria, A. Chiummo, M. De Laurentis, A. Porzio, S. Solimeno, and M. G. A. Paris, "Tomographic characterization of OPO sources close to threshold," Opt. Express 13, 948–956 (2005).
[14] V. D'Auria, C. de Lisio, A. Porzio, S. Solimeno, J. Anwar, and M. G. A. Paris, "Non-Gaussian states produced by close-to-threshold optical parametric oscillators: Role of classical and quantum fluctuations," Phys. Rev. A 81, 033846 (2010).
[15] Y. Yamamoto, K. Aihara, T. Leleu, K. Kawarabayashi, S. Kako, M. Fejer, K. Inoue, and H. Takesue, "Coherent Ising machines—Optical neural networks operating at the quantum limit," npj Quantum Information 3, 49 (2017).
[16] F. Minganti, A. Biella, N. Bartolo, and C. Ciuti, "Spectral theory of Liouvillians for dissipative phase transitions," Phys. Rev. A 98, 042118 (2018).
[17] V. V. Dodonov, O. V. Man'ko, V. I. Man'ko, and A. Wünsche, "Hilbert-Schmidt distance and non-classicality of states in quantum optics," J. Mod. Opt. 47, 633–654 (2000).
[18] M. G. Genoni, M. G. A. Paris, and K. Banaszek, "Measure of the non-Gaussian character of a quantum state," Phys. Rev. A 76, 042327 (2007).
[19] M. G. Genoni and M. G. A. Paris, "Quantifying non-Gaussianity for quantum information," Phys. Rev. A 82, 052341 (2010).
[20] M. G. Genoni, M. G. A. Paris, and K. Banaszek, "Quantifying the non-Gaussian character of a quantum state by quantum relative entropy," Phys. Rev. A 78, 060303 (2008).
[21] P. Marian and T. A. Marian, "Relative entropy is an exact measure of non-Gaussianity," Phys. Rev. A 88, 012322 (2013).
[22] H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
[23] H. J. Carmichael, Statistical Methods in Quantum Optics 2: Non-Classical Fields (Springer Berlin Heidelberg, 2010).
[24] M. Calvanese Strinati, L. Bello, A. Pe'er, and E. G. Dalla Torre, "Theory of coupled parametric oscillators beyond coupled Ising spins," Phys. Rev. A 100, 023835 (2019).
[25] H. Goto, "Quantum computation based on quantum adiabatic bifurcations of Kerr-nonlinear parametric oscillators," J. Phys. Soc. Jpn. 88, 061015 (2019).
[26] S. H. Strogatz, Nonlinear Dynamics and Chaos (Perseus Books, Reading, 2007).
[27] P. Kinsler and P. D. Drummond, "Quantum dynamics of the parametric oscillator," Phys. Rev. A 43, 6194–6208 (1991).
[28] N. Bartolo, F. Minganti, W. Casteels, and C. Ciuti, "Exact steady state of a Kerr resonator with one- and two-photon driving and dissipation: Controllable Wigner-function multimodality and dissipative phase transitions," Phys. Rev. A 94, 033841 (2016).
[29] K. E. Cahill and R. J. Glauber, "Ordered expansions in boson amplitude operators," Phys. Rev. 177, 1857–1881 (1969).
[30] K. E. Cahill and R. J. Glauber, "Density operators and quasiprobability distributions," Phys. Rev. 177, 1882–1902 (1969).
[31] G. Milburn and D. F. Walls, "Production of squeezed states in a degenerate parametric amplifier," Opt. Commun. 39, 401–404 (1981).
[32] L.-A. Wu, M. Xiao, and H. J. Kimble, "Squeezed states of light from an optical parametric oscillator," J. Opt. Soc. Am. B 4, 1465–1475 (1987).
[33] A. Serafini, Quantum Continuous Variables: A Primer of Theoretical Methods (CRC Press, 2017).
[34] A. S. Holevo, M. Sohma, and O. Hirota, "Capacity of quantum Gaussian channels," Phys. Rev. A 59, 1820–1828 (1999).
[35] E. Toninelli, B. Ndagano, A. Vallés, B. Sephton, I. Nape, A. Ambrosio, F. Capasso, M. J. Padgett, and A. Forbes, "Concepts in quantum state tomography and classical implementation with intense light: a tutorial," Adv. Opt. Photon. 11, 67–134 (2019).
[36] M. Orszag, Quantum Optics: Including Noise Reduction, Trapped Ions, Quantum Trajectories, and Decoherence (Springer International Publishing, 2016).
[37] M. G. A. Paris and J. Rehacek, Quantum State Estimation, Lecture Notes in Physics (Springer Berlin Heidelberg, 2004).
[38] P. Álvarez, D. Pittilini, F. Miserocchi, S. Raamamurthy, G. Margiani, O. Ameye, J. del Pino, O. Zilberberg, and A. Eichler, "A biased Ising model using two coupled Kerr parametric oscillators with external force," arXiv:2307.13676 (2023).
[39] H. Takesue, K. Inaba, T. Inagaki, T. Ikuta, Y. Yamada, T. Honjo, T. Kazama, K. Enbutsu, T. Umeki, and R. Kasahara, "Simulating Ising spins in external magnetic fields with a network of degenerate optical parametric oscillators," Phys. Rev. Applied 13, 054059 (2020).
[40] A. Gilchrist, C. W. Gardiner, and P. D. Drummond, "Positive P representation: Application and validity," Phys. Rev. A 55, 3014–3032 (1997).
[41] U. Schollwöck, "The density-matrix renormalization group in the age of matrix product states," Ann. Phys. 326, 96–192 (2011).
[42] D. Zwillinger, CRC Standard Mathematical Tables and Formulae (CRC Press, 2002).
[43] S. Barnett and P. M. Radmore, Methods in Theoretical Quantum Optics (Clarendon Press, 2002).
[44] S. Varró, "Coherent and incoherent superposition of transition matrix elements of the squeezing operator," New J. Phys. 24, 053035 (2022).
http://arxiv.org/abs/2312.16530v1
{ "authors": [ "Marcello Calvanese Strinati", "Claudio Conti" ], "categories": [ "quant-ph", "physics.optics" ], "primary_category": "quant-ph", "published": "20231227112013", "title": "Dawn and fall of non-Gaussianity in the quantum parametric oscillator" }
A Comprehensive Survey of Evaluation Techniques for Recommendation Systems

Aryan Jadon (ORCID 0000-0002-2991-9813) and Avinash Patil
Juniper Networks, Sunnyvale, CA, USA
{aryanj,patila}@juniper.net

January 14, 2024
==========================================================================

The effectiveness of recommendation systems is pivotal to user engagement and satisfaction in online platforms. As these recommendation systems increasingly influence user choices, their evaluation transcends mere technical performance and becomes central to business success. This paper addresses the multifaceted nature of recommendation system evaluation by introducing a comprehensive suite of metrics, each tailored to capture a distinct aspect of system performance. We discuss similarity metrics that quantify the precision of content-based and collaborative filtering mechanisms, along with candidate generation metrics which measure how well the system identifies a broad yet pertinent range of items. Following this, we delve into predictive metrics that assess the accuracy of forecasted preferences, ranking metrics that evaluate the order in which recommendations are presented, and business metrics that align system performance with economic objectives.

Our approach emphasizes the contextual application of these metrics and their interdependencies. In this paper, we identify the strengths and limitations of current evaluation practices and highlight the nuanced trade-offs that emerge when optimizing recommendation systems across different metrics. The paper concludes by proposing a framework for selecting and interpreting these metrics to not only improve system performance but also to advance business goals. This work aims to aid researchers and practitioners in critically assessing recommendation systems and to foster the development of more nuanced, effective, and economically viable personalization strategies. Our code is available at https://github.com/aryan-jadon/Evaluation-Metrics-for-Recommendation-Systems.

§ INTRODUCTION

Recommendation systems have become an integral component of the digital landscape, influencing the way we discover products, content, and even social connections. From e-commerce to online streaming, these systems underpin user experience by personalizing content and suggesting items that align with individual preferences. The proliferation of such systems has catalyzed a need for robust evaluation methods, as the efficiency of a recommendation system is pivotal to user satisfaction and business success.

While recommendation systems have become increasingly complex and sophisticated, evaluating their performance remains a challenge. Previous research <cit.>, <cit.> has focused on specific subsets of performance metrics, often tailored to the domain where the recommendation system is applied <cit.>. However, a unified approach to performance evaluation, considering the multifaceted aspects of these systems, is largely missing from the literature.

The performance of recommendation systems is multi-dimensional and cannot be encapsulated by a single metric <cit.>. To comprehensively assess these systems, one must consider a variety of metrics, each offering unique insights into different aspects of system performance.
This paper introduces five key types of metrics that collectively provide a holistic evaluation framework: similarity metrics, candidate generation metrics, predictive metrics, ranking metrics, and business metrics.

Similarity metrics <cit.> are the cornerstone of content-based and collaborative filtering methods, offering a quantitative measure of how closely items or user preferences align. Candidate generation metrics <cit.> ensure a balanced recommendation spectrum, avoiding the pitfalls of overly narrow or excessively broad selections. Predictive metrics <cit.> go a step further, providing an assessment of a system's ability to accurately forecast user ratings or preferences. Ranking metrics <cit.> are critical when the sequence of recommendations is pivotal, evaluating the order in which items are presented to the user. Lastly, business metrics <cit.> connect system performance with tangible business outcomes, such as sales conversion rates or customer engagement levels, ensuring the recommendation system aligns with overarching business objectives.

In deploying these metrics, one must navigate a landscape rife with trade-offs and complementary relationships, as improving one metric could potentially degrade another <cit.>. Therefore, the selection and interpretation of these metrics must be approached with a nuanced understanding of the recommendation system's goals, context, and the characteristics of the dataset being used.

Through the subsequent sections, this paper will delve into each metric type, elucidating their definitions, applications, and significance in evaluating the efficacy of recommendation systems. By dissecting these metrics, we aim to provide a framework that academics, practitioners, and stakeholders can adopt to gauge the success of their recommendation systems, thereby enabling the continuous advancement of personalized user experiences in the digital domain.

§ EVALUATION METRICS

§.§ Similarity Metrics

Similarity metrics are used to measure the likeness or similarity between items, users, or any relevant entities in a recommendation system. These metrics help in identifying items that are similar to each other, which is crucial for various recommendation techniques like content-based filtering and collaborative filtering. Some of the key similarity metrics used in recommendation systems are:

* Cosine Similarity
* Euclidean Distance
* Jaccard Index
* Hamming Distance
* Manhattan Distance
* Chebyshev Distance
* Adjusted Cosine Similarity
* Pearson Correlation Coefficient
* Spearman Rank Order Correlation Coefficient

§.§.§ Cosine Similarity

Cosine similarity <cit.> is a measure used to determine the similarity between two non-zero vectors in an n-dimensional space, capturing how closely related they are in orientation. It is calculated as the dot product of the vectors divided by the product of their magnitudes. The formula for cosine similarity is:
\[
\cos(\theta) = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\;\sqrt{\sum_{i=1}^{n} B_i^2}}
\]
This metric ranges from -1 (exactly opposite) to 1 (the same), with 0 typically indicating orthogonality or no similarity.
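As a minimal NumPy sketch of the formula above (the rating vectors below are invented for illustration):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cos(theta) = (a . b) / (||a|| * ||b||); assumes non-zero vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: two users' rating vectors over the same five items
u = np.array([5.0, 3.0, 0.0, 1.0, 4.0])
v = np.array([4.0, 0.0, 0.0, 1.0, 5.0])
print(cosine_similarity(u, v))  # close to 1 => similar orientation of preferences
```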
§.§.§ Euclidean Distance

Euclidean distance <cit.> is a widely used distance metric that gauges the straight-line distance between two points in Euclidean space. It is calculated by taking the square root of the sum of the squared differences between corresponding elements of the vectors. The formula for the Euclidean distance between two points, A and B, in an n-dimensional space is:
\[
d(A, B) = \sqrt{\sum_{i=1}^{n} (A_i - B_i)^2}
\]
This metric ranges from 0 to infinity, where 0 indicates that the points are identical.

§.§.§ Jaccard Index

The Jaccard Index <cit.>, also known as the Jaccard similarity coefficient, is a statistic used for gauging the similarity and diversity of sample sets. It is defined as the size of the intersection divided by the size of the union of two sets. For two sets A and B, the Jaccard Index J is given by the formula:
\[
J(A, B) = \frac{|A \cap B|}{|A \cup B|}
\]
Here, $|A \cap B|$ is the number of elements common to both sets, while $|A \cup B|$ is the total number of distinct elements present in either set. The Jaccard Index ranges from 0 to 1, where 0 means there is no overlap between the sets and 1 indicates that the sets are identical.

§.§.§ Hamming Distance

Hamming Distance <cit.> is a metric for comparing two strings of equal length, quantifying the number of positions at which the corresponding symbols differ. It measures the minimum number of substitutions required to change one string into the other, which also corresponds to the minimum number of errors that could have transformed one string into the other. In the context of recommendation systems, it can be particularly useful for comparing user preferences or item characteristics represented as binary vectors. The Hamming Distance $d_H$ between two strings (or binary vectors) A and B is calculated as:
\[
d_H(A, B) = \sum_{i=1}^{n} [\,A_i \neq B_i\,]
\]
where n is the length of the strings, and $[\,A_i \neq B_i\,]$ is an indicator function equal to 1 if $A_i$ and $B_i$ differ, and 0 if they are the same.

§.§.§ Manhattan Distance

Manhattan Distance <cit.>, also known as City Block Distance, measures the sum of the absolute differences between the coordinates of a pair of objects. It is mathematically defined as:
\[
D_{\mathrm{Manhattan}}(A, B) = \sum_{i=1}^{n} |A_i - B_i|
\]
where A and B are two vectors in n-dimensional space, and $|A_i - B_i|$ denotes the absolute difference between the i-th components of A and B.

§.§.§ Chebyshev Distance

Chebyshev Distance <cit.>, named after Pafnuty Chebyshev, is a distance metric used in multi-dimensional spaces, often in data science and game theory. It represents the maximum difference along any single dimension between two points. For two points $\mathbf{p} = (p_1, p_2, \dots, p_n)$ and $\mathbf{q} = (q_1, q_2, \dots, q_n)$ in an n-dimensional space, the Chebyshev Distance is defined as:
\[
D_{\mathrm{Chebyshev}}(\mathbf{p}, \mathbf{q}) = \max_i |p_i - q_i|
\]
Here, $\max_i$ signifies taking the maximum of the absolute differences along each dimension. This metric is especially useful in scenarios where the greatest single difference is more significant than the sum of all differences, such as in chess, to calculate the minimum moves for a king, or in clustering and classification tasks in data analytics.
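A short sketch of the two set- and string-based metrics above, with invented interaction data:

```python
def jaccard_index(a: set, b: set) -> float:
    # |A ∩ B| / |A ∪ B|; here A and B are sets of item IDs two users interacted with
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def hamming_distance(a: str, b: str) -> int:
    # Number of positions at which two equal-length binary strings differ
    if len(a) != len(b):
        raise ValueError("inputs must have equal length")
    return sum(x != y for x, y in zip(a, b))

print(jaccard_index({"i1", "i2", "i3"}, {"i2", "i3", "i4"}))  # 0.5
print(hamming_distance("10110", "11100"))                     # 2
```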
§.§.§ Adjusted Cosine Similarity

Adjusted Cosine Similarity <cit.> is a variation of the traditional cosine similarity that accounts for user rating biases. It adjusts the rating vectors for each user by subtracting the user's average rating before applying the cosine similarity formula, which can enhance recommendation performance by normalizing user ratings and centering them around zero. The formula for Adjusted Cosine Similarity is:
\[
AC(\theta) = \frac{\sum_{i=1}^{n} (R_{u,i} - \bar R_u)(R_{v,i} - \bar R_v)}{\sqrt{\sum_{i=1}^{n} (R_{u,i} - \bar R_u)^2}\;\sqrt{\sum_{i=1}^{n} (R_{v,i} - \bar R_v)^2}}
\]
where $R_{u,i}$ and $R_{v,i}$ are the ratings given by user u and user v to item i, respectively, and $\bar R_u$ and $\bar R_v$ are the average ratings of user u and user v, respectively.

§.§.§ Pearson Correlation Coefficient

The Pearson Correlation Coefficient (PCC) <cit.>, denoted as r, is a measure of linear correlation between two sets of data, yielding a value between -1 and 1. A value of 1 implies a perfect positive correlation, -1 a perfect negative correlation, and 0 no correlation at all. For two vectors X and Y, each with n elements, the PCC is calculated as:
\[
r = \frac{\sum_{i=1}^{n}(X_i - \bar X)(Y_i - \bar Y)}{\sqrt{\sum_{i=1}^{n}(X_i - \bar X)^2}\;\sqrt{\sum_{i=1}^{n}(Y_i - \bar Y)^2}}
\]
where $\bar X$ and $\bar Y$ are the means of the X and Y vectors, respectively.

In Table 1, we delineate specific scenarios for the application of various similarity metrics. This table serves as a concise guide, assisting in selecting the appropriate metric in alignment with the particular characteristics and requirements of each use case. Through this structured presentation, we aim to ease the decision-making process for choosing the most suitable similarity metric, tailored to the needs of distinct scenarios.

§.§ Candidate Generation Metrics

Candidate generation metrics play a pivotal role in the efficacy of recommendation systems, acting as the backbone for filtering and presenting the most relevant options to users. At their core, these metrics evaluate the algorithms that sift through vast datasets, identifying potential items or services that align closely with a user's preferences, search history, and behavioral patterns. This initial step is crucial, as it directly influences the quality and relevance of the recommendations presented to the user.

By efficiently narrowing down the pool of candidates from potentially millions to a manageable few, these metrics not only enhance the user experience by providing targeted and personalized recommendations but also significantly improve computational efficiency. Furthermore, well-calibrated candidate generation metrics help avoid information overload, ensuring that users are not overwhelmed by too many choices, which can lead to decision paralysis. In essence, these metrics are indispensable for creating a tailored, user-centric approach in recommendation systems, leading to increased user engagement, satisfaction, and, ultimately, retention. Some of the key candidate generation metrics used in recommendation systems are:

* Novelty
* Diversity
* Serendipity
* Catalog Coverage
* Distributional Coverage

§.§.§ Novelty

Novelty <cit.> in recommendation systems measures how unexpected the recommended items are to users, focusing on less-known items. Mathematically, for a set of recommended items $R_u$ to user u, novelty is:
\[
\mathrm{Novelty}(R_u) = \frac{1}{|R_u|}\sum_{i \in R_u}\bigl(1 - \mathrm{popularity\_score}(i)\bigr)
\]
Here, $\mathrm{popularity\_score}(i)$ is the normalized popularity of item i, calculated as the ratio of users who interacted with i to the maximum popularity in the catalog. This approach inversely relates the popularity of items to novelty, promoting less popular items to enhance user discovery and exploration. It is an essential metric for ensuring a diverse and engaging recommendation experience.
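A possible reading of this definition in code, assuming item popularity is normalized by the most popular catalog item as described above (interaction counts invented):

```python
def novelty(recommended_items, interactions_per_item):
    # popularity_score(i) = interactions with i / interactions with the most popular item
    max_count = max(interactions_per_item.values())
    scores = [1.0 - interactions_per_item.get(i, 0) / max_count
              for i in recommended_items]
    return sum(scores) / len(scores)

counts = {"a": 90, "b": 10, "c": 2}          # e.g., interactions out of 100 users
print(novelty(["b", "c"], counts))           # high novelty: long-tail items
print(novelty(["a"], counts))                # zero novelty: the most popular item
```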
§.§.§ Diversity

Diversity <cit.> in recommendation systems, crucial for maintaining user engagement, is quantified using metrics like Intra-List Diversity (ILD). ILD measures the average dissimilarity between all pairs of items in a recommendation list. Mathematically, for a set of recommended items $R = \{r_1, r_2, \dots, r_n\}$, ILD is defined as:
\[
\mathrm{ILD}(R) = \frac{2}{n(n-1)}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \mathrm{dissimilarity}(r_i, r_j)
\]
Here, n is the number of items in R, and $\mathrm{dissimilarity}(r_i, r_j)$ computes the dissimilarity between items $r_i$ and $r_j$. ILD ranges from 0 (no diversity) to 1 (maximum diversity), reflecting the variety in recommendations and ensuring that users are exposed to a broad range of options.

§.§.§ Serendipity

Serendipity <cit.> is a metric used in recommendation systems to quantify the degree to which the recommendations are both unexpected and useful to the user. This concept is crucial in evaluating the effectiveness of a recommendation system, especially its ability to introduce users to items they might not have discovered otherwise but find surprisingly relevant and enjoyable.

Mathematically, serendipity can be defined in the context of user-item interactions. Consider the following notations:

- U is the set of users.
- I is the set of items.
- R(u) is the set of items recommended to user u.
- L(u) is the set of items liked by user u.
- D(u) is the set of items discovered by user u through the recommendation system.

The serendipity of the system for user u can be calculated as:
\[
\mathrm{Serendipity}(u) = \frac{|R(u) \cap L(u) \cap D(u)|}{|R(u)|}
\]
This formula calculates the proportion of recommended items that are both liked and discovered by the user, indicating the element of surprise and relevance in the recommendations. To get the overall serendipity of the system across all users, we average this value over all users:
\[
\mathrm{Serendipity} = \frac{1}{|U|}\sum_{u \in U}\mathrm{Serendipity}(u)
\]
Here, |U| denotes the number of users in the system. This overall serendipity score indicates how well the recommendation system introduces relevant yet unexpected items to its users; higher scores indicate a stronger ability to provide serendipitous recommendations.

§.§.§ Catalog Coverage

Catalog Coverage <cit.> is a vital metric for evaluating the breadth of a recommendation system's reach. It measures the proportion of items in the entire catalog that are actually recommended to users, providing insight into the diversity of the system's suggestions. The formula for Catalog Coverage is:
\[
\mathrm{Catalog\ Coverage} = \frac{|\text{Unique Items Recommended}|}{|\text{Total Items in Catalog}|}\times 100\%
\]
Here, |Unique Items Recommended| is the count of distinct items the system has recommended, and |Total Items in Catalog| is the total number of unique items available in the catalog. To compute this, one tallies all unique items recommended over a certain period, counts the total items in the catalog, and divides the former by the latter, expressing the result as a percentage.

This metric is crucial for gauging how well a recommendation system explores and utilizes the full range of available items. A high Catalog Coverage indicates a system that suggests a wide variety of items, potentially appealing to a diverse user base. Conversely, low Catalog Coverage might indicate a tendency to focus on a limited set of popular items, which could neglect users with unique or niche interests. Therefore, monitoring and optimizing Catalog Coverage is essential for maintaining a balanced and inclusive recommendation system.
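A minimal sketch of Catalog Coverage over a batch of recommendation lists (data invented):

```python
def catalog_coverage(recommendation_lists, catalog_size):
    # Percentage of the catalog appearing in at least one recommendation list
    unique_recommended = set()
    for rec_list in recommendation_lists:
        unique_recommended.update(rec_list)
    return 100.0 * len(unique_recommended) / catalog_size

recs = [["a", "b"], ["b", "c"], ["a", "d"]]
print(catalog_coverage(recs, catalog_size=10))  # {a, b, c, d} of 10 items -> 40.0
```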
§.§.§ Distributional Coverage

Distributional coverage <cit.> is a crucial metric in recommendation systems, focusing on the diversity of recommendations across the entire item catalog. It ensures that a system is not biased towards a few popular items but instead promotes a broader range of choices. To quantify this, distributional coverage is often calculated using entropy, a measure of unpredictability or diversity. For a catalog with N items, where p(i) represents the probability of recommending item i, the distributional coverage (DC) can be expressed as:
\[
DC = -\sum_{i=1}^{N} p(i)\,\log_2 p(i)
\]
Here, the sum is over all catalog items, and p(i) is estimated from the frequency of item i's appearance in recommendation lists. A higher DC value indicates a more diverse recommendation pattern, implying a wide array of items being recommended, while a lower value suggests a concentration on fewer items. Balancing this metric with others, such as personalization and relevance, is vital to maintaining the effectiveness of the recommendation system while ensuring variety.

Table 2 presents a comprehensive overview of the appropriate application scenarios for various candidate generation metrics. This table serves as a guide, delineating which metrics are most suitable for specific use cases in recommendation systems.

§.§ Predictive Metrics

Predictive metrics are used to assess the predictive accuracy of a recommendation system. These metrics evaluate how well the system predicts user preferences or ratings for items. Some of the key predictive metrics used in recommendation systems are:

* Root Mean Squared Error (RMSE)
* Mean Absolute Error (MAE)
* Mean Squared Error (MSE)
* Mean Absolute Percentage Error (MAPE)
* R^2
* Explained Variance

§.§.§ Root Mean Squared Error (RMSE)

Root Mean Squared Error (RMSE) <cit.> is a standard way to measure the error of a model in predicting quantitative data. Formally, it is the square root of the average of the squared differences between predicted and observed values. In the context of recommendation systems, it quantifies the differences between the ratings predicted by the model and the actual ratings given by users. The formula for RMSE is:
\[
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (y_i - \hat y_i)^2}
\]
where N is the number of observations, $y_i$ is the actual value of an observation, and $\hat y_i$ is the predicted value.

§.§.§ Mean Absolute Error (MAE)

Mean Absolute Error (MAE) <cit.> evaluates the accuracy of a prediction model by calculating the average magnitude of errors in a set of predictions, without considering their direction. The MAE is given by:
\[
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat y_i|
\]
where n is the number of predictions, $y_i$ is the actual value, and $\hat y_i$ is the predicted value. The absolute difference between the actual and predicted values indicates the error magnitude, and the MAE aggregates these errors across all predictions.

§.§.§ Mean Squared Error (MSE)

Mean Squared Error (MSE) <cit.> is a widely used measure of prediction accuracy in recommendation systems, quantifying the difference between predicted and actual values. The MSE is computed by averaging the squares of the errors, i.e., the differences between predicted ($\hat y_i$) and observed ($y_i$) values over n predictions:
\[
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (\hat y_i - y_i)^2
\]
The squaring of the errors ensures that larger errors are more prominently reflected in the total, emphasizing the cost of significant deviations and ensuring that the result is always non-negative.
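These three error measures can be sketched directly from their formulas (ratings invented):

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

actual = [4.0, 3.5, 5.0, 2.0]
predicted = [3.8, 3.0, 4.5, 2.5]
print(rmse(actual, predicted), mae(actual, predicted), mse(actual, predicted))
```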
§.§.§ Mean Absolute Percentage Error (MAPE)

Mean Absolute Percentage Error (MAPE) <cit.> is a statistical measure used to assess the accuracy of forecasting models. It represents the average absolute percent error for each data point, omitting the direction of the error. The formula for MAPE is:
\[
\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat y_i}{y_i}\right|
\]
where n is the number of observations, $y_i$ is the actual value, and $\hat y_i$ is the forecasted value. The MAPE is useful because it provides a quick, intuitive percentage error, allowing comparison across different datasets or models. However, its interpretability can be compromised when dealing with zero or very small actual values.

§.§.§ R^2

The coefficient of determination, denoted $R^2$, is a crucial metric for evaluating the predictive accuracy of models, including those in recommendation systems. It quantifies the proportion of variance in the dependent variable that is predictable from the independent variables. The formula for $R^2$ is:
\[
R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}
\]
where $SS_{\mathrm{res}} = \sum_i (y_i - \hat y_i)^2$ is the residual sum of squares, representing the variance unexplained by the model, and $SS_{\mathrm{tot}} = \sum_i (y_i - \bar y)^2$ is the total sum of squares, reflecting the total variance in the observed data. Here, $y_i$ are the observed values, $\hat y_i$ the predicted values, and $\bar y$ the mean of the observed data.

$R^2$ ranges from 0 to 1, where 0 indicates no explanatory power and 1 indicates perfect prediction. In recommendation systems, a higher $R^2$ suggests that the model accurately predicts user ratings or preferences, making it an essential tool for assessing the performance of these systems. It complements other metrics, such as similarity, classification, and business metrics, offering a comprehensive view of the system's effectiveness and efficiency in personalizing user experiences.

§.§.§ Explained Variance

Explained Variance is a key statistical measure in predictive modeling, crucial for evaluating recommendation systems. It quantifies the proportion of variance in the dependent variable (like user ratings) explained by the independent variables in the model. The formula for Explained Variance is:
\[
\mathrm{Explained\ Variance} = 1 - \frac{\mathrm{Var}(e)}{\mathrm{Var}(Y)}
\]
or, in a more detailed form:
\[
\mathrm{Explained\ Variance} = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat y_i)^2}{\sum_{i=1}^{n}(y_i - \bar y)^2}
\]
Here, $\mathrm{Var}(e)$ is the variance of the model errors, $\mathrm{Var}(Y)$ is the total variance of the dependent variable, $y_i$ are the actual values, $\hat y_i$ the predicted values, $\bar y$ the mean of the actual values, and n the number of observations. While closely related to $R^2$, Explained Variance offers a nuanced understanding of a model's ability to capture data variability. This metric is vital for researchers developing efficient recommendation systems, as it helps assess the accuracy and reliability of predictions.

In Table 3, we present a comprehensive guide outlining specific scenarios for the application of various predictive metrics. This table serves as an essential reference for determining the appropriate metric to employ in distinct contexts, thereby optimizing the effectiveness of predictive analysis.
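A corresponding sketch for MAPE and $R^2$ (assuming no zero ratings for MAPE; data invented):

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error; assumes no zero entries in y_true
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

def r2_score(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

actual = [4.0, 3.5, 5.0, 2.0]
predicted = [3.8, 3.0, 4.5, 2.5]
print(mape(actual, predicted), r2_score(actual, predicted))
```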
§.§ Ranking Based Metrics

Ranking-based metrics are specifically designed to evaluate the quality of the item ranking produced by a recommendation system. These metrics focus on how well the system orders items to maximize user satisfaction.

* Mean Reciprocal Rank (MRR)
* ARHR@k
* Normalized Discounted Cumulative Gain (nDCG@K)
* Precision@k
* Recall@k
* F1@K
* Average Recall@k
* Average Precision@k
* MAP

§.§.§ Mean Reciprocal Rank (MRR)

Mean Reciprocal Rank (MRR) <cit.> is a statistical measure used to evaluate the performance of recommendation or information retrieval systems, focusing on the rank of the first correct answer. For a set of queries, MRR is the average of the reciprocal ranks of the results for the queries. The reciprocal rank of a query response is the inverse of the rank of the first correct item: if the correct item is at rank k, the reciprocal rank is 1/k. For a set Q of queries, MRR is calculated as:
\[
\mathrm{MRR} = \frac{1}{|Q|}\sum_{i=1}^{|Q|}\frac{1}{\mathrm{rank}_i}
\]
where |Q| is the number of queries and $\mathrm{rank}_i$ is the position of the first relevant item for the i-th query.

§.§.§ Average Reciprocal Hit-Rank at K (ARHR@k)

The Average Reciprocal Hit-Rank at K (ARHR@k) <cit.> is a crucial metric for evaluating recommendation systems, emphasizing the ranking efficiency of relevant recommendations within the top-K items. It is defined by the formula:
\[
\mathrm{ARHR@}k = \frac{1}{|U|}\sum_{u \in U}\sum_{i=1}^{k}\frac{\delta(i, u)}{i}
\]
Here, |U| is the total number of users, k is the cut-off rank for top recommendations, and $\delta(i, u)$ is an indicator function that equals 1 if the item at rank i is relevant to user u, and 0 otherwise. This metric averages the reciprocal ranks of relevant items for each user, but only if they appear within the top-K suggestions. A higher ARHR@k score indicates that the system not only accurately identifies relevant items but also ranks them highly, enhancing user experience and satisfaction. This metric is particularly valuable in scenarios where the prominence of recommendations significantly impacts user engagement.

§.§.§ Normalized Discounted Cumulative Gain (nDCG)

Normalized Discounted Cumulative Gain (nDCG) <cit.> is a measure of ranking quality that captures the performance of ranking algorithms in recommendation systems. It evaluates how well the predicted ranking of items corresponds to the ideal ranking, considering the relevance of each item. The DCG is computed as:
\[
\mathrm{DCG}_p = \sum_{i=1}^{p}\frac{2^{rel_i} - 1}{\log_2(i+1)}
\]
where $rel_i$ is the relevance score of the item at position i and p is the number of ranked items. nDCG is obtained by normalizing DCG by the ideal DCG (iDCG), which is the DCG score of the perfect ranking:
\[
\mathrm{nDCG}_p = \frac{\mathrm{DCG}_p}{\mathrm{iDCG}_p}
\]
A perfect ranking results in an nDCG of 1, while any deviation from the ideal ranking results in an nDCG less than 1.

§.§.§ Precision@k

Precision@k is a performance metric that evaluates the relevance of a list of recommended items. It measures the proportion of recommended items in the top-k set that are relevant to the user. The formula for Precision@k is:
\[
\mathrm{Precision@}k = \frac{\text{Number of relevant items in top-}k}{k}
\]
Here, "relevant items" are those deemed to be of interest to the user based on some ground truth, such as past user behavior or explicit ratings. The metric provides a straightforward indication of recommendation quality at a fixed list size k.
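A minimal sketch of Precision@k and MRR from their definitions (recommendation lists invented):

```python
def precision_at_k(recommended, relevant, k):
    # Fraction of the top-k recommendations that are relevant
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

def mean_reciprocal_rank(all_recommendations, all_relevant):
    # Average of 1/rank of the first relevant item per query; contributes 0 if none
    total = 0.0
    for recs, rel in zip(all_recommendations, all_relevant):
        for rank, item in enumerate(recs, start=1):
            if item in rel:
                total += 1.0 / rank
                break
    return total / len(all_recommendations)

recs = [["a", "b", "c"], ["d", "e", "f"]]
rel = [{"b"}, {"f"}]
print(precision_at_k(recs[0], rel[0], 3))  # 1/3
print(mean_reciprocal_rank(recs, rel))     # (1/2 + 1/3) / 2
```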
§.§.§ Recall@k

Recall@k is a metric used to evaluate recommendation systems based on how many relevant items are selected out of all possible relevant items. Specifically, for a set of queries, it measures the proportion of relevant items found in the top-k recommendations. The formula for Recall@k is:
\[
\mathrm{Recall@}k = \frac{|\{\text{Relevant items}\} \cap \{\text{Top-}k\text{ recommended items}\}|}{|\{\text{Relevant items}\}|}
\]
Here, $|\{\text{Relevant items}\}|$ is the number of relevant items, and the numerator is the number of relevant items that appear in the top-k recommendations.

§.§.§ F1@K (F1 score at K)

The F1 score is the harmonic mean of precision and recall. At a specific cut-off point K, it is calculated as:
\[
\mathrm{F1@}K = \frac{2 \times \mathrm{Precision@}K \times \mathrm{Recall@}K}{\mathrm{Precision@}K + \mathrm{Recall@}K}
\]
where Precision@K and Recall@K are the precision and recall calculated at the cut-off K.

§.§.§ Average Recall@K

Recall@K for a single user is the proportion of relevant items that are in the top K recommendations. The Average Recall@K across all users is:
\[
\mathrm{Average\ Recall@}K = \frac{1}{U}\sum_{u=1}^{U}\frac{|\mathrm{Relevant\ Items}_u \cap \mathrm{Recommended\ Items}_u@K|}{|\mathrm{Relevant\ Items}_u|}
\]
where U is the total number of users, $\mathrm{Relevant\ Items}_u$ is the set of relevant items for user u, and $\mathrm{Recommended\ Items}_u@K$ is the set of top K recommended items for user u.

§.§.§ Average Precision@K

Precision@K for a single user is the proportion of recommended items in the top K that are relevant. The Average Precision@K is:
\[
\mathrm{Average\ Precision@}K = \frac{1}{U}\sum_{u=1}^{U}\frac{|\mathrm{Relevant\ Items}_u \cap \mathrm{Recommended\ Items}_u@K|}{K}
\]

§.§.§ Mean Average Precision (MAP)

MAP considers the order of recommendations. It is the mean of the Average Precision at each point a relevant item is retrieved, averaged over all users:
\[
\mathrm{MAP} = \frac{1}{U}\sum_{u=1}^{U}\left(\frac{1}{|\mathrm{Relevant\ Items}_u|}\sum_{k=1}^{|\mathrm{Recommended\ Items}_u|}\mathrm{Precision@}k \times rel_u(k)\right)
\]
where $rel_u(k)$ is an indicator function that is 1 if the item at rank k is relevant to user u and 0 otherwise.

Table 4 enumerates the specific scenarios for applying ranking metrics.
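MAP can be sketched by accumulating Precision@k at each hit, as in the formula above (data invented):

```python
def average_precision(recommended, relevant):
    # AP = (1/|relevant|) * sum over ranks k of Precision@k * rel(k)
    hits, score = 0, 0.0
    for k, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            score += hits / k   # Precision@k at this hit
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(all_recommendations, all_relevant):
    aps = [average_precision(r, rel) for r, rel in zip(all_recommendations, all_relevant)]
    return sum(aps) / len(aps)

recs = [["a", "b", "c", "d"]]
rel = [{"a", "c"}]
print(mean_average_precision(recs, rel))  # (1/1 + 2/3) / 2 ≈ 0.833
```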
§.§ Business Metrics
Business metrics play a crucial role in assessing the performance and impact of recommendation systems (RS) in various domains <cit.>. One essential metric is Click-through Rate (CTR), which measures the number of clicks generated by recommendations. A higher CTR indicates that recommendations are more relevant, making it a popular metric in the news recommendation domain. Platforms like Google News and Forbes utilize CTR to gauge the effectiveness of their recommendations. Personalized suggestions based on CTR have been shown to increase clicks by up to 38% compared to popularity-based systems, underscoring its importance.

Adoption and conversion metrics provide deeper insights into user behavior. While CTR indicates clicks, it does not determine whether those clicks led to conversions or purchases. Platforms like YouTube and Netflix use alternative adoption measures such as "Long CTR" (counting clicks when users watch a specific percentage of a video) and "Take rate" (counting views after recommendations) to assess user engagement and conversion. In cases where an item cannot be viewed directly, domain-specific measures, such as the number of contacts made with an employer after a job offer recommendation on LinkedIn, become crucial.

Sales and revenue metrics are ultimately what matter for businesses. Although CTR and adoption metrics are informative, changes in sales and revenue reflect the actual impact on the bottom line. However, attributing improvements solely to the RS can be challenging, as users may have made purchases anyway, making it necessary to consider the broader business context.

Measuring the effects on sales distribution is another critical aspect. This metric directly compares sales before and after the introduction of the RS. However, it requires an understanding of how the shifts in sales distribution impact diversity at the individual level. Efforts to maintain diversity may be needed to prevent unintended consequences.

User behavior and engagement metrics highlight the impact of RS on user activity and retention. Recommendations often increase user engagement, and a positive correlation between customer engagement and retention is observed in various domains, such as Spotify. However, measuring this can be challenging, especially when churn rates are low. These metrics collectively provide a comprehensive view of RS performance, helping businesses make data-driven decisions to enhance recommendation systems.

§ EXPERIMENTS AND RESULTS
In this study, we conducted experiments on three different MovieLens datasets, namely MovieLens 100k, MovieLens 1m, and MovieLens 10m, to evaluate the performance of our recommendation system. We aimed to assess various metrics to gain insights into the quality and effectiveness of our recommendation algorithms. The results of these experiments are summarized in Tables 5 and 6 below. We also present the experimental results obtained from the analysis of two datasets: the Amazon Electronics dataset and the Amazon Movies and TV dataset. These experiments were conducted to evaluate the performance of various similarity metrics in the context of recommendation systems. The results provide valuable insights into the performance of different similarity measures when applied to recommendation systems, with AUC serving as a key metric for evaluating the quality of recommendations. The experiments were conducted with a consistent number of epochs and batch size (2800) across all similarity measures for a fair comparison.

Table 7 presents the catalog coverage, distributional coverage, novelty, diversity, and serendipity metrics for each dataset. These metrics provide valuable insights into the recommendation system's ability to cover a wide range of items, recommend items not previously seen by users, introduce novel items, maintain diversity, and offer serendipitous recommendations. The results indicate that as the dataset size increases, the recommendation system's performance in terms of these metrics generally improves. We also evaluated two collaborative filtering algorithms, ALS (Alternating Least Squares) and SVD (Singular Value Decomposition), on each dataset using a fixed value of K (number of recommendations). Table 8 provides an overview of the performance metrics for each combination of dataset and algorithm, including training time, predicting time, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), R-squared (R2), and Explained Variance. These metrics allow us to assess the effectiveness of ALS and SVD in providing recommendations on different scales of the MovieLens datasets. We evaluated the performance of seven different recommendation algorithms, namely ALS <cit.>, SAR <cit.>, SVD <cit.>, NCF <cit.>, BPR <cit.>, BiVAE <cit.>, and LightGCN <cit.>, across both datasets.
The following table summarizes the key metrics for each algorithm on the MovieLens 100k dataset. Table 9 provides a comprehensive overview of algorithm performance in terms of various evaluation metrics, including Mean Average Precision (MAP), normalized Discounted Cumulative Gain (nDCG@k), Precision@k, Recall@k, F1@k, Mean Reciprocal Rank (MRR), Average Reciprocal Hit-Rank (ARHR@k), Average Recall@k, and Average Precision@k. These metrics offer valuable insights into the recommendation quality and efficiency of each algorithm on the MovieLens 100k dataset. Table 10 provides a comprehensive overview of algorithm performance on the MovieLens 1M dataset.

§ CONCLUSION
Evaluating the efficacy of recommender systems in addressing business challenges presents a complex issue. Initial assessments may commence with binary or rank-aware accuracy metrics, which are instrumental in deriving a preliminary set of recommended items through the employed machine learning (ML) technique. However, there exists a notable discrepancy between such accuracy measures and other critical user-centric metrics which significantly contribute to user engagement and satisfaction. The insufficiency of ML metrics alone in appraising the true performance of recommendation systems necessitates the incorporation of direct user feedback to capture the system's business impact accurately. Consequently, the implementation of A/B testing is essential, as it enables the quantification of the system's influence on click-through rates (CTR), sales, and related consequential metrics. It is through this integrative approach that business objectives and ML algorithms can be effectively aligned and optimized.
Selective Inference for Sparse Graphs via Neighborhood Selection
Yiling Huang, Department of Statistics, University of Michigan
and Snigdha Panigrahi (the author acknowledges support by NSF grants 1951980 and 2113342), Department of Statistics, University of Michigan
and Walter Dempsey, Department of Biostatistics and Institute for Social Research, University of Michigan
January 14, 2024
=============================================================================================================================================

Neighborhood selection is a widely used method for estimating the support set of sparse precision matrices, which helps determine the conditional dependence structure in undirected graphical models. However, reporting only point estimates for the estimated graph can result in poor replicability without accompanying uncertainty estimates. In fields such as psychology, where the lack of replicability is a major concern, there is a growing need for methods that can address this issue. In this paper, we focus on the Gaussian graphical model. We introduce a selective inference method to attach uncertainty estimates to the selected (nonzero) entries of the precision matrix and to decide which of the estimated edges must be included in the graph. Our method provides an exact adjustment for the selection of edges, which, when multiplied with the Wishart density of the random matrix, results in valid selective inferences. Through the use of externally added randomization variables, our adjustment is easy to compute, requiring us to calculate the probability of a selection event that is equivalent to a few sign constraints and that decouples across the nodewise regressions. Through simulations and an application to a mobile health trial designed to study mental health, we demonstrate that our selective inference method results in higher power and improved estimation accuracy.

Keywords: Covariance selection, Gaussian graphical models, Network analysis, Penalized regression, Post-selection inference, Selective inference.

§ INTRODUCTION
The network approach to psychopathology posits that mental disorders can be conceptualized as systems of co-occurring symptoms <cit.>. From this perspective, symptoms are not indicators of a latent "common cause" but a complex network. Aiming to identify the inter-symptom relations, there has been a growing focus on methods for estimating conditional dependence relationships in undirected graphs. Such methods have provided psychologists with a useful tool for learning complex relationships between different variables. In psychology, these undirected graphs are often based on Gaussian graphical models (GGMs) <cit.>. Consider, for example, the Providing Mental health Precision Treatment (PROMPT) Precision Health Study, which is a 12-month mobile health (mHealth) intervention trial focused on augmenting standard care to improve health outcomes by using mobile health technologies to extend the reach outside the clinic.
A key scientific aim of PROMPT is to better understand the relationships among treatment, baseline demographic information, survey responses, and mobile health signals. A common concern with network analysis in psychology is replicability <cit.>. Some have commented on the instability of network methods <cit.>, while others have argued that the instability is caused by the use of single-item assessments and small samples <cit.>. To promote robustness, methods have been developed for evaluating the precision and stability of estimated parameters <cit.>. In this paper, we identify and address one such relevant cause, called a silent killer of replicability <cit.>: the selection bias that arises when conditional dependence relationships are first estimated from the data in a graphical model and then simply ignored during inference. Several methods have been adopted in the empirical network literature to estimate conditional dependence relationships and attach point estimates to such relationships. Nonetheless, the issue of replicability is still a subject of ongoing debate <cit.>, especially because psychology continues to grapple with a replication crisis, as noted by <cit.>. While modeling the conditional dependence relationships in multivariate mHealth data is an indispensable goal, given the risk of false discoveries and growing concerns about replicability, reporting findings from the estimated graph can be grossly misleading without accompanying uncertainty estimates for the matched parameters. Recognized as a problem of "post-selection inference" or "selective inference" <cit.>, countering selection bias from the estimation of such conditional dependence relationships remains a key, unaddressed challenge.

In this paper, we propose a selective inference approach to quantifying the uncertainty in the estimated graph for mHealth data. Our focus is on the Gaussian graphical model, which associates an undirected graph to p jointly normally distributed random variables. The nodes of the graph represent these variables, while the edges between the nodes capture their conditional dependence relationships. These relationships are characterized by the nonzero entries of the inverse covariance matrix, also known as the precision matrix. In short, we will refer to this model as the GGM and provide a brief overview of it in the next section.

The rest of our paper is organized as follows. In Section <ref>, we review the GGM and the neighborhood selection approach, which is a multivariate regression method used for estimating conditional dependence relationships. In the same section, we discuss the contributions of our selective inference method. In Section <ref>, we present our method to address feasible selective inferences for the estimated edges in the GGM. We provide an efficient algorithm that can be used to numerically compute a pivot for selective inference, which can produce p-values and confidence intervals for the matched parameters. In Section <ref>, we present the results of simulations that investigate the performance of our method in different settings. In Section <ref>, we discuss the findings from applying our method to the PROMPT mHealth trial. Finally, we conclude our paper in Section <ref> with a brief summary.

§ BACKGROUND AND CONTRIBUTIONS
§.§ Gaussian graphical model
We start by briefly reviewing the GGM. Let X be a p-dimensional random vector where

X = (X^[1], X^[2], …, X^[p]) ∼ N_p(0_p, Σ),

and Σ is an invertible p×p covariance matrix. Let Θ = Σ^-1 be the precision matrix, with the (j,k)-th element denoted by θ_j,k.
Denote by X^[j] the j-th entry in the p-dimensional Gaussian vector X. If j ≠ k ∈ {1,2,…,p}, then X^[j] and X^[k] are independent conditional on the remaining entries in the random vector X if and only if θ_jk = θ_kj = 0. That is,

X^[j] ⊥ X^[k] | [X^[l]]_l≠j,k if and only if θ_jk = θ_kj = 0.

This fact implies that the support set Supp(Θ), excluding the diagonal entries, represents the edge structure of the graph. Moreover, the (j,k)-th entry of the precision matrix Θ is, up to a positive scalar, the regression coefficient of the k-th variable in the multiple regression of the j-th variable on the rest, and vice versa. This forms the basis of multiple regression, a popular framework for estimating the conditional dependence relationships in a GGM.

§.§ Selecting edges in the GGM
Suppose that we observe n independent realizations of X, which we denote by X_1, X_2, …, X_n, and let 𝐗 be the n×p matrix with X_i ∈ ℝ^p in its i-th row. Let 𝐗^[-j] represent the submatrix of 𝐗 excluding the j-th column, and let 𝐗^[j] represent the j-th column of 𝐗. Neighborhood selection, introduced by <cit.>, computes a series of nodewise Lasso regressions by solving

minimize_b∈ℝ^p-1 1/2 ‖𝐗^[i] - 𝐗^[-i]b‖_2^2 + λ_i ‖b‖_1

for i ∈ {1,2,…,p}. Note that the regression at node i uses the i-th variable as the response and the remaining variables as predictors. The nodewise Lasso coefficients estimate the neighborhood for each of the p variables and are combined to estimate Supp(Θ), which is equivalent to estimating the edge structure in the graph. For example, the (j,k)-th entry of Θ is estimated to be non-zero using the "OR" logic if either the estimated Lasso coefficient of the j-th variable on the k-th variable or the estimated coefficient of the k-th variable on the j-th variable is non-zero. Alternatively, the "AND" logic can be used to combine the Lasso coefficients for estimating the graph. The nodewise regression approach taken by neighborhood selection has other important extensions, including a symmetric Lasso regression approach by <cit.>, as well as the estimation of more general network models beyond the GGM <cit.>.

An alternative method for estimating a sparse precision matrix is the graphical lasso by <cit.>. It uses the maximum likelihood approach to produce a matrix estimator of Θ, which can be solved through a block-wise coordinate descent algorithm. Although this method is more computationally intensive than neighborhood selection, which solves p separable lasso regressions, it may be preferred in applications where an estimator for the entries of the precision matrix is desired, rather than just a sparse support set for the edges. On the other hand, if the main goal of the analysis is to estimate the sparse edge set, then neighborhood selection is usually easier to solve than the graphical lasso. In addition, the nodewise Lasso regressions in neighborhood selection can be run in parallel on the data, making it substantially faster than the graphical lasso. Finally, in many studies, measurements are of many different types (e.g., continuous, binary, counts). Recent work <cit.> has extended graphical models to handle exponential family distributions in heterogeneous domains while permitting fast structure/parameter learning.
While yet to be used extensively in practice, these methods provide psychologists with a useful tool for learning network models from heterogeneous data. The neighborhood selection algorithms we consider can be readily extended to this more general setting. This makes them a natural candidate as a starting point for selective inference for sparse Gaussian graphical models, as we consider the more general exponential family setting an important future direction.

§.§ Related work and our contributions
After conducting a series of Lasso regressions in equation (<ref>), we obtain an estimate for the support of the precision matrix, denoted by Supp(Θ). However, to obtain valid p-values or confidence bounds for the entries of Θ in the estimated support set using the same available data that was utilized in neighborhood selection, it is important to adjust for the selection of edges. In Figure <ref>, "Naive" confidence intervals for the nonzero entries of the precision matrix, in Supp(Θ), are constructed for a range of values of λ_i = λ, i ∈ {1,2,…,p}, when data is generated from a GGM and nonzero edges are estimated using neighborhood selection. These intervals do not take into account the effect of selection. The mean coverage rate of the "Naive" intervals falls much below the target 90% rate, which highlights the pitfalls of not adjusting for the selection of edges. An alternative approach is to use a subsample of the data for neighborhood selection and the remaining holdout samples that were not used for selection to construct intervals. This method, commonly referred to as "Data Splitting", is depicted in the same figure. While "Data Splitting" provides valid selective inferences, it is wasteful, since the resulting intervals are based only on the holdout dataset. In this paper, we propose novel methodology that makes use of leftover information from selection to produce valid inferences for the selected entries in Θ. This method, which is presented in the next section, is referred to as "Proposed" in Figure <ref>. Our proposal not only achieves the prespecified coverage rate across the entire range of λ but also produces shorter intervals than data splitting. Both "Data Splitting" and "Proposed" generate longer intervals than "Naive", as they should, to account for the selection of edges. Below we discuss our key contributions and provide an overview of related work in selective inference.

Several methods have been proposed to address selective inference after Lasso regression. These include simultaneous methods by <cit.>, conditional methods by <cit.>, and randomized conditional methods by <cit.>. The focus of this research is selective inference on the unknown mean parameter while treating the covariance of the response as a fixed parameter. The use of conditional methods in combination with externally added randomization adjusts for the specific selection via the Lasso, producing bounded intervals and yielding more powerful inferences compared to those without randomization and conditioning. See, for example, results presented in recent papers by <cit.>.
Whether or not randomization is applied, a key aspect of the conditional methods used for inference after Lasso regression is a neat polyhedral representation of the selection event. In the case of normal data, this means that the selection event can be expressed as a set of linear inequalities involving the normally distributed sufficient statistics, and the additional randomization variables if randomization is used. Although neighborhood selection involves solving p separable Lasso regressions, selective inference in the GGM is not a simple extension of the method used for the usual Lasso regression with normal data. This is because the selection event in terms of the sufficient statistics in the GGM, which follow a Wishart distribution, does not separate into individual nodewise regressions. Furthermore, this event no longer has a polyhedral representation in these sufficient statistics, which makes it even more challenging to provide selective inference for the selected edge parameters. To tackle both challenges, we introduce a randomized conditional method that explicitly adjusts for selection via neighborhood selection in the GGM. Motivated by recent ideas of using randomization to account for non-polyhedral selection events, such as <cit.>, our method adds Gaussian randomization variables to the nodewise Lasso regressions in (<ref>). We defer the specific form of Gaussian randomization used for our problem to the next section. Through the use of external randomization, we achieve two important goals. Firstly, we are able to describe the complex selection event as a set of simple sign constraints. Secondly, these constraints decouple across all the p regressions, allowing us to construct a pivot to infer for the selected entries in the precision matrix. The simplified selection event, facilitated by randomization, enables us to provide an efficient numerical algorithm for computing selective inferences in the GGM. The primary computing step of our algorithm can be performed in parallel across the nodewise regressions, similar to the selection step, which allows for fast inferences. There have been recent proposals, such as those by <cit.> and <cit.>, suggesting alternative forms of randomization that differ from adding a randomization term to the regression objective. These methods can be used to create two randomized, independent copies of data for selection and inference, similar to data splitting, for normal data with a known covariance matrix. However, it is important to note that these methods do not provide a way to split the data into independent copies when the covariance is unknown, which is the case with the GGM. Our specific form of randomization provides a way to carry over leftover information at the time of selecting the edges to drawing selective inferences.

§ METHOD
§.§ Randomized neighborhood selection
In this section, we present our randomized method for estimating the support set of Θ and addressing selective inferences for the estimated edges in the GGM. As emphasized earlier, the use of randomization is crucial for achieving feasible selective inferences in this problem. We start with the randomized neighborhood selection method, which gives us the estimated support set for Θ. Consider p Gaussian randomization variables ω^[i] drawn from N_p-1(0, Ω^[i]) for i ∈ {1,2,…,p}, where ω^[i] is independent of 𝐗 and ω^[i] is independent of ω^[j] for all i ≠ j. We solve the p nodewise Lasso regressions as

minimize_b∈ℝ^p-1 { 1/2 ‖𝐗^[i] - 𝐗^[-i]b‖_2^2 + λ_i ‖b‖_1 + ϵ/2 ‖b‖_2^2 - b^⊤ω^[i] },

for each i ∈ {1,2,…,p}. The optimization problem is made strongly convex by the additional ridge penalty with tuning parameter ϵ ∈ ℝ_+; in our practical implementations of the method, we set ϵ to a small positive value.
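To make the construction concrete, the following is a minimal proximal-gradient (ISTA) sketch of a single randomized nodewise regression in (<ref>). The solver choice, iteration budget, and names are our own assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def randomized_lasso(y, Xmi, lam, eps, omega, iters=5000):
    # minimize 0.5*||y - Xmi b||^2 + lam*||b||_1 + 0.5*eps*||b||^2 - b'omega
    L = np.linalg.norm(Xmi, 2) ** 2 + eps      # Lipschitz constant of the smooth part
    b = np.zeros(Xmi.shape[1])
    for _ in range(iters):
        grad = Xmi.T @ (Xmi @ b - y) + eps * b - omega
        b = soft_threshold(b - grad / L, lam / L)
    return b

# Neighborhood of node i (X is the n x p data matrix, Omega_i the randomization covariance):
# omega = rng.multivariate_normal(np.zeros(p - 1), Omega_i)
# b_hat = randomized_lasso(X[:, i], np.delete(X, i, axis=1), lam, 1e-4, omega)
# E_i = np.flatnonzero(b_hat)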
The single regression problem with an added Gaussian randomization variable in the objective, described in <cit.>, is known as the randomized LASSO. Adding a Gaussian randomization variable to each nodewise regression introduces a tradeoff between selection and inference, allowing us to reserve some information from the selection step to perform selective inference. During inference, our method conditions on the event of selection, which is based on the solution to the randomized neighborhood selection. This allows us to adjust for the selection of edges while using leftover information from the data used during this step. Note that our method differs from data splitting, which only uses holdout data for inference. Figure <ref> confirms the expected gain in power over data splitting, providing a preview of our method's performance. The solution of (<ref>) leads us to observe E_i ⊂ ({1,2,…,p}∖{i}), the set of nonzero entries of the Lasso coefficients from the i-th regression. Put another way, the set E_i gives us the estimated neighborhood for the i-th variable. We define q_i as the number of non-zero entries in E_i, R_i as the complement set of E_i ∪ {i}, and r_i as the size of R_i, i.e., |E_i| = q_i, R_i = (E_i ∪ {i})^c, and |R_i| = p - 1 - q_i = r_i. By combining the estimated neighborhoods of the p nodes using either the "AND" or the "OR" rule, we obtain an estimate for the support of the precision matrix. In our next step, we present our method for drawing selective inference for θ_j,k whenever (j,k) is an entry in this estimated support set. However, before we delve into selective inference in the GGM, we fix some basic notations to warm up.

§.§ Some basics
Let M be a matrix in ℝ^p×q and let A and B be subsets of {1,2,…,p} and {1,2,…,q}, respectively. We denote the submatrix of M with rows from A and columns from B as M_A,B. Moreover, we denote the submatrix of M with columns from B as M_B. Similarly, if V is a vector in ℝ^p, we denote the subvector of V with components from A as V_A. We use I_m,m to represent the identity matrix of dimension m, 0_p,q ∈ ℝ^p×q to represent the zero matrix of dimensions p×q, and 0_q ∈ ℝ^q to represent the zero vector of dimension q. Throughout, we use the symbol ϕ(x; μ, Σ) to represent the density at x of a multivariate Gaussian distribution with mean μ and covariance matrix Σ. We use f_Θ(s) for the Wishart density at s ∈ ℝ^p×p, which is given by

f_Θ(s) ∝ (det s)^(n-p-1)/2 exp( -1/2 tr(Θs) ) · 1_𝒫^p(s),

where 𝒫^p is the cone of p-dimensional positive definite matrices. We start by simplifying some of our notations: we assume λ_i = λ for every i in the set {1,2,…,p}. Note that our method for selective inference can easily be generalized even when we have p distinct tuning parameters. Additionally, we assume without loss of generality that the predictor matrix and the randomization variable in each regression are reordered to have the components in E_i stacked above the components in R_i. We define some more estimators besides the selected set of edges E_i for i ∈ {1,2,…,p}, which will be needed for constructing selective inferences.
Denote by [z_E_i; z_R_i] = ∂‖b‖_1 |_b̂^[i] the subgradient of the Lasso penalty at the solution b̂^[i] of the i-th regression, where z_E_i = sign(b̂_E_i^[i]) and ‖z_R_i‖_∞ ≤ 1. Let S = 𝐗^⊤𝐗, with support equal to 𝒫^p. We represent the (j,k)-th entry of S as S_j,k. The value of S_j,k is calculated by taking the dot product of the j-th and k-th columns of 𝐗. In the rest of the paper, we use s to denote the realized value of S and s_j,k to denote the realized value of its (j,k)-th entry. Using these notations, let

α^[i] = -s_-i,i,  Δ^[i] = [ s_E_i,E_i + ϵI_q_i,q_i  0_q_i,r_i ; s_R_i,E_i  λI_r_i,r_i ],  γ^[i] = λ [ z_E_i ; 0_r_i ].

For fixed s, define the mapping Π^[i]: ℝ^p-1 → ℝ^p-1 as

Π^[i](b, z) = α^[i] + Δ^[i] [ b ; z ] + γ^[i],

where b ∈ ℝ^q_i and z ∈ ℝ^r_i, and define its inverse function as Ψ^[i] = (Π^[i])^-1. Let 𝒦_i = {b : Sgn(b) = z_E_i} and 𝒵_i = {z : ‖z‖_∞ ≤ 1}. Then, we note that

ω^[i] = -s_-i,i + [ s_E_i,E_i + ϵI_q_i,q_i  0_q_i,r_i ; s_R_i,E_i  λI_r_i,r_i ] [ b̂_E_i^[i] ; ẑ_R_i^[i] ] + λ [ z_E_i ; 0_r_i ] = α^[i] + Δ^[i] [ b̂_E_i^[i] ; ẑ_R_i^[i] ] + γ^[i] = Π^[i](b̂_E_i^[i], ẑ_R_i^[i]),

where b̂_E_i^[i] ∈ 𝒦_i and ẑ_R_i^[i] ∈ 𝒵_i. Finally, we specify some of our general notation rules. We use lowercase letters to represent variable realizations in our data; for example, the observed realizations of the random quantities b̂_E_i^[i], ẑ_R_i^[i], ω^[i], and E_i are denoted by their lowercase counterparts. Furthermore, we use uppercase bold font letters to denote collections of random variables that are collected for the p nodewise regressions, and lowercase bold font letters to denote their corresponding realized values. For example, we use 𝐁 = {b̂_E_i^[i]}_i=1^p, 𝐙 = {ẑ_R_i^[i]}_i=1^p, 𝐒 = {z_E_i}_i=1^p, and 𝐄 = {E_i}_i=1^p to represent these collections, with realizations 𝐛, 𝐳, 𝐬, and E. We are ready to construct inferences for the selected edges in the estimated support of Θ.

§.§ An exact adjustment for the selection of edges
In this section, our main result in Proposition <ref> provides an adjustment for the selection of edges in the GGM, based on the solution to (<ref>). This adjustment, when multiplied with the Wishart density of the random matrix S, gives us the starting point to derive pivots for selective inferences. We note that

{𝐄 = E, 𝐒 = 𝐬} = {b̂_E_i^[i] ∈ 𝒦_i, ẑ_R_i^[i] ∈ 𝒵_i for i ∈ [p]},

which is a direct result of the well-studied Lasso penalty. To obtain an adjustment for the selection of edges, we first derive the joint distribution of (S, 𝐁, 𝐙) when conditioned on the above-stated selection event. We present Proposition <ref>, which leads to our main result in the section.

Consider the event 𝒜_0 = {𝐄 = E, 𝐒 = 𝐬}, and let

D = ∫ f_Θ(s) · ∏_i=1^p ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ) · det( s_E_i,E_i + ϵI_q_i ) × 1_𝒦_i(b_E_i) · 1_𝒵_i(z_R_i) db_E_i dz_R_i ds.

Conditional on the event 𝒜_0, the density function of (S, 𝐁, 𝐙) at (s, 𝐛, 𝐳) is equal to

f_Θ;𝒜_0(s, 𝐛, 𝐳) = D^-1 · f_Θ(s) · ∏_i=1^p { ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ) det( s_E_i,E_i + ϵI_q_i ) × 1_𝒦_i(b_E_i) · 1_𝒵_i(z_R_i) }.

Observe that the joint density of the randomization variables given S = s is

f_𝛀|S(w^[1], …, w^[p] | s) = ∏_i=1^p ϕ(w^[i]; 0_p-1, Ω^[i]).

For fixed s, applying the change of variables w^[i] ↦ (b_E_i, z_R_i) for each i ∈ [p] gives us

f_𝐁,𝐙|S(𝐛, 𝐳 | s) ∝ ∏_i=1^p det( D_i(b_E_i, z_R_i) ) · ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ),

where det( D_i(b_E_i, z_R_i) ) = λ^r_i det( s_E_i,E_i + ϵI_q_i ) is the Jacobian associated with the mapping Π^[i], computed in Lemma <ref>. Clearly, the joint density of (S, 𝐁, 𝐙) is equal to

f_S,𝐁,𝐙(s, 𝐛, 𝐳) = f_Θ(s) · f_𝐁,𝐙|S(𝐛, 𝐳 | s),

when combined with the marginal Wishart density of S. Because of the equivalence in (<ref>), the density function of (S, 𝐁, 𝐙) conditional on 𝒜_0 is given by

f_Θ;𝒜_0(s, 𝐛, 𝐳) ∝ f_Θ(s) · f_𝐁,𝐙|S(𝐛, 𝐳 | s) · ∏_i=1^p 1_𝒦_i(b_E_i) · 1_𝒵_i(z_R_i) ∝ f_Θ(s) · ∏_i=1^p { ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ) det( s_E_i,E_i + ϵI_q_i ) × 1_𝒦_i(b_E_i) · 1_𝒵_i(z_R_i) }.

Normalizing this density proves our claim.
Conditional on 𝒜 = {𝐄 = E, 𝐒 = 𝐬, 𝐙 = 𝐳}, the density function of S at s, denoted by f_Θ;𝒜(s), is proportional to

f_Θ(s) · ∏_i=1^p ∫_𝒦_i ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ) · det( s_E_i,E_i + ϵI_q_i ) db_E_i.

Starting from the joint density in Proposition <ref>, we condition further on 𝐙 = 𝐳. This gives us the density function of (S, 𝐁) when conditioned on 𝒜, which is proportional to

f_Θ(s) · ∏_i=1^p ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ) · det( s_E_i,E_i + ϵI_q_i ) · 1_𝒦_i(b_E_i).

Marginalizing over b_E_i results in the density of S given 𝒜 and completes our proof. To sum up, Proposition <ref> provides an exact adjustment to the Wishart distribution of S to account for the selection of edges in the GGM.

§.§ Pivot for selective inference
Using the adjusted density from the preceding section, we construct a pivot for each parameter θ_j_0,k_0 ∈ Θ^E, which serves as the main object of selective inference. To form this pivot, we derive a one-dimensional density function that only involves the parameter we are interested in, by conditioning further on the observed values of

S̅_j_0,k_0 = { S_j,k, for all (j,k) ≠ (j_0,k_0) }.

Theorem <ref> states this density function, which leads us to our pivot by applying the probability integral transform based on the related cumulative distribution function (CDF). We define some more notations to proceed. Let s̅_j_0,k_0 denote the observed values of the variables S̅_j_0,k_0. For c ∈ ℝ, we define the matrix-valued mapping M(c, s̅_j_0,k_0): ℝ → ℝ^p×p, where the (j,k)-th entry of the mapping is given by

[M(c, s̅_j_0,k_0)]_j,k = c if (j,k) = (j_0,k_0) or (j,k) = (k_0,j_0), and s_j,k otherwise.

Observe that M(c, s̅_j_0,k_0) simply replaces the (j_0,k_0)-th and (k_0,j_0)-th entries of the observed data matrix with c, while keeping all its other entries intact. Recall that the quantities α^[i] and Δ^[i], which were defined in (<ref>), can be viewed as mappings of the data matrix s. We let

α^[i](c, s̅_j_0,k_0) = α^[i] ∘ M(c, s̅_j_0,k_0),  Δ^[i](c, s̅_j_0,k_0) = Δ^[i] ∘ M(c, s̅_j_0,k_0).

These notations specify how α^[i] and Δ^[i] depend on the (j_0,k_0)-th entry of s. Define

Π^[i](c, s̅_j_0,k_0, b, z) = α^[i](c, s̅_j_0,k_0) + Δ^[i](c, s̅_j_0,k_0) [ b ; z ] + γ^[i].

Specifically, we can rewrite (<ref>) as ω^[i] = Π^[i](s_j_0,k_0, s̅_j_0,k_0, b̂_E_i^[i], ẑ_R_i^[i]). Define the sets

𝒩 = {i ∈ [p] : j_0 ∈ E_i ∧ k_0 ∈ E_i},  𝒰 = {i ∈ [p] : j_0 ∈ E_i ∨ k_0 ∈ E_i}.

Let

Λ_j_0,k_0(c, s̅_j_0,k_0) = ∏_i∈𝒰∪{j_0,k_0} ∫_𝒦_i ϕ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i); 0_p-1, Ω^[i] ) db_E_i × ∏_i∈𝒩 det( [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i ).

Then, the conditional density of S_j_0,k_0 given the event in (<ref>) and S̅_j_0,k_0 = s̅_j_0,k_0, at c ∈ ℝ, is given by

(det M(c, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 c) Λ_j_0,k_0(c, s̅_j_0,k_0) · 1_𝒫^p(M(c, s̅_j_0,k_0)) / ∫ (det M(t, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 t) Λ_j_0,k_0(t, s̅_j_0,k_0) · 1_𝒫^p(M(t, s̅_j_0,k_0)) dt.

The previous result indicates that only a subset of the p regressions contributes to the univariate conditional density of S_j_0,k_0. This subset can be much smaller than p, especially if the j_0-th node or the k_0-th node is significantly associated with only a sparse subset of nodes in the graph. The simplification of the univariate density from the adjusted density of the random matrix S is explained in detail in the proof, which we defer to the Appendix. Denote by F_s̅_j_0,k_0(·; θ_j_0,k_0) the CDF for this density. Corollary <ref> presents our pivot, which we obtain after applying the probability integral transform to the CDF.

Conditional on 𝒜 = {𝐄 = E, 𝐒 = 𝐬, 𝐙 = 𝐳, S̅_j_0,k_0 = s̅_j_0,k_0}, we have

F_s̅_j_0,k_0(S_j_0,k_0; θ_j_0,k_0) ∼ Uniform(0,1).

§.§ Algorithm for computing selective inference
Finally, we offer an efficient algorithm to numerically compute selective inference using our pivot.
Instead of computing the integrals in our pivot exactly, we use a constrained optimization problem to calculate them. This approach is based on a Laplace approximation, which was previously employed for selective inferences in <cit.>. To put it formally, we have

∫_𝒦_i ϕ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i); 0_p-1, Ω^[i] ) db_E_i ≈ exp( - minimize_b_E_i∈𝒦_i 1/2 ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i) )^⊤ (Ω^[i])^-1 ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i) ) ).

In practice, we solve an unconstrained optimization problem through the use of a barrier function

Bar_𝒦_i(b_E_i) = ∑_j∈[q_i] log(1 + (z_E_i,j b_E_i,j)^-1).

Note that the barrier function enforces the sign constraints on our optimization variables by imposing a larger penalty as the variables move near the boundary of 𝒦_i. That is, we solve

O_i(c, s̅_j_0,k_0) = minimize_b_E_i∈ℝ^q_i { 1/2 ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i) )^⊤ (Ω^[i])^-1 ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i) ) + Bar_𝒦_i(b_E_i) }

for i ∈ 𝒰 ∪ {j_0,k_0}. Now, we are ready to compute our pivot by substituting Λ_j_0,k_0(c, s̅_j_0,k_0) with

Λ̂_j_0,k_0(c, s̅_j_0,k_0) = ∏_i∈𝒰∪{j_0,k_0} exp( -O_i(c, s̅_j_0,k_0) ) · ∏_i∈𝒩 det( [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i )

in the conditional density in Theorem <ref>. Denote by

d(c, s̅_j_0,k_0) = (det M(c, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 c) Λ̂_j_0,k_0(c, s̅_j_0,k_0) · 1_𝒫^p(M(c, s̅_j_0,k_0)).

Then, the pivot we compute numerically on a grid G is equal to

F̂_s̅_j_0,k_0(S_j_0,k_0; θ_j_0,k_0) = ( ∑_t∈G d(t, s̅_j_0,k_0) )^-1 · ∑_t∈G: t≤S_j_0,k_0 d(t, s̅_j_0,k_0).

Algorithm <ref> provides the above-outlined steps to obtain our pivot. Using our pivot, we compute two-sided p-values for the null hypothesis H_0: θ_j_0,k_0 = θ_0 as

2 min( F̂_s̅_j_0,k_0(S_j_0,k_0; θ_0), 1 - F̂_s̅_j_0,k_0(S_j_0,k_0; θ_0) ).

Inverting the test provides the 100(1-α)% confidence interval for θ_j_0,k_0 as

{ θ_0 ∈ ℝ : α/2 < F̂_s̅_j_0,k_0(S_j_0,k_0; θ_0) < 1 - α/2 }.

At last, Lemma <ref> notes that the above set is indeed an interval. This can be verified by showing that F̂_s̅_j_0,k_0(S_j_0,k_0; θ) is a monotone function in θ.

The function F̂_s̅_j_0,k_0(S_j_0,k_0; θ_j_0,k_0), based on our numerical approximation in (<ref>), is monotonically increasing in θ_j_0,k_0. We provide a proof for Lemma <ref> in the Appendix, which is adapted from Lemma A.1 in <cit.>. Algorithm <ref> outlines the steps to construct confidence intervals using our pivot.
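As a numerical illustration of the two algorithms, the sketch below computes the grid-based pivot and inverts it for a confidence interval using the monotonicity in Lemma <ref>. It assumes, as our own simplification, that the θ-independent log-terms of d(·, s̅_j_0,k_0), namely log of (det M(t, s̅_j_0,k_0))^(n-p-1)/2 Λ̂_j_0,k_0(t, s̅_j_0,k_0), have already been evaluated on the grid G and stored in logd0, with -np.inf at grid points where M(t, s̅_j_0,k_0) is not positive definite.

import numpy as np

def pivot(theta, grid, logd0, s_obs):
    logw = logd0 - theta * grid          # log d(t, .) = logd0(t) - theta * t
    logw = logw - np.max(logw)           # stabilize before exponentiating
    w = np.exp(logw)
    return w[grid <= s_obs].sum() / w.sum()

def confidence_interval(grid, logd0, s_obs, alpha=0.1, lo=-50.0, hi=50.0):
    # The pivot is increasing in theta, so each endpoint is found by bisection.
    def solve(target):
        a, b = lo, hi
        for _ in range(100):
            mid = 0.5 * (a + b)
            if pivot(mid, grid, logd0, s_obs) < target:
                a = mid
            else:
                b = mid
        return 0.5 * (a + b)
    return solve(alpha / 2), solve(1 - alpha / 2)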
§ SIMULATION
§.§ Setup
In this section, we assess the performance of our method in providing selective inference for the estimated edges in the GGM and its potential in recovering the conditional dependence relationships in the related graph. We generate our data from a multivariate Gaussian distribution with a sparse precision matrix. To do so, we closely follow the generative scheme described in <cit.>. First, we construct the symmetric, sparse precision matrix Θ. We introduce two parameters: m ∈ ℕ and c ∈ (0,1). These parameters control the number of edges per node (or, equivalently, the sparsity of the precision matrix) and the signal magnitude. Now, we consider the following steps.
* Suppose that each covariate (node) X_i corresponds to a point p_i ∈ ℝ^2, where the p_i's are independent and identically distributed samples from a 2-dimensional uniform distribution over the interval [0,1]×[0,1].
* Two nodes X_i and X_j are connected (θ_ij = θ_ji ≠ 0) at random with a probability of ϕ(d(i,j)/√p), where d(i,j) = ||p_i - p_j||_2 and ϕ is the standard Gaussian probability density function.
* The edges are generated according to the probability in step 2, and edges are randomly removed until the graph satisfies the requirement that ∑_j≠i 1{θ_ij ≠ 0} ≤ m.
* For each i, θ_ii = 1, and for connected nodes i, j, θ_ij is sampled randomly from the uniform distribution Unif(0, c/m).

We set Σ = Θ^-1 and sample n = 400 observations from a p = 20 dimensional multivariate Gaussian N_p(0, Σ). To solve the problem of randomized neighborhood selection, we use the penalty weights λ_i(α) = κ_i · 2√n σ̂_i Φ̅^-1(α/2p^2). Here, α is a parameter that controls the probability of falsely including an edge in the estimated model; it is set at 0.1. Φ̅ is the survival function of the standard Gaussian distribution, and σ̂_i^2 = 1/n (𝐗^[i])^⊤𝐗^[i] is the sample variance for the i-th covariate. The parameter κ_i is a scalar value that can be tuned in experiments. This value of the tuning parameter is taken from <cit.>, where it was shown to recover the true sparse support of the precision matrix with high probability. Finally, to estimate the conditional dependence relationships in the graph, we consider both the "AND" and "OR" logic rules for combining the neighborhoods based on 𝐄. Our method, which is depicted as "Proposed" throughout this section, is implemented with simple independent Gaussian randomization variables ω^[i] ∼ N_p-1(0, I_p-1,p-1) for i ∈ {1,2,…,p}.
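A minimal numpy sketch of this generative scheme is given below. The greedy enforcement of the degree cap in step 3 is one simple way to implement the random removal of edges described above; the helper name and seed handling are our own.

import numpy as np

def generate_instance(n=400, p=20, m=2, c=0.6, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(p, 2))                                  # step 1
    cand = []
    for i in range(p):
        for j in range(i + 1, p):
            d = np.linalg.norm(pts[i] - pts[j])
            prob = np.exp(-(d / np.sqrt(p)) ** 2 / 2) / np.sqrt(2 * np.pi)
            if rng.uniform() < prob:                                # step 2
                cand.append((i, j))
    Theta, deg = np.eye(p), np.zeros(p, dtype=int)
    for k in rng.permutation(len(cand)):                            # step 3
        i, j = cand[k]
        if deg[i] < m and deg[j] < m:
            Theta[i, j] = Theta[j, i] = rng.uniform(0, c / m)       # step 4
            deg[i] += 1
            deg[j] += 1
    # off-diagonal row sums stay below c < 1, so Theta is diagonally
    # dominant and hence positive definite and invertible
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)
    return X, Theta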
§.§ Metrics
To summarize the performance of our method, we compute the following metrics. Let Ê denote the selected set of nonzero parameters after applying the "AND" or the "OR" logic rule to combine the estimated neighborhoods based on 𝐄. Let 𝐄_0 denote the true sparse support of the precision matrix. First, we compute the coverage rate of the confidence intervals C_Ê,(j,k) for θ_jk whenever (j,k) ∈ Ê:

Coverage Rate = |{(j,k) ∈ Ê : θ_jk ∈ C_Ê,(j,k)}| / |Ê|

for each round of simulation. Next, we report the average length of the confidence intervals C_Ê,(j,k) = (L_Ê,(j,k), U_Ê,(j,k)):

Average Length = ∑_(j,k)∈Ê (U_Ê,(j,k) - L_Ê,(j,k)) / |Ê|.

To examine the accuracy in estimating the conditional dependence relationships in the graph, we compute F1 scores after conducting selective inferences. After selective inference, we decide which of the selected edges to include in the graph based on whether the confidence interval covers 0 or not. These edges are reported as statistically significant discoveries by our method. The F1 score is the harmonic mean of precision and recall and is given by the formula:

F1 = 2 × Precision × Recall / (Precision + Recall).

Here precision is equal to the proportion of true edges among the reported edges,

Precision = |𝐄_0 ∩ {(j,k) ∈ Ê : 0 ∉ C_Ê,(j,k)}| / |{(j,k) ∈ Ê : 0 ∉ C_Ê,(j,k)}|,

while recall is defined as the proportion of the reported edges among the set of true edges,

Recall = |𝐄_0 ∩ {(j,k) ∈ Ê : 0 ∉ C_Ê,(j,k)}| / |𝐄_0|.

§.§ Baseline for comparison
We compare our inferential results to those obtained from a common benchmark method known as data splitting. This method involves dividing the data into two independent parts. In the selection step, half of our data samples are used to solve the neighborhood selection problem and estimate the edge structure in the graph. In the inference step, the remaining half of the data samples are reserved to form confidence intervals for the selected edge parameters in the GGM. Below, we provide a brief description of a pivot based on data splitting. Borrowing similar notations as in the preceding section, let S^(1)_j,k and S̅^(1)_j,k = {S^(1)_j',k', for all (j',k') ≠ (j,k)} denote the sufficient statistics in the standard Wishart density for j ≠ k, but based on only 50% of the data samples. The superscript "(1)" distinguishes these statistics from the ones used in the last section, which use the full data. When inferring for θ_j_0,k_0, where the edge (j_0,k_0) is included in our graph from the selection step, we use the distribution of S^(1)_j_0,k_0. To obtain a pivot for the parameter of interest, we condition on all other sufficient statistics, given by S̅^(1)_j_0,k_0, in order to eliminate all nuisance parameters. Note that, with data splitting, the selected edges can be treated as fixed, since inferences are conducted on a new independent dataset. To be precise, the density of S^(1)_j_0,k_0 given S̅^(1)_j_0,k_0 = s̅^(1)_j_0,k_0, when evaluated at c ∈ ℝ, is given by

p_s̅_j_0,k_0(c; θ_j_0,k_0) = (det M(c, s̅^(1)_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 c) · 1_𝒫^p(M(c, s̅^(1)_j_0,k_0)) / ∫ (det M(t, s̅^(1)_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 t) · 1_𝒫^p(M(t, s̅^(1)_j_0,k_0)) dt.

The derivation of this one-dimensional density follows directly from the Wishart density of S^(1) = {S^(1)_j_0,k_0} ∪ S̅^(1)_j_0,k_0, after we condition on S̅^(1)_j_0,k_0 = s̅^(1)_j_0,k_0. A pivot for θ_j_0,k_0, using data splitting, is obtained immediately by applying a probability integral transform to this density, which is equal to

∫_-∞^S^(1)_j_0,k_0 p_s̅_j_0,k_0(c; θ_j_0,k_0) dc ∼ Uniform(0,1).

Inverting the pivot yields p-values and confidence intervals for our parameter in focus.

§.§ Approximate Pivotal Inference with Different Randomization Scales
To study the sensitivity of our inferential method to different randomizations ω^[i] ∼ N_p-1(0, Ω^[i]), we perform another simulation study, with data being generated identically as in the previous experiment, while randomizing the neighborhood selection problems with draws from multivariate Gaussian distributions with different covariance matrices Ω^[i]. In particular, we use an isotropic randomization term ω^[i] ∼ N_p-1(0, τ^2 I_p-1) with varying scales of τ^2. Theoretically, our method provides valid inference under different randomizations, but this experiment, which varies the scale of randomization, offers insights into the extent to which the selection quality is affected by solving a randomized estimation, as well as into the behavior of the power of the tests as we consider unequally noisy estimations. It is to be remarked that the results for different randomization covariances can provide empirical guidance for choosing a reasonable covariance scale in practice, as long as we standardize the data in both the simulations and the real data application. In the following simulation, we generate data with the (n,p) pair taking values in {(200,10), (400,20), (1000,50)}, m = 4, and c = 0.6, whereas in solving the randomized neighborhood selection, we set the penalty weight constant κ_i = 0.5 and vary the standard deviation of the randomizer τ ∈ {0.5, 1.0, 2.0}. We used the "AND" symmetrization logic because, for large randomizer scales, the selection of edges could be noisy, and the more conservative "AND" rule presumably helps control the selection quality. To provide additional assessments of power under different randomization scales, we define several metrics that measure selection power. First, to study the amount of noise that randomization injects into the selection procedure, we define the (selection) power as

Power = |Ê ∩ E^*| / |E^*|,

where E^* = {(j,k) : θ_jk ≠ 0} is the set of true edges. It is expected that the power may decrease as the randomization term's variance increases, in which case the noise may dominate the selection procedure.
Empirically, as the randomization scale increases, the size of Ê also increases, which implies that one encounters false positive selections in addition to the true signals. However, one can obtain a more trustworthy selection by viewing the set of edges whose confidence interval, constructed after observing Ê, does not include zero. We define this set of selected edges post-inference, Ê_CI(Ê), as

{(j,k) ∈ Ê : 0 ∉ C_Ê,(j,k)}.

Subsequently, we define quantities that measure the quality of edge selection based on inferential results. The post-inference edge-selection quality is measured by power post-inference (PPI), defined analogously as

PPI = |Ê_CI(Ê) ∩ E^*| / |E^*|.

Additionally, given that the selection procedure may offer a selection of edges Ê that contains false discoveries, to measure our method's capacity for discerning true signals from false discoveries, conditional on the selection Ê, we introduce the conditional power (CP), defined as

CP(Ê) = |Ê_CI(Ê) ∩ E^*| / |Ê ∩ E^*|.

As its name suggests, this metric measures the power of the test conditional on the selection Ê, which restricts our targets of inference to only those θ_j,k with (j,k) ∈ Ê. We similarly define some metrics of evaluation for this post-inference (abbreviated as PI) set of edges:

F1 score (PI) = 2 × precision (PI) × recall (PI) / (precision (PI) + recall (PI)),

where the post-inference precision and recall are defined according to

precision (PI) = |E^* ∩ Ê_CI(Ê)| / |Ê_CI(Ê)|,  recall (PI) = |E^* ∩ Ê_CI(Ê)| / |E^*|.

§.§ Findings
We summarize findings from 500 rounds of simulations in two main settings. In Setting I, we fix the graph connectivity parameter for each node at m = 2 and vary the signal magnitude parameter c ∈ {0.4, 0.5, 0.6, 0.7, 0.8}. In Setting II, we fix the signal magnitude parameter at c = 1 and vary the graph connectivity parameter m ∈ {1,2,3,4,5}. For both settings, we present a comparison between our method and the baseline method of data splitting using both the "AND" and "OR" rules to combine the selected neighborhoods. The plots in Figures <ref> and <ref> display error bar plots for the inferential and accuracy metrics in Setting I. Similarly, the plots in Figures <ref> and <ref> show error bar plots for the inferential and accuracy metrics in Setting II. Our method is depicted as "Proposed" in all of the plots. For each setting, our simulations show that both "Proposed" and data splitting achieve the target coverage rate, which is set at 90%. However, our "Proposed" method produces narrower intervals than data splitting, indicating better inferential power. This is due to the use of leftover information from the selection step for inference. Moreover, our method improves the accuracy of estimating the conditional dependence structure in the graph, as measured by F1 scores. This shows that the "Proposed" method strikes a better balance between the amount of information used in selecting edges through neighborhood selection and the amount of information used exclusively for selective inference. All patterns for selective inference and estimation accuracy hold consistently for the "AND" and "OR" rules of combining estimated neighborhoods.

§ CASE STUDY: PROMPT
Depression and anxiety, alongside sleep concerns and addiction, are rapidly escalating global health concerns, leading to increasing disability, lost productivity, and early deaths. Unfortunately, the current healthcare system, relying on traditional face-to-face therapy, is struggling to keep pace with the growing demand for mental health services.
The PROviding Mental health Precision Treatment (PROMPT) Precision Health Study is a 12-month mobile health intervention trial focused on augmenting standard care to improve health outcomes by using mobile health technologies to extend the reach outside the clinic. Adult patients (age 18+) who had a scheduled adult mental health intake appointment at either the Michigan Medicine Outpatient Psychiatry or University Health Service clinics were eligible for participation. Patients were required to have daily access to a smartphone in order to participate. Recruited patients entered the study at least 2 weeks prior to their initial clinic appointment. Patients were randomized to (1) either receive or not receive enhanced feedback (EF) via the study app (e.g., on step count and heart rate goals) and (2) either have access to an additional mental health application (App) or not. Patients could not be randomized to receive neither EF nor the App, and therefore the study randomized to three conditions (EF + standard of care, App + standard of care, or EF + App + standard of care). See here for additional details on the PROMPT study. Participants were tasked with completing surveys throughout the study, including an initial intake survey and surveys at 6 weeks, 18 weeks, and 12 months into their participation. Participants were also notified via the study app on a daily basis to rate their mood on a scale of 1-10. After consenting to the study and completing the intake survey, each study participant received a free Fitbit to wear daily for the duration of their time in the study. A key scientific aim of PROMPT is to better understand the complex relationships among treatment, baseline demographic information, survey responses, and mobile health signals. Here, we focus on the relationship among baseline survey instruments and wearable data collected via the Fitbit between 7 days prior to and 60 days after the baseline survey. This allows us to understand the relationship among these variables prior to the initial clinic visit. The intake survey included many standard, multi-item questionnaires such as the Patient Health Questionnaire (PHQ-9) and the General Anxiety Disorder (GAD-7). We pre-processed the intake survey to compute severity scores rather than analyzing individual items. Second, we summarized the 67 days of Fitbit data, which consisted of 15 daily variables. As some Fitbit data streams require user input, there is a substantial amount of missing data in several data streams. We limit ourselves to data streams with less than 20% missing data in each variable. The final list of variables is included in Table <ref> from Appendix <ref>. We then computed several summary statistics from the remaining daily wearable data, such as means and standard deviations. Our final complete dataset consists of N = 770 patients with 9 survey variables and 15 sensor variables. Similar to our simulations in the preceding section, we solve the randomized neighborhood selection with standard Gaussian random variables. We set the tuning parameters in the nodewise multivariate regressions according to (<ref>), where κ_i is set to 1. We applied the "OR" rule to combine the selected neighborhoods, resulting in the selection of 34 edges from a total of 276 possible edges. To construct selective inferences for these 34 edge parameters, we utilized Algorithm <ref>. In Figure <ref>, we include a plot of the graph depicting the estimated conditional dependence relationships between the survey and sensor variables.
The solid lines indicate edges that were significant post inference, and the dotted lines indicate edges that were included in the graph at the selection step but were no longer significant post inference. It is important to note that three of these edge parameters were deemed insignificant by our method, meaning that the confidence intervals returned for these parameters covered 0. This is accompanied by Tables <ref> and <ref>, which report the confidence intervals formed through selective inference. Our analysis suggests conditional independence between features from the wearable device and the baseline survey items. We can therefore conclude that there is not a strong relationship between various measures of physical activity and the severity scores among this population of treatment-seeking individuals. We can thus analyze the wearable and baseline survey items separately. Among the features constructed from Fitbit data, we recover natural relationships such as the conditional dependence between distance and activity calories. Interestingly, calories was found to be conditionally independent of distance given activity calories, suggesting that the calories burned during activity windows drive the overall distance covered by the individual. Among the baseline survey items, we recovered known relationships such as the GAD being conditionally dependent on the PHQ and the NEO, which measures neuroticism (N), extraversion (E), and openness (O). Interestingly, the Positive and Negative Suicide Ideation (PANSI) questionnaire was found to be conditionally independent of the GAD but dependent on the NEO and PHQ, with the effect seemingly larger for the PHQ. Most importantly, Tables <ref> and <ref> provide uncertainty quantification to ensure replication and reduce the risk of false discoveries from simply applying neighborhood selection to the observed mHealth data.

Table <ref>: Confidence Intervals for Sensor Data (pairwise selective confidence intervals among the Fitbit-derived variables).

§ CONCLUSION
Precision health studies seek to understand the complex relationships among treatment, baseline demographic information, survey responses, and mobile health signals.
This is achieved by learning the relevant conditional dependence relationships in a graphical model, which is equivalent to determining the presence or absence of an edge in the related graph. Although selecting edges and associating point estimates with the selected edges is prevalent, it can be misleading to report these edges without accompanying uncertainty estimates, given growing concerns about replicability. In this paper, we propose a method for attaching uncertainties to the selected edges of undirected graphical models by using a selective inference approach. Our focus in the paper is on the widely used neighborhood selection method, which estimates the conditional dependence relationships in a graph through nodewise multivariate regressions. Unlike the usual single regression framework, the selection of edges does not have a simple polyhedral representation. However, by utilizing external randomization variables, our method provides an exact adjustment factor to account for the selection of edges. This exact adjustment takes the form of a few simple sign constraints, which decouple across the nodewise regressions. To begin addressing selective inference in undirected graphical models, we considered inference on a single graph from Gaussian data. We believe that our current approach will pave the way for other crucial methods to tackle more general models and different types of data. For instance, this will involve extending the approach to a broader range of graphical models that encompass mixed data types, or performing integrative inferences for graphs that are aggregated across multiple time points or data sources. We leave these important extensions to future work.

SUPPLEMENTARY MATERIAL
§ PROOFS OF RESULTS
We start with an auxiliary result that we use in the proof of Proposition <ref> for constructing the selection-adjusted density of S:

Consider the mapping Π^[i](b, z) = α^[i] + Δ^[i][b; z] + γ^[i]. Then, the Jacobian associated with Π^[i], when viewed as a change of variables mapping from ω^[i] to (b, z), is equal to λ_i^r_i det(s_E_i,E_i + ϵI_q_i).

Let D_i(b, z) be the differential of the map Π^[i], given by D_i(b, z) = ∂Π^[i]/∂(b, z). Notice from the definition (<ref>) that Π^[i](b, z) = α^[i] + Δ^[i][b; z] + γ^[i], and therefore

D_i(b, z) = [ ∂Π^[i]/∂b  ∂Π^[i]/∂z ] = [ s_E_i,E_i + ϵI_q_i  0 ; s_R_i,E_i  λ_iI_r_i ].

Noting that D_i(b, z) is a lower block-triangular matrix, we conclude that

det D_i(b, z) = det(s_E_i,E_i + ϵI_q_i) det(λ_iI_r_i) = λ_i^r_i det(s_E_i,E_i + ϵI_q_i).

Observe that the conditional distribution of S has density proportional to

f_Θ(s) · ∏_i=1^p ∫_𝒦_i ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ) · det( s_E_i,E_i + ϵI_q_i ) db_E_i ∝ (det s)^(n-p-1)/2 · exp( -∑_j,k∈[p] θ_jk s_jk ) · 1_𝒫^p(s) × ∏_i=1^p ∫_𝒦_i ϕ( Π^[i](b_E_i, z_R_i); 0_p-1, Ω^[i] ) · det( s_E_i,E_i + ϵI_q_i ) db_E_i.

If we condition further on {S̅_j_0,k_0 = s̅_j_0,k_0}, then the conditional density of S_j_0,k_0 at c is proportional to

(det M(c, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 c) · 1_𝒫^p(M(c, s̅_j_0,k_0)) × ∏_i=1^p ∫_𝒦_i ϕ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i); 0_p-1, Ω^[i] ) · det( [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i ) db_E_i.

The density in the above display follows from the definition of M(·,·) and the observation that, conditional on S̅_j_0,k_0 = s̅_j_0,k_0, Π^[i](b_E_i, z_R_i) = Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i) for a fixed c ∈ ℝ. From the above-stated expression, we conclude that this conditional density is given by

(det M(c, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 c) Λ_j_0,k_0(c, s̅_j_0,k_0) · 1_𝒫^p(M(c, s̅_j_0,k_0)) / ∫ (det M(t, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 t) Λ_j_0,k_0(t, s̅_j_0,k_0) · 1_𝒫^p(M(t, s̅_j_0,k_0)) dt,

where

Λ_j_0,k_0(c, s̅_j_0,k_0) = { ∏_i=1^p det( [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i ) × ∫_𝒦_i ϕ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i); 0_p-1, Ω^[i] ) db_E_i }.
In the remaining part of the proof, we derive a simplified expression for the conditional density of S_j_0,k_0. In order to obtain this expression, we observe that any term in Λ_j_0,k_0(c, s̅_j_0,k_0) that does not depend on c will produce a constant factor in our density function. We consider three cases for each i ∈ [p], of which CASE II and CASE III, as defined later, give rise to an integral or a determinant term involved in Λ_j_0,k_0(c, s̅_j_0,k_0) that depends on c. We then derive a simplified expression for the conditional density of S_j_0,k_0 using these terms.

CASE I. {j_0 ∉ E_i and k_0 ∉ E_i}. Note that in this case, the quantities α^[i], Δ^[i], and hence Π^[i](c, s̅_j_0,k_0, ·, ·), do not depend on s_j_0,k_0 = c. Therefore, these terms can be disregarded from the conditional density.

CASE II. {j_0 ∈ E_i and k_0 ∉ E_i} or {j_0 ∉ E_i and k_0 ∈ E_i}. Recall that

Π^[i](c, s̅_j_0,k_0, b, z) = α^[i](c, s̅_j_0,k_0) + Δ^[i](c, s̅_j_0,k_0) [ b ; z ] + γ^[i] = -[M(c, s̅_j_0,k_0)]_-i,i + [ [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i,q_i  0_q_i,r_i ; [M(c, s̅_j_0,k_0)]_R_i,E_i  λI_r_i,r_i ] [ b ; z ] + γ^[i].

First, suppose that i ∈ [p]∖{j_0,k_0}. Then, we note that Δ^[i](c, s̅_j_0,k_0) depends on s_j_0,k_0 = c, while α^[i](c, s̅_j_0,k_0) = -[M(c, s̅_j_0,k_0)]_-i,i does not depend on the value of s_j_0,k_0. Next, suppose that i ∈ {j_0,k_0}. In this case, α^[i](c, s̅_j_0,k_0) depends on s_j_0,k_0 = c, while Δ^[i](c, s̅_j_0,k_0) does not depend on the value of s_j_0,k_0. In both cases, the determinant

det( [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i )

does not involve s_j_0,k_0 = c, as E_i does not contain both j_0 and k_0 at the same time. To sum up, for any such i, the contribution to Λ_j_0,k_0(c, s̅_j_0,k_0) is equal to

∫_𝒦_i ϕ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i); 0_p-1, Ω^[i] ) db_E_i.

CASE III. {j_0 ∈ E_i and k_0 ∈ E_i}. It is easy to see that the term Δ^[i](c, s̅_j_0,k_0) depends on s_j_0,k_0 = c, and so does the determinant det( [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i ). Therefore, for each such i, the contribution to Λ_j_0,k_0(c, s̅_j_0,k_0) is equal to

det( [M(c, s̅_j_0,k_0)]_E_i,E_i + ϵI_q_i ) × ∫_𝒦_i ϕ( Π^[i](c, s̅_j_0,k_0, b_E_i, z_R_i); 0_p-1, Ω^[i] ) db_E_i.

Combining the conclusions from the three possible cases leads us to note that Λ_j_0,k_0(c, s̅_j_0,k_0) can be replaced by its reduced form in the statement of the theorem, which completes the proof of the theorem.

The conditional density that we numerically compute to obtain our pivot is equal to

(det M(S_j_0,k_0, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 S_j_0,k_0) Λ̂_j_0,k_0(S_j_0,k_0, s̅_j_0,k_0) · 1_𝒫^p(M(S_j_0,k_0, s̅_j_0,k_0)) / ∫ (det M(t, s̅_j_0,k_0))^(n-p-1)/2 exp(-θ_j_0,k_0 t) Λ̂_j_0,k_0(t, s̅_j_0,k_0) · 1_𝒫^p(M(t, s̅_j_0,k_0)) dt

when evaluated at c = S_j_0,k_0. Note that this density is an exponential family density:

p(S_j_0,k_0; η) = exp[ η T(S_j_0,k_0) - A(η) ] h(S_j_0,k_0),

with the sufficient statistic T(S_j_0,k_0) = S_j_0,k_0 and the natural parameter η = -θ_j_0,k_0. Furthermore,

h(c) = (det M(c, s̅_j_0,k_0))^(n-p-1)/2 Λ̂_j_0,k_0(c, s̅_j_0,k_0) · 1_𝒫^p(M(c, s̅_j_0,k_0)),  A(-θ_j_0,k_0) = log ∫ exp(-θ_j_0,k_0 t) h(t) dt.

Therefore, it admits a monotone likelihood ratio; that is, for η_0 = -θ_0 < η_1 = -θ_1, the likelihood ratio p(S_j_0,k_0; η_1)/p(S_j_0,k_0; η_0) is a monotonically increasing function in S_j_0,k_0. This implies that for c_1 > c_0,

p(c_1; η_1) p(c_0; η_0) > p(c_0; η_1) p(c_1; η_0).

Now, applying the proof of <cit.>, we integrate over c_0 on (-∞, c), c < c_1, to obtain

p(c_1; η_1) F̂_s̅_j_0,k_0(c; θ_0) = ∫_-∞^c p(c_1; η_1) p(c_0; η_0) dc_0 > ∫_-∞^c p(c_0; η_1) p(c_1; η_0) dc_0 = p(c_1; η_0) F̂_s̅_j_0,k_0(c; θ_1).

Furthermore, integrating c_1 over (c, ∞) gives

(1 - F̂_s̅_j_0,k_0(c; θ_1)) F̂_s̅_j_0,k_0(c; θ_0) > (1 - F̂_s̅_j_0,k_0(c; θ_0)) F̂_s̅_j_0,k_0(c; θ_1),

and thus F̂_s̅_j_0,k_0(c; θ_0) > F̂_s̅_j_0,k_0(c; θ_1) for θ_0 > θ_1.
Hence _; s̅_j_0,k_0(c; θ) is monotonically increasing in θ.
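The conditional density above is a one-parameter exponential family with natural parameter η = -θ_j_0 k_0, so the pivot we numerically compute is simply its CDF evaluated at the observed value of S_j_0 k_0. The following is a minimal sketch of how that pivot could be evaluated once the base measure h(·) (which absorbs the determinant and Λ_j_0,k_0 factors) has been tabulated on a grid; the uniform grid, the Riemann-sum quadrature, and all variable names are illustrative assumptions rather than the implementation used in the paper.

```python
import numpy as np

def selective_pivot(c, theta, t_grid, log_h):
    """CDF of the density p(t; eta) ~ exp(-theta * t) h(t), evaluated at t = c.

    t_grid : uniform grid covering the support, used for quadrature
    log_h  : log of the base measure h evaluated on t_grid
    """
    log_w = -theta * t_grid + log_h
    log_w -= log_w.max()                 # stabilize before exponentiating
    w = np.exp(log_w)
    # Simple Riemann sums on the uniform grid; in the ratio the grid
    # spacing and the normalizing constant A(eta) both cancel.
    return w[t_grid <= c].sum() / w.sum()
```

The monotonicity of this pivot in θ, established above, is what allows it to be inverted into a confidence interval for θ_j_0 k_0.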
ADPBA: Efficiently generating Lagrangian cuts for two-stage stochastic integer programs

Xiaoyu Luo, School of Mathematical Sciences, Zhejiang University, Hangzhou, China, [email protected]; Mingming Xu, School of Mathematical Sciences, Zhejiang University, Hangzhou, China, [email protected]; Chuanhou Gao, School of Mathematical Sciences, Zhejiang University, Hangzhou, China, [email protected]

The use of Lagrangian cuts proves effective in enhancing the lower bound of the master problem within the execution of Benders-type algorithms, particularly in the context of two-stage stochastic programs. However, even the process of generating a single Lagrangian cut is notably time-intensive. In light of this challenge, we present a novel framework that integrates Lagrangian cut generation with an adaptive partition-based approach, thereby mitigating this time-related drawback to a considerable extent. Furthermore, we also discuss the dominance relationship between the generated partition-based Lagrangian cut and the Lagrangian cut for the original problem. To provide empirical evidence of our approach's efficacy, we undertake an extensive computational study encompassing instances with up to a thousand scenarios. The results of this study conclusively demonstrate the superiority and efficiency of the proposed methodology.

Two-stage stochastic integer program, Partition-based, Lagrangian cut, Fixed continuous recourse

§ INTRODUCTION

Two-stage stochastic programs are broadly applied to model many realistic production scenarios, such as the facility location problem (<cit.>), the railway timetable problem (<cit.>), and the network flow problem (<cit.>). Logically, they include deterministic decision variables in the first stage (often partly real-valued and partly integral) and real-valued decision variables in the second stage that are affected by the random occurrence of scenarios. The first-stage decision variables need to be determined before the random variable is revealed, and the second-stage decision variables are determined by an optimization problem parameterized by the first-stage variables and the random variables. The objective function of this class of programs is an expectation of the cost over every realization of the random variable (<cit.>).

Benefiting from the technique of sample average approximation, which approximates the distribution of the random variable <cit.>, the following extensive deterministic formulation often acts as a starting point to study a two-stage stochastic integer program (SIP):

min_x,y^s c^⊤ x+∑_s ∈ S p^s d^⊤ y^s,
s.t.  A x=b,
T^s x+W y^s ≥ h^s,  ∀ s ∈ S,
x ∈ℝ_+^n_1-p_1×ℤ_+^p_1, y^s ∈ℝ_+^n_2,  ∀ s ∈ S,

where x is the first-stage decision variable, p^s is the probability that scenario s occurs, c∈ℝ^n_1, A ∈ℝ^m_1× n_1, b ∈ℝ^m_1, T^s∈ℝ^m_2× n_1 and h^s∈ℝ^m_2 are scenario-specific, W∈ℝ^m_2× n_2 is the recourse matrix independent of the scenario (fixed recourse), S = {1, 2, ..., |S|} is the set of scenarios, and y^s is the second-stage decision variable determined after the stochastic scenario s is revealed. Note that here the second-stage cost vector d∈ℝ^n_2 is not scenario-specific, mainly for the convenience of aggregating y^s under the adaptive framework.
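For concreteness, the extensive form (<ref>) can be assembled directly as one large mixed integer program. The sketch below is a minimal illustration assuming dense numpy inputs (T and h given as per-scenario lists) and a generic MILP solver (scipy.optimize.milp, available in SciPy ≥ 1.9); it is meant only to fix notation, not to reflect the implementation used later in the paper.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def extensive_form(c, d, A, b, T, h, W, p, p1):
    """Assemble and solve the deterministic equivalent: variables are
    (x, y^1, ..., y^|S|); the last p1 coordinates of x are integer,
    matching x in R_+^{n1-p1} x Z_+^{p1}."""
    S, (m1, n1), n2 = len(p), A.shape, len(d)
    obj = np.concatenate([c] + [p[s] * d for s in range(S)])
    cons = [LinearConstraint(np.hstack([A, np.zeros((m1, S * n2))]), b, b)]
    for s in range(S):               # T^s x + W y^s >= h^s for every scenario
        row = np.zeros((T[s].shape[0], n1 + S * n2))
        row[:, :n1] = T[s]
        row[:, n1 + s * n2:n1 + (s + 1) * n2] = W
        cons.append(LinearConstraint(row, h[s], np.inf))
    integrality = np.zeros(n1 + S * n2)
    integrality[n1 - p1:n1] = 1      # integer part of the first stage
    return milp(obj, constraints=cons, integrality=integrality,
                bounds=Bounds(0, np.inf))
```

The block structure visible in the constraint loop is exactly what Benders-type decompositions exploit.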
The optimization problem of (<ref>) is essentially a large-scale mixed integer program (MIP), which is referred to as the original problem in this context. Generally, it is difficult to solve (<ref>) directly, especially when the number of scenarios is large. An alternative is to apply Benders decomposition (<cit.>) to make the problem tractable. Mathematically, the Benders decomposition approach breaks down (<ref>) into a master problem

min_x∈𝕏 c^⊤x + ∑_s∈ S p^sθ^s,
s.t.   θ^s≥ F^sx + g^s,  (F^s, g^s) ∈ℱ^s,  ∀ s∈ S,

with 𝕏 = { x: Ax = b, x ∈ℝ_+^n_1-p_1×ℤ_+^p_1}, and a subproblem

f^s(x) := min_y^s∈ℝ^n_2_+{d^⊤ y^s | T^s x+W y^s ≥ h^s }.

In (<ref>), ℱ^s is a collection of the Benders cuts, which are generated by the Benders subproblem (<ref>) and used to improve the lower bound of the Benders master problem (<ref>), and f^s(x) in (<ref>) represents the second-stage value function. Moreover, the Benders cut can be written in the form

θ^s ≥π^⊤(h^s - T^sx),

where π is any extreme point of the set {π∈ℝ^m_+: π^⊤W ≤ d^⊤}.

Despite being a viable method, Benders decomposition does not always reach the optimal solution efficiently. The main reason is that it essentially maps the linear relaxation of the original problem onto a lower-dimensional space, so the relaxed optimal solution often oscillates during the cutting-plane process and the generated Benders cuts cannot improve it accordingly. As a result, the strategy often converges slowly to the optimal solution.

To tackle the issue of slow convergence, many efforts have been made toward accelerating Benders decomposition within the framework of two-stage SIPs with finite scenarios. The main focus is on how to strengthen the Benders cuts. Bodur et al. (<cit.>) presented a unified framework called `cut and project' for two-stage SIPs with continuous recourse. They proved that cutting the Benders subproblem first results in a tighter relaxation of the Benders master problem than implementing the projection first. Zhang and Kucukyavuz (<cit.>) cut the subproblem iteratively to approximate the convex hull of the upper graph of the value function f^s(x); their work can handle situations where the second stage also contains integer variables. Rahmaniani et al. (<cit.>) proposed an innovative technique named Benders dual decomposition, aiming to improve the lower bound of the Benders master problem. This method exhibits a large potential to effectively improve the lower bound, but at the expense of transforming its associated subproblem from a linear program into a MIP. The cut generated by this subproblem is termed a `Lagrangian cut' (<cit.>) due to its close relationship with Lagrangian dual decomposition (<cit.>).

There are also studies that develop algorithms based on the problem structure to accelerate Benders decomposition. Song and Luedtke (<cit.>) proposed a framework called `adaptive scenario partition' for two-stage stochastic linear programming (SLP). They demonstrated the existence of a small sufficient partition in the case of a simple recourse problem. Under the assumption of fixed recourse, the fundamental concept behind the scenario partition is to cluster the scenario set and construct a relatively coarse lower approximation of the second-stage value function by aggregating the scenarios within the same cluster.
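Before moving on, it may help to make the cut-generation step concrete. The sketch below separates a Benders cut of the form θ^s ≥ π^⊤(h^s - T^s x) at a candidate first-stage point x̂, using the duals returned by a generic LP solver; the dense data layout and the SciPy/HiGHS interface are illustrative assumptions rather than part of the algorithms above.

```python
import numpy as np
from scipy.optimize import linprog

def benders_cut(x_hat, d, T_s, h_s, W):
    """Solve min{d^T y : W y >= h^s - T^s x_hat, y >= 0} and return the
    dual vector pi defining the cut theta_s >= pi @ (h_s - T_s @ x)."""
    rhs = h_s - T_s @ x_hat
    res = linprog(d, A_ub=-W, b_ub=-rhs, bounds=(0, None), method="highs")
    assert res.status == 0, "subproblem assumed feasible and bounded here"
    # HiGHS marginals are the duals of -W y <= -rhs; negating gives the
    # nonnegative duals pi of W y >= rhs.
    return -res.ineqlin.marginals
```

In a cutting-plane loop, the returned π is either an already-known extreme point (no progress) or defines a new cut added to ℱ^s.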
Pay and Song (<cit.>) integrated the aforementioned framework with `branch-and-cut' techniques to reduce the time consumed in solving the second-stage recourse problem. However, they did not make any attempt to address integer constraints at the root node. Further, Ackooij et al. (<cit.>) proposed an adaptive partition-based level decomposition, while Ramírez-Pico et al. (<cit.>) proposed adaptive Benders decomposition for two-stage SLP with fixed recourse.

In line with these studies, this paper is also concerned with how to accelerate Benders decomposition. Noting that the Lagrangian cut is effective in improving the lower bound of the Benders master problem (<cit.>) and that the adaptive partition-based strategy can reduce the time needed to solve the subproblems (<cit.>), we develop an adaptive partition-based Benders dual decomposition algorithm to generate Lagrangian cuts (APbLagC) by aggregating scenarios for the two-stage SIP of (<ref>), and thereby solve it in a short time. The main contributions of this paper can be summarized as follows: (i) we develop the APbLagC algorithm to efficiently generate Lagrangian cuts for two-stage SIPs with fixed recourse and continuous second-stage variables; (ii) we prove that in the case of a one-dimensional first-stage integer variable there is no Lagrangian duality gap for the concerned two-stage SIP, which further leads to the monotonicity of the optimal values of the Lagrangian dual relaxations of the partition-based problem along the refinement order of scenario partitions; (iii) we prove that the Lagrangian relaxations of the partition-based problem might be tighter than those of the original problem (<ref>), which stands in sharp contrast to the situation for their linear relaxations; (iv) we conduct numerical experiments showing that the APbLagC algorithm is effective in lifting the lower bound of the master problem by adding partition-based Lagrangian cuts.

The rest of this paper is organized as follows. Section 2 presents preliminaries on the scenario partition formulation of the two-stage SIP and on Benders dual decomposition. This is followed by Section 3, where the APbLagC algorithm is proposed and the generated partition-based Lagrangian cut is proved to be a valid inequality for the Benders formulation. In Section 4, some theoretical analyses are made on the dominance of the partition-based Lagrangian cut. Section 5 exhibits computational experiments that support the proposed APbLagC algorithm and the theoretical results. Finally, we conclude our work in Section 6.

§ PRELIMINARIES

In this section, we provide a brief introduction to the scenario partition formulation of the two-stage SIP (<cit.>) and to Benders dual decomposition (<cit.>).

§.§ The partition-based formulation of two-stage SIP

For the two-stage SIP of (<ref>), the main idea of its partition-based formulation is to aggregate scenarios according to a certain partition of the scenario set. Assume that 𝒩 = {𝒫_1,𝒫_2,...,𝒫_L} is a partition of the scenario set S satisfying 𝒫_1∪𝒫_2∪ ...∪𝒫_L = S and 𝒫_i ∩𝒫_j = ∅, ∀ i, j ∈{1,2,...,L}, i ≠ j. Then the partition-based problem can be stated as follows:

min_x,y^𝒫 c^⊤ x+∑_𝒫∈𝒩 p^𝒫 d^⊤ y^𝒫,
s.t.   T̅^𝒫 x+W y^𝒫≥h̅^𝒫,  ∀𝒫∈𝒩,
x ∈𝕏,  y^𝒫∈ℝ_+^n_2,  ∀𝒫∈𝒩,

where p^𝒫=∑_s∈𝒫p^s, y^𝒫=∑_s∈𝒫p^s y^s/p^𝒫, T̅^𝒫=∑_s ∈𝒫 p^s T^s/p^𝒫, and h̅^𝒫=∑_s ∈𝒫p^s h^s/p^𝒫. Compared with the original problem of (<ref>), the current one involves fewer scenarios and is accordingly easier to solve. We view it as a relaxation of (<ref>) and name it the aggregated problem.
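The aggregation that produces the partition-based problem is just a probability-weighted average of the scenario data, as the following minimal numpy sketch makes explicit; the dense per-scenario matrices and the list-of-index-lists partition format are assumptions for illustration.

```python
import numpy as np

def aggregate(partition, p, T, h):
    """Return (T_bar, h_bar, p_P) for each component P of the partition,
    with T_bar = sum_{s in P} p^s T^s / p_P and similarly for h_bar."""
    out = []
    for P in partition:                  # P: list of scenario indices
        p_P = sum(p[s] for s in P)
        T_bar = sum(p[s] * T[s] for s in P) / p_P
        h_bar = sum(p[s] * h[s] for s in P) / p_P
        out.append((T_bar, h_bar, p_P))
    return out
```

Refining the partition simply re-runs this step on smaller components, so the aggregated data can be rebuilt cheaply after every refinement.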
Naturally, different partitions will cause different partition-based formulations. The following concept of refinement is used to distinguish them. Let 𝒩_1, 𝒩_2 be two kinds of scenario partitions of S, then 𝒩_1 is a refinement of 𝒩_2 if ∀𝒫∈𝒩_1, 𝒫⊂𝒫^' for some 𝒫^'∈𝒩_2 and |𝒩_1 | > |𝒩_2 |. We give an example to exhibit refinement.Let S = {1, 2, 3, 4} be the scenarios set, and 𝒩_1 = {{1}, {2}, {3, 4}} and 𝒩_2 = {{1, 2}, {3,4}} are two kinds of scenario partitions, then 𝒩_1 is a refinement of 𝒩_2. The refinement notion will play an important role on approximating (<ref>) through (<ref>). <cit.> developed the adaptive partition-based algorithm, given inAlgorithm <ref>, to follow this logic. The algorithm iteratively solves the aggregated problem by adaptively updating the partition 𝒫 (towards more refined partition) according to the second-stage optimal dual solutions. As a result of refinement every time, the aggregated problem (<ref>) will be closer to the original problem (<ref>). Note that in this algorithm it is designed to solve the aggregated problem (<ref>) directly without considering the partition information in the previous steps. <cit.> further developed adaptive partition based level decomposition algorithms that utilize these information. We will follow this line to push our work.§.§ Benders dual decompositionBenders dual decomposition (<cit.>) provides a kind of effective way to improve the lower bound of the Benders master problem, but the separation problem to generate Lagrangian cuts is not designed well. The situation was improved by Chen and Luedtke (<cit.>) who reformulated the original problem of (<ref>) as the following Benders model: min _x, θ^s{c^⊤ x+∑_s ∈ S p_s θ_s:(x, θ^s) ∈ E^s,  s ∈ S}, E^s={(x, θ_s) ∈𝕏×ℝ: A x ≥ b, θ_s ≥ Q_s(x)},Q_s(x)=min _y{d^⊤ y: W y ≥ h^s-T^s x,  y ∈ℝ^n_2_+}. Note that there include integer constraints on the first-stage variables x, so unlike the generation of Benders cuts where a linear programming Benders subproblem is solved, it needs to solve a MIP to generate Lagrangian cuts. For this purpose, they designed the optimization problem Q̅_s^*(π, π_0)= min _x{π^⊤ x+π_0 Q_s(x): A x ≥ b, x ∈𝕏}= min _x, y{π^⊤ x+π_0(d)^⊤ y:(x, y) ∈ K^s}, where (π,π_0) ∈ (ℝ^n×ℝ^+) and K^s:={x ∈𝕏, y ∈ℝ^n_2_+: A x ≥ b, T^s x+W^s y ≥ h^s}. The separation problem thus followsL_s(x̂)=max_(π,π_0)∈π_s{Q_s^∗(π,π_0) - π^⊤x̂ - π_0θ̂^s:(π,π_0) ∈Π_s},with Π_s to be any compact subset of ℝ^n×ℝ^+. The inequality π^⊤ x+π_0 θ^s ≥Q̅_s^*(π, π_0)is called a `Lagrangian cut', which is also a valid inequality for the Benders master problem (<ref>). The reason that Lagrangian cuts can improve its lower bound may be ascertained by the following lemma.(<cit.>)The feasible region defined by all the Lagrangian cuts is equivalent to that of the Lagrangian dual relaxation obtained by dualizing the so-called `nonanticipativity constraint', that is {(x,θ^s)_s∈ S: (x,θ^s)∈ conv(E^s),  ∀ s∈ S}. Clearly, the generated Lagrangian cuts manage to characterise the convex hull of E^s while the Benders cuts could only characterise the linear relaxation of E^s for every scenario s ∈ S. Therefore, the Lagrangian cut is much more effective in improving the lower bound and accelerating the algorithmic convergence.§ THE PARTITION-BASED LAGRANGIAN CUTDespite proving powerful in improving the lower bound of the Benders master problem, it is very time-consuming to generate Lagrangian cuts due to solving MIP subproblems and the sheer number of scenarios. 
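Each evaluation of Q̄_s^*(π, π_0) in the separation problem is itself a MIP over K^s, which is the main source of the cost just mentioned. A minimal sketch with a generic MILP solver follows; the dense data layout, the convention that the last p_1 coordinates of x are integer, and the solver choice are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def Q_bar_star(pi, pi0, d, A, b, T_s, h_s, W, p1):
    """Evaluate Q_bar*_s(pi, pi0) = min{pi^T x + pi0 d^T y : (x, y) in K^s},
    the right-hand side of the Lagrangian cut pi^T x + pi0 theta_s >= Q_bar*_s."""
    n1, n2 = len(pi), len(d)
    obj = np.concatenate([pi, pi0 * d])
    cons = [
        LinearConstraint(np.hstack([A, np.zeros((A.shape[0], n2))]), b, np.inf),
        LinearConstraint(np.hstack([T_s, W]), h_s, np.inf),
    ]
    integrality = np.zeros(n1 + n2)
    integrality[n1 - p1:n1] = 1          # integer first-stage coordinates
    res = milp(obj, constraints=cons, integrality=integrality,
               bounds=Bounds(0, np.inf))
    return res.fun
```

In the cutting-plane separation of L_s(x̂), this evaluation is repeated for every trial pair (π, π_0) and for every scenario, which is why cut generation dominates the run time.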
To increase the efficiency of generating Lagrangian cuts, we borrow the technique of aggregating scenarios adopted to Benders cuts (<cit.>) to “reduce" the number of scenarios and accordingly to shorten the solving time. We implement this technique on the partition-based problem (<ref>). The generated Lagrangian cut in this way is termed as the “partition-based Lagrangian cut" (PbLagC) in the context. Imitating (<ref>), we can directly write out the current one to be π^⊤ x+π_0 θ^𝒫≥Q̅_𝒫^*(π, π_0),where θ^𝒫=∑_s ∈𝒫 p^sθ^s/∑_s ∈𝒫 p^s, Q̅_𝒫^*(π, π_0) = min _x, y{π^⊤ x+π_0(d)^⊤ y:(x, y) ∈ K^𝒫} and similarly, K^𝒫:={x ∈𝕏,  y ∈ℝ^n_2_+: A x ≥ b,  T^𝒫 x+W y ≥ h^𝒫}. It is straightforward to get the following proposition. The partition-based Lagrangian cut (<ref>) is a valid inequality for the Benders formulation (<ref>). Proof Given a 𝒫∈𝒩, since every feasible point in the Benders formulation (<ref>) can be stated as {(x,θ^s)_s ∈𝒫:(x,θ^s) ∈ K^s fors ∈𝒫}, (x,θ^𝒫) belongs to K^𝒫. Also, from the expression of Q̅_𝒫^*(π, π_0), we have π^⊤ x+π_0 θ^𝒫≥Q̅_𝒫^*(π, π_0), which means inequality (<ref>) is valid for the Benders formulation (<ref>).Proposition 1 implies that the inequality (<ref>) is qualified to enhance the lower bound of the Benders master problem. Some practical instances, like vehicle routing problem with stochastic demands (<cit.>), also demonstrate the enhancement efficacy through partitioning scenarios. It is expected that the inequality (<ref>) can significantly improve the lower bound in high efficiency. For this purpose, PbLagCs are embedded into the Benders-type branch-and-cut framework. Here, we only focus on the root node, and propose APbLagC algorithm to finish the setting task, given in Algorithm <ref>. In this algorithm, PbLagCs will work together with PbBenCs to improve the lower bound of the Benders master problem. Firstly, at a given scenario partition 𝒩, PbBenCs are added to the branch-and-cut tree to improve lower bound after they are generated from solving the partition-based Benders subproblemf_𝒫(x̂) = min_y^𝒫∈ℛ_+^n_2{d^⊤ y^𝒫| Wy^𝒫≥h^𝒫 -T^𝒫x̂}with 𝒫∈𝒩 and x̂ to be a first-stage solution. This process will go on until there are no more Benders cuts that can be separated. Then, PbLagCs are generated through solving the partition-based Lagrangian subproblemL_𝒫(x̂) = max_(π,π_0)∈π_P{Q_𝒫^∗(π,π_0) - π^⊤x̂ - π_0θ̂_𝒫:(π,π_0) ∈Π_𝒫} for each 𝒫, where Π_𝒫 is also any compact subset of ℝ^n×ℝ^+, and are further added to the branch-and-cut tree to observe the change of lower bound of the Benders master problem. When the lower bound fails to decrease significantly, we update the scenario partition towards more refined one and make a new round of solving until the termination condition is true. The detailed refinement operation (<cit.>) follows: (1) at a given partition 𝒩, ∀𝒫∈𝒩, denote the optimal dual multipliers of (<ref>) by λ̂^s for any s∈𝒫;(2) let {𝒦^1,...,𝒦^M} be a partition of 𝒫 such that |λ̂^s - λ̂^s'|≤δ, ∀ s,s' ∈𝒦^m and ∀ m = 1,...,M, where δ > 0 is a given threshold;(3) remove 𝒫 from 𝒩 and add components 𝒦^1,...,𝒦^M to it. The solving will stop at a certain scenario partition 𝒩. If |𝒩|≪| S |, the time to generate PbLagCs will significantly decrease. § DOMINANCE ANALYSISIn this section, we give some theoretical analysis on dominance relation between Lagrangian cuts and PbLagCs to support Algorithm <ref>. 
Here, the dominance is defined as follows.Given two classes of benders-type inequalities{θ≥α_1^ix + β_1^i, i ∈ I} and {θ≥α_2^jx + β_2^j, j ∈ J},the latter is said to dominate the former in the weak sense if ∀ x ∈𝕏 that satisfies the latter must satisfy the former. The following proposition exhibits the dominance relation between Benders cuts and PbBenCs. Denote any extreme point of the dual problem of (<ref>) by λ̂^𝒫, then PbBenCs given by θ^𝒫≥(h^𝒫-T^𝒫x)^⊤λ̂^𝒫are valid inequalities for the Benders formulation of (<ref>). Moreover, they must be dominated by some Benders cuts.Proof Note that it is not a new result that PbBenCs of (<ref>) are valid for (<ref>) (<cit.>). However, we will provide a more concise proof towards it which allows to get dominance relation meanwhile. Since (<ref>) has a constant recourse matrix W, the set of extreme points of the dual problem for the partition-based Benders subproblem (<ref>) is the same as that for the Benders subproblem (<ref>). Therefore, the PbBenCs given in (<ref>) are in fact a convex combination of Benders cuts, so they are also valid for (<ref>), and moreover, they are weaker than the corresponding Benders cuts according to definition <ref>. Proposition <ref> implies that the role of every partition-based Benders cut can be replaced by some Benders cuts. Therefore, PbBenCs can not cut off any point in the feasible region of the linear relaxation of the Benders master formulation (<ref>). At this point, the result seems discouraging to demonstrate the role of scenario partition. However, for Lagrangian cuts the situation may be quite different. To this argument, we introduce an extension of Jenson's inequality (<cit.>).(<cit.>) Consider a convex function g:ℝ^n→ℝ, and an integer cubic with volume 1 and vertexes represented by {x_i}_i = 1^2^n. For a point p within the cubic that can be expressed by a convex combination of these vertexes, that is p = ∑_iλ_ix_i, then for any other convex collection of integer points {y_j}_j = 1^m for p, i.e., p = ∑_jν_jy_j, there is g(p)≤∑_iλ_ig(x_i)≤∑_jν_jg(y_j).In addition, under the circumstance of dominance analysis of PbLagCs, based on (<ref>) the recourse value f^s for scenario s at a given first-stage variable x̂ is referred to as the minimum value of the upper graph of the Lagrangian dual relaxation, i.e., f^s(x̂) = min{θ^s, (x̂, θ^s) ∈ conv(E^s)}. Note that this reference is different from (<ref>) except when x̂ is an integer point. Utilizing Lemma <ref> and the above reference of the recourse value, we can analyze the Lagrangian dual gap for the original problem in the case of 1-dimensional first-stage integral variable, and further discuss the corresponding partition-based problem. In the case of 1-dimensional first-stage integral variable of the original problem, i.e., min c^⊤ x+∑_s ∈ S p^sd^⊤y^s,T^s x+Wy^s ≥ h^s , ∀ s ∈ S,x ∈𝕏, 𝕏⊆ℤ, y^s ∈ℝ_+^n_2, ∀ s ∈ S,we have (1) there is no Lagrangian duality gap for this SIP problem;(2) 𝒩_1, 𝒩_2 are two kinds of partitions of S such that 𝒩_1 is a refinement of 𝒩_2, then z^𝒩_1≥ z^𝒩_2, where z^𝒩 denotes the optimal solution of (<ref>).ProofWe continue the proof in two separate items: (1) the given SIP problem can be reformulated as: min c^⊤ x+∑_s ∈ S p_sθ^s, θ^se ≥h̅^s - T̅^sx , ∀ s ∈ S,x ∈𝕏, 𝕏⊆ℝ, θ^s ∈ℝ,  ∀ s ∈ S,where e is a dimension-suited vector with all entries to be 1. 
We can rewrite E^s to be E^s = {(x,θ^s):x ∈𝕏,θ^s e ≥h̅^s - T̅^sx } and try to prove the claim that ∀ s ∈ S and ∀ (x,θ^s) ∈ conv(E^s), there is (x,θ^s) = λ_1(x_1,θ_1^s) + λ_2(x_2,θ_2^s), where non-negative weights λ_1+λ_2=1 and (x_1,θ_1^s), (x_2,θ_2^s) ∈ E^s. Clearly, the first-stage integral variable x can be written as x = μ⌊ x ⌋ + μ⌈ x ⌉, where μ = ⌈ x ⌉ - x and μ = x - ⌊ x ⌋ satisfying μ+μ=1. Since (x,θ^s) ∈ conv(E^s), it can be represented as a convex combination of points in E^s, that is (x,θ^s) = ∑_i = 1^nλ_i (x_i,θ_i), where (x_i,θ_i) ∈ E^s. Therefore, θ^s ≥∑_i = 1^nλ_i Q_s(x_i) ≥μQ_s(⌊ x ⌋) + μQ_s(⌈ x ⌉), where the last inequality comes from Lemma <ref>. Hence, by setting λ_1=μ and λ_2=μ the above claim is true. Namely, any point in the region of Lagrangian relaxation can be represented as a convex combination of two feasible points. This means there is no Lagrangian duality gap for the considered SIP. (2) The result is straightforward from that of (1). This proposition suggests that the Lagrangian dual gap vanishing may result from the unifying convex combination expression for different Lagrangian feasible point in different scenario, which conversely explains where the Lagrangian dual gap comes from, i.e., from inconsistent convex combination expression. In the following, we present a sufficient condition to say the recourse value will decrease when scenarios are aggregated, which is the same phenomenon as the linear relaxation. For the original SIP problem (<ref>), assume x̂ to be a fixed first-stage solution, s_1, s_2 to be two scenarios with probability weights p_s_1 and p_s_2, respectively, and they are aggregated into class 𝒫={s_1,s_2}. If forpoints (x_i, θ^s_j_i) ∈ E^s_j, i ∈ I, j=1,2, there are (x̂, f^s_1(x̂)) = ∑_i ∈ Iλ_i(x_i, θ^s_1_i), and (x̂, f^s_2(x̂)) = ∑_i ∈ Iλ_i(x_i, θ^s_2_i), where λ_i > 0 and ∑_i ∈ Iλ_i = 1, then we havef_𝒫(x̂) ≤p_s_1f^s_1(x̂) + p_s_2f^s_2(x̂)/p_s_1 + p_s_2, where f_𝒫(x̂) follows the expression of (<ref>). Proof From E^𝒫={(x, θ^𝒫) ∈𝕏×ℝ: A x ≥ b, θ^𝒫≥ Q_𝒫(x)}, we have (x_i, θ^𝒫_i) ∈ E^𝒫, where θ^𝒫_i = p_s_1θ^s_1_i + p_s_2θ^s_2_i/p_s_1 + p_s_2. Also, since (x̂, f^s_j(x̂)) = ∑_i ∈ Iλ_i(x_i, θ^s_j_i), we get (x̂, θ̂^𝒫)∈ conv(E^𝒫). Further, from p_s_1(T^s_1x̂ + Wŷ^s_1) + p_s_2(T^s_2x̂ + Wŷ^s_2) ≥ p_s_1h^s_1 + p_s_2h^s_2, where ŷ^s_j is the optimal solution for f^s_j(x̂), we obtain (<ref>) immediately. Proposition <ref> renders that if the mentioned condition is true, then PbLagCs are dominated by some Lagrangian cuts, which implies that if this condition is not true, it is possible to get dominating PbLagCs compared with the Lagragian cuts.PbLagCs are not necessarily dominated by any Lagrangian cut for the original SIP problem of (<ref>). Proof We use an simple example to support the theorem, which includes two scenarios with equal probability of occurrence, and has the feasible domain characterized by the following two linear systems: [ (I): {[z ≥ x-y; z ≥ -x+y ].,        (II): {[ z ≥ 1 -x - y; z ≥ x + y -1 ].. ]Here, (x,y) ∈{0,1}^2 are the first-stage variables and z ∈ℝ is the second-stage variable. Figure 1 illustrates the feasible domain. As can be seen,in the point (1/2, 1/2, 0) the weighted mean of the two graphs in terms of z axis is strictly lower than 1/2, which results in the Lagrangian duality gap. When the scenario partition technique is adopted, the aggregated problem becomes{[z ≥1/2 - y; z ≥ y - 1/2 ]., which even eliminates the duality gap of the Lagrangian relaxation. 
Based on Lemma <ref>, the feasible region defined by all the Lagrangian cuts is equivalent to that of the Lagrangian dual relaxation, so we get the statement of Theorem <ref>.Theorem <ref> means that PbLagCs, unlike PbBenCs, may cut off some region within the Lagrangian dual relaxation. It is thus possible to obtain a tighter relaxed feasible region than (<ref>). Moreover, due to aggregation of scenarios, the number of scenarios looks “smaller" in the aggregated problem (<ref>), which will lead to less time to generate PbLagCs than to produce the Lagrangian cuts.§ EXPERIMENTAL STUDIESIn this section, we implement experimental studies to test the proposed Algorithm <ref>.§.§ Implementation detailsThree classes of two-stage SIP problems, including stochastic service location problem (sslp), a variant of the stochastic service location problem (sslpv), and stochastic multi-commodity flow problem (smcf) with different sizes are considered. The sslp problem (<cit.>) is a two-stage SIP with pure binary first-stage and mixed-binary second-stage variables. In this problem, the decision maker has to choose from n_1 sites to allocate servers with cost in the first stage. Then in the second stage, the availability of each client would be observed and every available client must be served at some site also with cost. The objective is to minimize the total cost. Note that integer variables are involved in the second stage of this problem, but to adapt to our specific context, we relax the integer restriction here, as done by Song and Luedtke (<cit.>). The smcf problem (<cit.>) contains pure binary first-stage and continuous second-stage variables, in which the decision maker has to choose some edges with capacity constraint from the node-edge graph to transfer commodity flows. Then in the second stage, the demand of each commodity is available and must be transferred from its original node to the destination node by the chosen edges. Some information about these problems is presented in Table <ref>. In the subsequent experiments, the instances for sslp and sslpv are generated according to the method proposed by (<cit.>) while the smcf instances are drawn upon from the work of Crainic et al. (<cit.>). In smcf, we consider the instances 'r04' and the stochastic demands are generated following the approach of (<cit.>). In the experiments, we follow the method put forth by (<cit.>) to execute the Benders dual decomposition, and the generated Lagrangian cuts are separated through the cutting-plane technique. When generating PbLagCs, i.e., running Algorithm <ref>, the master problem possesses a unified structure for consistence within each scenario partition whilethe separation problem adopts (<ref>). Based on the setting in (<cit.>) and (<cit.>), the refinement parameters δ and κ_1 take δ=2/n^2 with n to denote the number of refinements and κ_1=0.2. We also conduct comparative experiments using the Benders dual decomposition algorithm and the classic Benders decomposition algorithm on the mentioned three classes of two-stage SIP problems. In addition, note that the refinement parameters have a large effect on experimental results, e.g., δ will affect the effectiveness of the adaptive algorithm greatly, we thus further provide testing experiments of Algorithm <ref> on different refinement parameters.All the experiments are conducted on a Windows laptop with 8GB RAM and an Intel Core i5-7200U processor running at 2.5GHz with the optimization solver Gurobi 9.5.0 and the Python (VS Code) compiler environment. 
The solving time limit for Algorithm <ref> is set to 1 hour. During the process of Benders dual decomposition, we stop generating Lagrangian cuts when the gap closed by the last five iterations is less than 5% of the total gap closed so far or the time limit is exceeded. The following notations identify the details of our experimental results.

- Partition-b&d: partition-based Benders dual decomposition (Algorithm <ref>);
- B&D: Benders dual decomposition according to (<cit.>);
- Benders: classic Benders decomposition applied to the linear relaxation of the problem;
- T: computational time (in seconds) of an algorithm on the instance;
- Ccut: the number of PbLagCs added in the algorithm;
- Fcut: the number of Lagrangian cuts or Benders cuts added in the algorithm;
- Refine: the number of refinement operations;
- Lower bound: the final optimal value of the master problem;
- |𝒩|: the final partition size.

§.§ Results and discussions

Based on the given experimental details, we undertake the corresponding experiments on the instances of sslp, sslpv and smcf, with the results shown in Table <ref>, Table <ref> and Table <ref>, respectively. In those tables, the lower bounds, the number of cuts added (Ccut or Fcut) and the computational time (T) are reported for Partition-b&d, B&D and Benders, with the best results for every instance identified in bold. As can be seen, for most of the instances our proposed APbLagC algorithm surpasses the Benders dual decomposition algorithm both in the time required for enhancing the lower bound and in the count of cuts incorporated into the master problem. Moreover, the lower bound obtained by the APbLagC algorithm is higher in many instances, with the associated computational time significantly reduced. The main reason is that the size of the final scenario partition, i.e., |𝒩|, is notably smaller than the size |S| of the corresponding initial scenario configuration. Of course, there are some instances for which our algorithm performs less well, and even worse than the original Benders dual decomposition, such as those bold results in the B&D column of Table <ref>. A possible reason is that the APbLagC algorithm exhibits no advantage on instances with a small number of scenarios. Additionally, it should be mentioned that some instances run over the 3600 s time limit for B&D, because a whole round of cut generation must be completed.

In order to observe more visually the evolving trends of the lower bounds over time obtained by Partition-b&d and B&D, we display some representative instances for sslp, sslpv and smcf in Figure <ref>, Figure <ref> and Figure <ref>, respectively. It is obvious that our proposed adaptive algorithm improves the lower bound rapidly. All of the exhibited curves rise rapidly towards the optimal values at the beginning, and then slow down gradually after refinement is implemented. This further demonstrates that the APbLagC algorithm is very efficient in solving two-stage SIP problems compared to B&D.
It also needs to be pointed out that for some instances, the APbLagC algorithm ends with nearly the same time and lower bound as B&D, which may result from a relatively large scenario partition being needed during solving, so that a great deal of time is consumed to obtain a slight improvement of the lower bound; see, e.g., the instance r04.2-1000 in Figure <ref> (b) and Table <ref>. Naturally, there is room to improve the efficiency of our proposed algorithm, for example by considering more elaborate (heuristic) refinement policies and termination conditions. We do not discuss this further here, but leave it as a point for future study.

As mentioned in Subsection <ref>, the refinement parameter δ has a large effect on the experimental results; it is set to 2/n^2 in the above experiments. In the following, we test its impact by setting it to different values, namely δ = 2/n^2 and δ = 1/n^2. To compare more intuitively, we treat the result at δ = 2/n^2 as the baseline; that is, when implementing the partition-based algorithm with δ = 1/n^2, we attempt to make the lower bound attain the former's, and then observe the running times. Table <ref> reports the corresponding results. As can be seen, for instances of sslp and sslpv, δ has a tremendous impact on the algorithmic effectiveness. Basically, δ = 1/n^2 may result in a larger scenario partition size and therefore consume much more time. Nevertheless, it is not always the case that the larger δ performs better. The exceptions happen for the instances sslp2_20_100_50, sslpv1_30_70_200, sslpv2_20_100_50 and sslpv2_20_100_200, where the smaller δ results in a smaller final scenario partition size and consumes less running time. This is because the larger scenario partitions in the initial rounds reduce the number of refinements in these instances. For instances of smcf, the impact of δ is relatively small. In addition, we also tested the setting δ=0.5/n^2, whose results are similar to those at δ = 1/n^2, and we thus omit it here. More research may be needed on how to set the hyper-parameters and on developing more effective heuristic refinement methods that keep the scenario partition size as small as possible.

§ CONCLUSION

In this paper, we develop the APbLagC algorithm, based on Benders dual decomposition, for solving two-stage SIPs with continuous recourse. The generated partition-based Lagrangian cuts are proved to be not necessarily dominated by any Lagrangian cut. We conduct extensive experiments using the algorithm on various test instances commonly employed in the literature. Our experiments demonstrate that the proposed algorithm outperforms Benders dual decomposition in terms of computational time, the number of Lagrangian cuts added, and the ability to enhance the lower bound. However, similar to previous adaptive algorithms for two-stage SIP, the proposed approach is applicable only to cases with continuous recourse. Therefore, a potential future research direction is to extend the adaptive framework to more general settings. Furthermore, we plan to investigate how the structural characteristics of the SIP influence the lower bound of the Lagrangian relaxation during scenario partitioning, and to develop more effective heuristic refinement techniques.

This work was funded by the National Natural Science Foundation of China under Grants No. 12320101001 and 12071428.
In this paper we present a novel beamforming technique that can be used with an array of quantum sensors. The transmit waveform is a short-duration frequency comb constructed using a finite number of sinusoidal tones separated by a fixed offset. Each element in the array is tuned to one of the tones. When the radiated signal is received by the aperture, each array element accumulates phase at a different rate since it is matched to only one frequency component of the comb waveform. The result is that over the duration of the received pulse, progressively higher spatial frequencies are generated across the aperture. By summing the outputs of all the array elements, a strong peak is created in k-space at the precise time instant when the phases of all the array elements align. The k-space coordinates of the output can then be transformed to angles as discussed in the paper. This paper also describes how to set waveform parameters and the separation between array elements. A desirable advantage of the proposed approach is that the received signal is amplified by the coherent integration gain of the entire spatial aperture.

synthetic aperture, array, k-space, spatial frequency, Rydberg quantum sensor

§ EMERGING QUANTUM SENSING TECHNOLOGIES

One of the most intriguing emerging technologies for sensing propagating radio frequency (RF) radiation is the use of Rydberg atom probes. These quantum probes have many unique features that set them apart from traditional antennas and offer several advantages. First, the measured electric field strength is traceable to the International System of Units (SI) <cit.>. Furthermore, direct down-conversion of the RF field to baseband by the atoms reduces the need for back-end electronics <cit.>. Also, intrinsic ultra-wideband tunability from kilohertz to terahertz frequencies with a single probe and a tunable laser is possible <cit.>.

§ CONVENTIONAL SYNTHETIC APERTURE AND PHASED ARRAY BEAMFORMING

The field v(𝐱, t) radiating outward from a signal source as a function of position 𝐱 and time t is given by

v(𝐱, t) = e^-j2π/λd(𝐱)e^j(2πft + ϕ(t)),

where d(𝐱) is the distance between the signal source and the receive location 𝐱 = [x y z]^T, f is temporal frequency, and ϕ(t) is a time-varying phase term due to waveform modulation or Doppler shift. Eqn. (<ref>) is valid for any range. Beyond the nominal distance of 2D^2/λ, a plane wave approximation is used to characterize the propagating fields. Here D refers to the largest dimension of the physical antenna and λ is the operating wavelength. At far distances, the field of a propagating monochromatic plane wave is approximated by

s(𝐱, t) = e^j2π(-𝐤^T𝐱 + ft) + jϕ(t),

where

𝐤 = (1/λ)[sinθcosϕ  sinθsinϕ  cosθ]^T ≜ [k_x  k_y  k_z]^T

is the spatial frequency vector. The angles (θ,ϕ) are spherical coordinates and the vector 𝐤 represents the number of wavelengths per unit distance in each of the three orthogonal spatial directions.

In narrowband phased array or synthetic aperture applications, beamforming is used to focus the array gain towards different directions and to identify signal sources. The baseband signal outputs s_mn(t) at time t for an array with MN elements can be stacked into a spatial vector 𝐬(t) as

𝐬(t) = [ s_00(t)  s_01(t)  …  s_MN-1(t) ].

The MN × 1 steering vector 𝐚(u,v) contains the interelement phase shifts for a narrowband plane wave traversing across the aperture from a direction (θ,ϕ),

𝐚(u,v) = [ e^-j2π/λ(md_xu + nd_yv) | 0≤ m≤ M-1, 0≤ n≤ N-1 ]^T,
where m=0,…,M-1 is the element index in the x-direction, n=0,…,N-1 is the element index in the y-direction, d_x is the spacing between elements in the x-direction, d_y is the spacing between elements in the y-direction, and

u = sinθcosϕ,
v = sinθsinϕ.

The beamformed output b(u,v) in the direction (u,v) is formed by taking the dot product of the steering vector 𝐚(u,v) and the array output vector 𝐬(t),

b(u,v;t) = 𝐚(u,v)^H𝐬(t).

The coherent summation of all array element outputs yields an integration gain at the output of the beamformer that increases the signal-to-noise ratio (SNR) by a factor of MN. The beamforming operation is equivalent to a spatial Fourier transform, which for a planar array of M × N homogeneous elements arranged in the xy plane can be written as

b(u,v;t) = E(u, v) ∑_m=0^M-1∑_n=0^N-1 s_mn(t)e^jk(md_xu + nd_yv).

Here E(u,v) is the element pattern, the wavenumber k = 2π/λ (note some definitions omit the 2π factor), λ is the operating wavelength, and s_mn(t) is the baseband signal sample at the mnth array element for time instant t. If the array elements are uniformly spaced on a rectangular grid then the element locations are given by x_m = md_x and y_n = nd_y, where d_x and d_y denote the distance between elements in the x and y directions.

The basis functions of the 2-D Fourier transform in (<ref>) have spatial frequencies given by the wavevector 𝐤 in (<ref>). The spatial frequencies k_x and k_y are shown graphically in Fig. <ref> for a planar array. The spatial frequency components of an impinging plane wave are revealed in the phase progression across the array. For a 14-by-14 array with λ/4 spacing at 40 GHz, Fig. <ref> illustrates the phase (in degrees) at each array element calculated using (<ref>). The signal source is at a distance of 2.2 meters with an azimuth angle of 4.3^∘ and an elevation angle of 63.4^∘. Angles close to array boresight correspond to low spatial frequencies and angles close to 90^∘ yield higher spatial frequencies. In this case, the signal source is at a large elevation angle, so the spatial frequency is high in the vertical orientation.
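As a concrete reference for this notation, a minimal sketch of the beamforming operation b(u,v;t) = 𝐚(u,v)^H𝐬(t) for one complex-baseband snapshot of an M × N planar array follows; the numpy layout and function name are illustrative assumptions.

```python
import numpy as np

def beamform(s, d_x, d_y, lam, u, v):
    """b(u, v) = a(u, v)^H s for a snapshot s of shape (M, N)."""
    M, N = s.shape
    m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    a = np.exp(-1j * 2 * np.pi / lam * (m * d_x * u + n * d_y * v))
    return np.vdot(a.ravel(), s.ravel())  # vdot conjugates its first argument
```

Scanning (u, v) over a grid and recording |b(u, v)| produces the familiar beampattern, with the coherent MN gain appearing at the source direction.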
Fig. <ref> illustrates the reversed situation, where the signal source is at an elevation angle of 1.9^∘ and an azimuth angle of 63.4^∘. Now the large azimuth angle yields a high spatial frequency in the horizontal direction.

§ CONCEPT OF OPERATIONS FOR K-SPACE BEAMFORMING

The transmit signal for the proposed array architecture consists of a uniformly-weighted frequency comb s_t(t) with finite duration T that is the sum of N distinct sinusoids separated in frequency by Δf Hz,

s_t(t) = ∑_n=1^N cos(2π(f_0 + nΔf)t),

where f_0 corresponds to the carrier frequency. Starting from the aperture edge, each array element is tuned to a single, progressively higher frequency of the comb. After summing together all the element outputs, the signal at the output of the array for a single source is equal to

s_r(t) = A∑_n=1^N cos(2π(f_0 + nΔf)t + 2πd_n/λ_n),

where d_n corresponds to the distance between the signal source and the nth array element, λ_n is the wavelength of the nth frequency, and A is the signal amplitude, assumed equal for all the sinusoidal tones. Since each array element is tuned to a different frequency, the elements accumulate phase at different rates over the duration of the received waveform. The beamforming operation sums together the real element outputs directly at the carrier frequency or after a mixing operation removes f_0 such that the comb is centered at 0 Hz.

At each instant in time a different spatial frequency is created across the array, as illustrated in Fig. <ref>. As time increases, the high-frequency array elements accumulate more phase and the spatial frequency created across the array grows higher, as shown in Fig. <ref>. At one precise time instant all the phases across the array align to yield a peak output amplitude corresponding to the signal source.

Consider the case of a linear array with N elements corresponding to N tones in the frequency comb. The time axis from 0 to T corresponds to the spatial frequencies -1 ≤ k_x≤ 1. The spatial frequency k_x can be converted to azimuth angle coordinates according to

tan AZ = k_x/√(1 - k_x^2).

Note the time axis at the output of the beamformer is periodic and peaks will repeat every T seconds. Thus, the maximum unambiguous value of T is equal to 1/Δf seconds.

§ SIMULATED RESULTS

In this section we provide simulated results for a linear array along the horizontal x-axis with 21 elements. Each array element starting from the left edge at x=0 is matched to a single sinusoid in a frequency comb that varies from 19.001 GHz to 19.005 GHz in steps of 0.2 MHz. The spacing between array elements is λ/2 at 19.005 GHz and the duration of the comb waveform is 5 μsec. There is a single signal source of unity amplitude at spatial coordinates (x=-6, y=0, z=6) meters, where z corresponds to the boresight direction and the y-axis is in the vertical direction. The ground truth azimuth angle of the signal source is -45^∘ and its distance from the array origin at x=0 is 8.4853 meters, or 28.3 nsecs. Recall, however, that the time axis at the output of the beamformer will not correspond to distance or delay, but rather to k-space or angular coordinates. Fig. <ref> illustrates the received comb waveform. The initial phase of the kth sinusoid corresponds to the propagation delay between the signal source and the kth array element at the kth comb frequency. Fig. <ref> illustrates the beamformer output versus time after summing the real RF signals across all the array elements.
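The following minimal simulation mirrors this setup: each element receives only its own tone with an initial phase fixed by the RF path length, the element outputs are summed after mixing with a 19 GHz local oscillator, and the peak time is mapped to k_x using the convention above that [0, T] spans -1 ≤ k_x ≤ 1. The propagation speed, the number of time samples, and the orientation of the time-to-k_x mapping are illustrative assumptions; the common propagation delay folds into the peak position modulo T, which is one source of the small angle error discussed below.

```python
import numpy as np

c0 = 3e8                                  # assumed propagation speed, m/s
N, df = 21, 0.2e6
tones = 19.001e9 + df * np.arange(N)      # one comb tone per element
lam = c0 / tones[-1]
x_el = np.arange(N) * lam / 2             # lambda/2 spacing at 19.005 GHz
src = np.array([-6.0, 0.0, 6.0])          # ground-truth azimuth: -45 deg
d_n = np.sqrt((x_el - src[0])**2 + src[1]**2 + src[2]**2)

T = 1 / df                                # unambiguous window (5 us)
t = np.linspace(0.0, T, 200_001)
phase0 = 2 * np.pi * d_n * tones / c0     # 2*pi*d_n/lambda_n per element
b = np.cos(2 * np.pi * (tones - 19e9)[:, None] * t + phase0[:, None]).sum(axis=0)

t_pk = t[np.argmax(b)]                    # instant when the phases align
k_x = 2 * t_pk / T - 1                    # [0, T] <-> [-1, 1] convention
az = np.degrees(np.arctan2(k_x, np.sqrt(1 - k_x**2)))
```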
Fig. <ref> illustrates the beamformer output after mixing the RF signals with a local oscillator (LO) signal at 19 GHz and converting the time axis to azimuth angle using (<ref>). As can be seen, there is a slight error in the estimated angle of the signal source, which will be discussed later. Fig. <ref> illustrates signal amplitudes across the array elements for random time instants. At the precise time t=0.6963 μsec, when the sinusoidal phases yield maximum amplitude simultaneously across all array elements, the signal peak is created in the beamformer output. Fig. <ref> shows the relative phase shift between array elements at different time instants. Note that at t=0.6963 μsec the linear phase taper across the aperture matches closely with the theoretical phase shifts expected for the signal's angle of arrival (AoA). Fig. <ref> illustrates the output of the beamformer for 3 signal sources located at azimuth angles 53.1^∘, 8.5^∘ and -45^∘, at distances of 25, 20.2 and 8.5 meters from the array.

As given by (<ref>), the phase of a plane wave arriving at an array element depends on the distance traveled. This dependence on distance can impart a curvature to the phase front of the propagating wave, especially at close distances, and induces errors in the angle estimates of a k-space beamformer. For example, for a signal source at 1 meter, Fig. <ref> illustrates the phase curvature across an array of 14-by-14 elements spaced λ/2 apart at 19 GHz.

§ CONCLUSION

This paper proposes a k-space beamforming approach for use with an array of Rydberg quantum sensors. By transmitting a comb waveform and assigning a unique frequency to each array element, different spatial frequencies are generated across the aperture. When the element phases align, a strong peak is created in the array output.
Robustness Verification for Knowledge-Based Logic of Risky Driving Scenes

Xia Wang, Vanderbilt University, Nashville, TN, USA, [email protected]; Anda Liang, Vanderbilt University, Nashville, TN, USA, [email protected]; Jonathan Sprinkle, Vanderbilt University, Nashville, TN, USA, [email protected]; Taylor T. Johnson, Vanderbilt University, Nashville, TN, USA, [email protected]

Abstract. Many decision-making scenarios in modern life benefit from the decision support of artificial intelligence algorithms, which focus on a data-driven philosophy and automated programs or systems. However, crucial decision issues related to security, fairness, and privacy should consider more human knowledge and principles to supervise such AI algorithms so that they reach more proper solutions and benefit society more effectively.

In this work, we extract knowledge-based logic that defines risky driving formats learned from public transportation accident datasets, which, to the best of our knowledge, have not been analyzed in detail. More importantly, this knowledge is critical for recognizing traffic hazards and could supervise and improve AI models in safety-critical systems. Then we use automated verification methods to verify the robustness of such logic. More specifically, we gather 72 accident datasets from Data.gov[<https://catalog.data.gov/dataset/?tags=crash>] and organize them by state. Further, we train Decision Tree and XGBoost models on each state's dataset, deriving accident judgment logic. Finally, we deploy robustness verification on these tree-based models under multiple parameter combinations.

Keywords: Formal specification, robustness verification, driving accident dataset

§ INTRODUCTION

As real-world domains become more complex, sophisticated, and dynamic, knowledge-based systems face the challenge of preserving the coherence and soundness of their knowledge bases. Moreover, since current mainstream autonomous driving technology still requires the intervention of human drivers, the interpretability of such driving assistance technologies and algorithms, as well as the use of human-understandable logic to build driving scenario applications, has become an essential requirement. Finally, since the application scenarios of such data-driven and human-understandable logic are closely related to security issues, it is becoming increasingly important for the system knowledge base to be formally verified <cit.>.

Understanding and learning risky driving properties via corresponding traffic accident videos is an intuitive way.
Although it may be possible to train a neural network and directly output the importance of each video frame, such an approach would require a large amount of hand-annotated data <cit.>. Directly collecting car accident videos from dashboard cameras would also be unethical. Additionally, only using deep learning models for video analysis is purely data-driven and therefore lacks comprehensiveness when data availability is limited <cit.>. To address these obstacles, we propose to gather knowledge, or rule logic, for detecting risky driving behaviors from public transportation accident datasets, which contain great value for assisting other tasks such as anomaly detection and accident prevention but currently lack comprehensive analysis. Since the data formats of different states are different, we train decision tree models and decision tree ensembles on each state's dataset separately. Also, we employ a pre-established formal verification method[https://github.com/chenhongge/treeVerification#configuration-file-parameters] <cit.> to measure the robustness of these tree-based models. Such formal verification not only enhances our models but is also necessary. In fact, recent studies have demonstrated that neural network models are vulnerable to adversarial perturbations: a small and human-imperceptible input perturbation can easily change the predicted label <cit.>. This has created serious security threats to many real applications, so it becomes important to formally verify the robustness of machine learning models. Usually, the robustness verification problem can be cast as finding the minimal adversarial perturbation to an input example that can change the predicted class label. In this work, we build tree-based models to gain human-understandable logic to support driving and transportation management tasks. Thus, we target studies focusing on robustness verification of tree-based models. Recent studies have demonstrated that decision tree models are also vulnerable to adversarial perturbations <cit.>. Tree-based models may provide additional insights into situations that result in accidents, which may have benefits for safety and could save lives. However, it is important to understand their robustness in making accurate predictions. In this paper, we describe our approach to constructing tree-based models using publicly available accident datasets to determine the characteristics of risky driving scenarios, and to verifying the robustness of those decision trees. To summarize, our framework offers the following contributions:

* To the best of our knowledge, this work is the first comprehensive collation and analysis of large-scale traffic accident data across the United States.
* We extract human-understandable rules and logic to further support driving and transportation management tasks, and even safety-critical AI tasks related to traffic scenarios.
* We provide suggestions on unified accident data collection, which could serve as guidance for different states to follow in gathering traffic accident recordings in a unified data format.

§ RELATED WORK

Formal verification of risky driving situations has been an active area of research for the past few years. This section summarizes some of the relevant works in this field, organized into two subsections: modeling of driving scenarios and formal verification techniques.
§.§ Modeling of Driving ScenariosThere have been many research efforts focused on the development of realistic models for driving scenarios, which are used as inputs for formal verification tools. For example, prior research has developed stochastic models of drivers' behavior based on Reachability Analysis <cit.>, Convex Markov Chains <cit.> and tools such as the CARLA simulator, for validation of autonomous driving systems <cit.>.We seek to explore different models and simulators with public transportation datasets and apply formal verification to check the logic and/or rule for consistency and comprehensiveness.Our proposed method combines insights from autonomous vehicle studies and formal verification literature. Alawadhi et al.'s review work on autonomous vehicle adoption factors, including safety, liability, and trust, forms a crucial base <cit.>. Many researchers, like Tahir and Alexander, have proposed coverage-based testing for self-driving vehicles. Their paper aims to increase public confidence in self-driving autonomous vehicles by verifying and validating the techniques used in their development <cit.>.Other research has tackled the challenge of ensuring the coherence and soundness of knowledge-based systems, including those used for video processing. For example, Liu et al. proposed a formal verification method for knowledge-based systems using Petri networks to analyze the reachability of certain states <cit.>. Meanwhile, Kumar et al. explored the application of deep learning models for video analysis <cit.>. Although this study does not employ formal verification methods, it provides insights into the coordination and protocols needed to anticipate, predict, and prevent accidents at common locations such as intersections. Other studies have also examined the general aspects of knowledge-based systems, demonstrating the versatility of formal verification methods in this area. §.§ Formal Verification TechniquesMany formal verification techniques have been proposed for the verification of driving scenarios. These techniques range from model checking and theorem proving to more advanced methods such as abstraction and constraint-based analysis. In fact, prior studies have examined a scenario-based approach for formal modeling <cit.> and scenario-based probabilistic collision risk estimator <cit.>.Robustness verification is a process of evaluating the resilience of a system or a model to different types of perturbations or uncertainties. This type of verification is crucial for ensuring the reliability and safety of complex systems, such as autonomous vehicles, aerospace systems, and medical devices. As we mentioned above, there are several works that focus on robustness verification of deep neural networks <cit.>. Björn et al. <cit.> propose an approach to verify the robustness of the reinforcement learning algorithm, which is demonstrated on a Deep Q-Network policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios and a classic control task. Zhouxing et al. <cit.> provides a formal robustness verification method for Transformers that have complex self-attention layers. Also, some works investigate applications in safety-critical domains, such as autonomous vehicles <cit.>. In this work, we utilize the robustness verification method on tree-based models <cit.>[<https://github.com/chenhongge/treeVerification>] to verify the robustness of the accident detection logic gained from government crash datasets. 
§ PROPOSED METHOD
§.§ System Overview
Fig. <ref> depicts an overview of our proposed approach.
§.§ Unified Feature Engineering
The unified feature engineering step is an essential part of the data analysis process, as it ensures that the data is uniform and can be used to generate accurate results. The category values of a common feature vary across states due to different encoding policies or errors in manual recording, so the key data pre-processing step is to standardize and unify the encoding of these features. For example, the category values of collision types differ considerably between states. We refer to the unified manner-of-collision code[<https://masscrashreportmanual.com/crash/manner-of-collision/>], which includes 11 collision categories in total, to unify the category values of Maryland and Arizona, as shown in Table <ref>. The detailed code for all unified feature engineering processes can be found in this repository: <https://github.com/WilliamStar007/decision-tree>. This pre-processing step puts the features on a uniform basis and allows us to proceed with generating decision trees. To facilitate data analysis, categorical variables in the dataset are encoded as numbers for easy manipulation and calculation. Overall, the unified feature engineering step is an important part of the data analysis process, as it helps ensure data consistency, reliability, and accuracy in generating results. Furthermore, the unified feature engineering approach could inform traffic accident data collection, helping different states maintain accident records in a unified format.
§.§ Tree-based Models
Going forward, we develop tree-based models that extract rule logic from decision trees and utilize robustness verification methods on XGBoost models. Our analysis uses a set of independent variables that are commonly found in most states. These independent variables may differ between states and include factors such as weather conditions, lighting conditions, road surface conditions, collision type, causes of accidents, and the number of vehicles involved. Our aim is to classify accidents into two categories: severe accidents that result in casualties (injury or death), and minor accidents that do not. To accomplish our goal, we initially construct several binary decision trees using a selected maximum depth and a minimum number of samples required for a split. The maximum depth is chosen from [3,4,5], and the minimum number of samples is chosen from [2,10,20,50]. Next, we identify the best-performing decision tree for each state. All datasets are split into a training set (20%) and a testing set (80%), and we use grid search to find the tree with the highest F1 score on the test set; a minimal sketch of this selection procedure is given below. The F1 score is the harmonic mean of precision and recall, and its value ranges between 0 and 1. A high F1 score indicates that the model has high precision and recall, while a low F1 score suggests that the model either lacks precision or is unable to identify all positive instances. In the case of ties in the F1 score, we select the tree with the lower depth, since it is more interpretable. These chosen trees are used in all further analysis. We also consider the evaluation metrics of accuracy, precision, and recall rate, defined next.
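As referenced above, the following is a minimal sketch of the per-state tree selection, written with scikit-learn. The unified column names (FEATURES, "severe") are hypothetical placeholders introduced for illustration; the authors' actual implementation is in the repository linked earlier.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Hypothetical unified feature/label names, for illustration only.
FEATURES = ["weather", "lighting", "road_surface", "collision_type", "cause", "num_vehicles"]
LABEL = "severe"  # 1: injury or death, 0: minor accident

def best_tree_for_state(df: pd.DataFrame) -> DecisionTreeClassifier:
    X, y = df[FEATURES], df[LABEL]
    # 20% training / 80% testing split, as described above.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=0)
    best_key, best_clf = None, None
    for depth in [3, 4, 5]:
        for min_split in [2, 10, 20, 50]:
            clf = DecisionTreeClassifier(max_depth=depth,
                                         min_samples_split=min_split,
                                         random_state=0)
            clf.fit(X_tr, y_tr)
            f1 = f1_score(y_te, clf.predict(X_te))
            key = (f1, -depth)  # prefer higher F1; break ties with shallower trees
            if best_key is None or key > best_key:
                best_key, best_clf = key, clf
    return best_clf
```

Breaking ties toward the shallower tree keeps the extracted rule paths short and human-readable.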
Accuracy is the percentage of correctly classified instances out of the total number of instances. Precision is the ratio of true positives (TP) to the total number of instances that the model predicts as positive. Recall rate is defined as the ratio of TP to the sum of TP and false negatives (FN). In Table <ref>, we provide additional information on the analysis for each state.
§.§ Accident Rules
We can directly extract several severe-accident rules from the decision trees described above. For instance, for New York State we have three rules. Here, we denote the classification label by label, with the value mappings: accidents with injury or death: label_yes; accidents without injury or death: label_no. We denote the pedestrian/bicyclist action feature by pba, with the value mappings: crossing, no signal: pba_0; crossing, with signal: pba_1; getting on/off vehicle: pba_2; in the roadway: pba_3; not in the roadway: pba_4; unknown: pba_5. We denote the event descriptor by ed, with the value mappings: collision with bicyclist: ed_0; collision with fixed object: ed_1; collision with animal: ed_2; non-collision: ed_3; collision with motor vehicles: ed_4; collision with pedestrian: ed_5; collision with railroad train: ed_6. We denote the number of vehicles involved by vno. We denote the traffic control device by tcd, with the value mappings: flashing light: tcd_0; none: tcd_1; officer: tcd_2; railroad crossing: tcd_3; school zone: tcd_4; stop sign: tcd_5; no passing zone: tcd_6; traffic signal: tcd_7; unknown: tcd_8; work area: tcd_9. * ((pba = pba_0) ∨ (pba = pba_1) ∨ (pba = pba_2) ∨ (pba = pba_3) ∨ (pba = pba_4)) → (label = label_yes) * ((pba = pba_5) ∧ (ed = ed_0)) → (label = label_yes) * ((pba = pba_5) ∧ ((ed = ed_1) ∨ (ed = ed_2) ∨ (ed = ed_3) ∨ (ed = ed_4) ∨ (ed = ed_5) ∨ (ed = ed_6)) ∧ (vno ≥ 2) ∧ ((tcd = tcd_2) ∨ (tcd = tcd_3) ∨ (tcd = tcd_4) ∨ (tcd = tcd_5) ∨ (tcd = tcd_6) ∨ (tcd = tcd_7) ∨ (tcd = tcd_8) ∨ (tcd = tcd_9))) → (label = label_yes) Thus, Rule 1 and Rule 2 indicate that if specific pedestrian or bicyclist actions are involved in an accident, we may infer that the accident is injurious or even fatal. The likely reason for this logic is the vulnerability and lack of physical protection of pedestrians and bicyclists in moving traffic. Rule 3 indicates that even when no pedestrian or bicyclist is involved, above-average chaos, involving more vehicles (two or more) together with traffic control that is not strongly supervised (not supervised by a police officer), may also imply an injurious or even fatal accident.
§.§ Robustness Verification
In the context of machine learning, robustness verification typically involves testing the performance of a trained model against various perturbations of its input data, such as random noise or deliberate modifications designed to cause the model to make incorrect predictions. The goal of this process is to identify any weaknesses or vulnerabilities in the model's performance and to ensure that it can effectively handle unexpected inputs or situations. Robustness verification is important because machine learning models are often used in high-stakes applications where incorrect predictions can have serious consequences, such as medical diagnosis or autonomous driving. By verifying the robustness of these models, we can increase their reliability and safety and reduce the risk of errors or failures. For decision trees or decision tree ensembles, formal robustness verification involves finding the exact minimal adversarial perturbation or a guaranteed lower bound on it.
Here, we give the definition of the minimal adversarial perturbation in (<ref>). For an input sample x, assume that y_0 = f(x) is the correct label, where f(·) denotes the tree model. If adding a perturbation δ to x can change the prediction for x, then the smallest such δ is the minimal adversarial perturbation, denoted r^*: r^* = min_δ ‖δ‖_∞  s.t.  f(x+δ) ≠ y_0. For a single tree, a given sample x = [x_1,...,x_d] with d dimensions starts from the root node and traverses the tree to reach a final leaf node according to the decision threshold of each internal node. For example, consider decision node i with a left child and a right child: if samples are separated based on feature t_i with threshold value η_i, then x is passed to the left child if x_{t_i} ≤ η_i and to the right child otherwise. The main idea of single-tree verification is to compute a d-dimensional box for each leaf node such that any sample in this box falls into that leaf. Mathematically, node i's box is defined as the Cartesian product B^i = (l_1^i, r_1^i] × … × (l_d^i, r_d^i]. More specifically, if p and q are node i's left and right child nodes, respectively, then their boxes B^p = (l_1^p, r_1^p] × … × (l_d^p, r_d^p] and B^q = (l_1^q, r_1^q] × … × (l_d^q, r_d^q] are obtained by setting (<ref>): (l_t^p, r_t^p] = (l_t^i, r_t^i] if t ≠ t_i, and (l_t^p, r_t^p] = (l_t^i, min{r_t^i, η_i}] if t = t_i; (l_t^q, r_t^q] = (l_t^i, r_t^i] if t ≠ t_i, and (l_t^q, r_t^q] = (max{l_t^i, η_i}, r_t^i] if t = t_i. With the boxes computed for each leaf node, the minimum perturbation required to move x into leaf node i can be written as a vector ϵ(x, B^i) ∈ ℝ^d defined as (<ref>): ϵ(x, B^i)_t = 0 if x_t ∈ (l_t^i, r_t^i]; x_t − r_t^i if x_t > r_t^i; and l_t^i − x_t if x_t ≤ l_t^i. Thus, the minimal adversarial perturbation can be computed as r^* = min_{i: v_i ≠ y_0} ‖ϵ(x, B^i)‖_∞, where v_i denotes the label predicted at leaf i.
§ EVALUATIONS
§.§ Data
When searching for "crash" datasets, 72 related links are displayed on Data.gov. Categorizing the data by state, there is one federal-level dataset and there are 10 state-level datasets, covering Arizona, Louisiana, Iowa, Maryland, Massachusetts, New York, North Carolina, Pennsylvania, Tennessee, and Washington. These datasets are further divided into state-level, city-level, and county-level datasets. To make the data more manageable, we select Arizona, Maryland, New York, and Washington; we chose these states based on the size and format of the data.
§.§ Evaluation of Decision Trees
In this study, we fine-tuned our decision tree models for each state, focusing on optimizing performance metrics such as accuracy, precision, recall, and F1 score. To achieve this, we explored the hyperparameters listed earlier. By closely monitoring the impact of these adjustments on the evaluation metrics, we were able to identify the best-performing model for each state. The evaluation results are shown in Figure <ref>.
§.§ Evaluation of Robustness Verification
In this study, we utilize a state-of-the-art decision tree robustness verification method <cit.> to evaluate the robustness of our decision trees under adversarial attacks. Because the exact verification problem is NP-complete, we do not solve for the exact r^*; instead, we find a lower bound r, which guarantees that no adversarial example exists within radius r. A high-quality lower bound r should be close to r^*. The "average bound" is the lower bound of the minimum adversarial distortion averaged over all test examples; a larger value typically indicates better overall robustness of the model.
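To make the single-tree verification above concrete, the following is a minimal sketch that enumerates the leaf boxes B^i of a fitted scikit-learn tree and evaluates r^* = min_{i: v_i ≠ y_0} ‖ϵ(x, B^i)‖_∞ exactly. It treats all features as continuous and handles a single tree only; it is not the treeVerification tool used in our experiments, which additionally supports ensembles and computes guaranteed lower bounds.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def leaf_boxes(clf: DecisionTreeClassifier, d: int):
    """Enumerate (lo, hi, label) for every leaf, where lo < x <= hi defines the box."""
    t = clf.tree_
    boxes = []
    def recurse(node, lo, hi):
        if t.children_left[node] == -1:  # leaf node
            boxes.append((lo, hi, int(np.argmax(t.value[node]))))
            return
        f, eta = t.feature[node], t.threshold[node]
        lo_l, hi_l = lo.copy(), hi.copy()   # left child: x_f <= eta
        hi_l[f] = min(hi_l[f], eta)
        recurse(t.children_left[node], lo_l, hi_l)
        lo_r, hi_r = lo.copy(), hi.copy()   # right child: x_f > eta
        lo_r[f] = max(lo_r[f], eta)
        recurse(t.children_right[node], lo_r, hi_r)
    recurse(0, np.full(d, -np.inf), np.full(d, np.inf))
    return boxes

def minimal_perturbation(clf, x):
    """Exact r* for a single tree: smallest L-inf move into a differently labeled leaf."""
    y0 = int(clf.predict(x.reshape(1, -1))[0])
    r_star = np.inf
    for lo, hi, label in leaf_boxes(clf, x.size):
        if label == y0:
            continue
        # Per-coordinate perturbation eps(x, B^i): 0 inside the interval,
        # x_t - r_t if x_t > r_t, and l_t - x_t if x_t <= l_t.
        eps = np.where(x > hi, x - hi, np.where(x <= lo, lo - x, 0.0))
        r_star = min(r_star, float(np.max(eps)))
    return r_star
```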
Besides the average bound, the "verified error" is an upper bound on the error under any attack, which can be treated as an index of worst-case performance. By default, the tool evaluates robustness under adversarial attacks for a fixed number (1000) of test points from a given dataset (in LIBSVM format), using an initial epsilon value for the binary search process. We consider five parameters: * eps_init: the first epsilon in the binary search. This epsilon is also used to compute verified errors.* max_search: the maximum number of binary search steps for finding the largest epsilon that the algorithm can verify. Setting max_search to 1 disables binary search and only returns the verified error at a certain epsilon.* max_level: the maximum number of levels of clique search. A larger number produces better-quality bounds, but the verification process becomes much slower.* max_clique: the maximum number of nodes in a clique.* dp: setting dp to 1 makes the algorithm use dynamic programming to sum up nodes on the last level. The default is 0, meaning DP is not used and a simple summation is used instead. For a more in-depth analysis, we run this verification model with different configurations against all our decision trees, exploring the impact of the various parameters on the model's ability to verify robustness. The verification results are shown in Table <ref>. Here, we make two observations. First, since a larger average bound is better and a smaller verified error is better, the tree models of Maryland and Arizona are more robust than the others, partly because these two states have larger datasets and their data quality may be better as well. Second, we compare the results under setting 1 with each of settings 2-5, since only one parameter is changed in each of the latter settings relative to setting 1. We can see that (1) the initial epsilon has almost no influence on the results; (2) binary search is crucial for finding the average bound; and (3) a larger maximum number of clique-search levels, a larger maximum number of nodes in a clique, and the use of dynamic programming are all beneficial for obtaining a better robustness estimate.
§ DISCUSSION AND CONCLUSION
The application of robustness verification to tree-based models is of paramount significance, particularly within transportation systems, because of the indispensable need for accurate and dependable predictions. Our meticulous approach to validation serves as a robust safeguard, ensuring the reliability and safety of these models in critical contexts. In this work, we compile a comprehensive real-world dataset encompassing four states within the United States. This dataset serves as the foundation for training decision tree models dedicated to predicting severe accidents. Through this process, we extract valuable accident detection insights and rule-based logic. Notably, we introduce an approach to constructing tree-based models tailored to the identification of high-risk driving scenarios. Subsequently, we employ robustness verification techniques on these tree ensembles, a pivotal step that gauges both our confidence level and the limits within which the extracted logic is safeguarded against potential failures. In conclusion, we have determined that the tree path rules possess both meaningful and explicable qualities.
Moreover, the implementation of a unified feature engineering process holds the potential to foster a more standardized and uniform paradigm for the collection of traffic accident data in each state. This, in turn, would facilitate the amalgamation of available data nationwide, resulting in a more consistent and comprehensive dataset. Additionally, a larger dataset with enhanced recording quality has the potential to correspondingly elevate the level of robustness. Lastly, it is noteworthy that an increased maximum number of levels in clique search, along with a larger maximum number of nodes in a clique and dynamic programming, can significantly enhance the accuracy of robustness estimation.
§ ACKNOWLEDGMENT
This material is based upon work supported by the National Science Foundation under Grant 2151500.
[liu1996formal] Liu, Nga Kwok. "Formal verification of some potential contradictions in knowledge base using a high level net approach." Applied Intelligence 6 (1996): 325-343.
[kumar2020video] Kumar, T. Senthil. "Video based traffic forecasting using convolution neural network model and transfer learning techniques." Journal of Innovative Image Processing 2.3 (2020): 128-134.
[seshia2022toward] Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. "Toward verified artificial intelligence." Communications of the ACM 65.7 (2022): 46-55.
[chen2019robustness] Chen, Hongge, et al. "Robustness verification of tree-based models." Advances in Neural Information Processing Systems 32 (2019).
[szegedy2013intriguing] Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199 (2013).
[carlini2017towards] Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.
[goodfellow2014explaining] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
[chen2019robust] Chen, Hongge, et al. "Robust decision trees against adversarial examples." International Conference on Machine Learning. PMLR, 2019.
[cheng2018query] Cheng, Minhao, et al. "Query-efficient hard-label black-box attack: An optimization-based approach." arXiv preprint arXiv:1807.04457 (2018).
[kantchelian2016evasion] Kantchelian, Alex, J. Doug Tygar, and Anthony Joseph. "Evasion and hardening of tree ensemble classifiers." International Conference on Machine Learning. PMLR, 2016.
[tran2019safety] Tran, Hoang-Dung, et al. "Safety verification of cyber-physical systems with reinforcement learning control." ACM Transactions on Embedded Computing Systems (TECS) 18.5s (2019): 1-22.
[sadigh2014data] Sadigh, Dorsa, et al. "Data-driven probabilistic modeling and verification of human driver behavior." AAAI Spring Symposium-Technical Report. 2014.
[dosovitskiy2017carla] Dosovitskiy, Alexey, et al. "CARLA: An open urban driving simulator." Conference on Robot Learning. PMLR, 2017.
[alawadhi2020systematic] Alawadhi, Mohamed, et al. "A systematic literature review of the factors influencing the adoption of autonomous driving." International Journal of System Assurance Engineering and Management 11 (2020): 1065-1082.
[tahir2020coverage] Tahir, Zaid, and Rob Alexander. "Coverage based testing for V&V and safety assurance of self-driving autonomous vehicles: A systematic literature review." 2020 IEEE International Conference on Artificial Intelligence Testing (AITest). IEEE, 2020.
[xu2019scenario] Xu, Bingqing, et al.
"A scenario-based approach for formal modelling and verification of safety properties in automated driving." IEEE Access 7 (2019): 140566-140587.ledent2019formal Ledent, Philippe, et al. "Formal validation of probabilistic collision risk estimation for autonomous driving." 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM). IEEE, 2019.li2020prodeep Li, Renjue, et al. "PRODeep: a platform for robustness verification of deep neural networks." Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2020.yang2021enhancing Yang, Pengfei, et al. "Enhancing robustness verification for deep neural networks via symbolic propagation." Formal Aspects of Computing 33.3 (2021): 407-435.lutjens2020certified Lütjens, Björn, Michael Everett, and Jonathan P. How. "Certified adversarial robustness for deep reinforcement learning." conference on Robot Learning. PMLR, 2020.shi2020robustness Shi, Zhouxing, et al. "Robustness verification for transformers." arXiv preprint arXiv:2002.06622 (2020).zhang2022robustness Zhang, Zhaodi, et al. "Robustness verification of swish neural networks embedded in autonomous driving systems." IEEE Transactions on Computational Social Systems (2022).sadigh2019verifying Sadigh, Dorsa, S. Shankar Sastry, and Sanjit A. Seshia. "Verifying robustness of human-aware autonomous cars." IFAC-PapersOnLine 51.34 (2019): 131-138.
http://arxiv.org/abs/2312.16364v1
{ "authors": [ "Xia Wang", "Anda Liang", "Jonathan Sprinkle", "Taylor T. Johnson" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20231227001351", "title": "Robustness Verification for Knowledge-Based Logic of Risky Driving Scenes" }
We investigate a class of Vlasov-type kinetic flocking models featuring nonlinear velocity alignment. Our primary objective is to rigorously derive the hydrodynamic limit leading to the compressible Euler system with nonlinear alignment. This study builds upon the work by Figalli and Kang <cit.>, which addressed the scenario of linear velocity alignment using the relative entropy method. The introduction of nonlinearity gives rise to an additional discrepancy in the alignment term during the limiting process. To effectively handle this discrepancy, we employ the monokinetic ansatz in conjunction with the relative entropy approach. Furthermore, our analysis reveals distinct nonlinear alignment behaviors between the kinetic and hydrodynamic systems, particularly evident in the isothermal regime.
Hydrodynamic limit of a kinetic flocking model with nonlinear velocity alignment
McKenzie Black, Changhui Tan
§ INTRODUCTION
In this paper, we consider the following Vlasov-type kinetic flocking model ∂_t f + v·∇_x f + ∇_v·(F(f) f) = 0, where f = f(t,x,v) with (t,x,v) ∈ ℝ_+ × Ω × ℝ^d. The spatial domain Ω can be either the whole space ℝ^d or the periodic domain 𝕋^d. The alignment force F is defined as F(f)(t,x,v) = ∫_{Ω×ℝ^d} ϕ(x−y) Φ(w−v) f(t,y,w) dy dw. Here, ϕ is the communication protocol, representing the strength of the pairwise alignment interaction. Throughout the paper, we assume that ϕ is radially symmetric, bounded, Lipschitz, and non-increasing along the radial direction. A typical choice is ϕ(x) = (1+|x|)^{−α}, α ≥ 0. The mapping Φ : ℝ^d → ℝ^d describes the type of alignment. One classical choice is the linear mapping Φ(v) = v. The corresponding system (<ref>)-(<ref>) is often referred to as the Vlasov-alignment system. It is a kinetic representation of the Cucker-Smale dynamics <cit.> that models flocking phenomena in interacting particle systems. A generalization of the Cucker-Smale dynamics was introduced in <cit.>: ẋ_i = v_i,  v̇_i = (1/N) ∑_{j=1}^N ϕ(x_i − x_j) Φ(v_j − v_i),  (x_i, v_i) ∈ Ω × ℝ^d. The system features a nonlinear velocity alignment, where the mapping Φ takes the form Φ(v) = |v|^{p−2} v. When p = 2, the mapping Φ is linear, and (<ref>) reduces to the Cucker-Smale dynamics. For p > 2, the nonlinearity leads to different asymptotic flocking behaviors, as explored in various studies <cit.>. The system (<ref>)-(<ref>) was derived in <cit.> as a kinetic representation of (<ref>). The global well-posedness theory was also established in the same work. A macroscopic representation of the system (<ref>)-(<ref>) is the following compressible Euler system with alignment interactions: ∂_t ρ + ∇_x·(ρ u) = 0,  ∂_t(ρ u) + ∇_x·(ρ u ⊗ u) = ρ A[ρ,u], where the alignment force A[ρ,u] is defined as A[ρ,u](t,x) = ∫_Ω ϕ(x−y) Φ(u(t,y) − u(t,x)) ρ(t,y) dy. With the linear mapping (<ref>), the system (<ref>)-(<ref>) is known as the Euler-alignment equations. The system has been extensively investigated in the last decade, see e.g. <cit.>. For more results on the Euler-alignment system, we refer to the recent book by Shvydkoy <cit.>. We are interested in the connection between the kinetic equation (<ref>)-(<ref>) and the macroscopic system (<ref>)-(<ref>). The formal derivation was first established in <cit.> when the mapping Φ is linear (<ref>).
The Euler-alignment equations were derived by taking the zeroth and first moments of f in v and formally applying the mono-kinetic ansatz f(t,x,v) = ρ(t,x) δ_{v = u(t,x)}, where δ denotes the Dirac delta function. The rigorous justification of the hydrodynamic limit is discussed by Figalli and Kang in <cit.>. The starting point of their analysis is the kinetic flocking equation: ∂_t f_ε + v·∇_x f_ε + ∇_v·(F(f_ε) f_ε) = (1/ε) ∇_v·((v − u_ε) f_ε). In addition to the alignment interaction (<ref>), there is another linear relaxation term on the right-hand side of (<ref>). As the parameter ε tends to zero, the relaxation term enforces the mono-kinetic ansatz (<ref>). This relaxation term was introduced in <cit.>, viewed as local alignment. The macroscopic density and momentum associated with f_ε are denoted by ρ_ε and ρ_ε u_ε, respectively. These are defined through the zeroth and first moments of f_ε with respect to the velocity v: ρ_ε(t,x) = ∫_{ℝ^d} f_ε(t,x,v) dv,  (ρ_ε u_ε)(t,x) = ∫_{ℝ^d} v f_ε(t,x,v) dv. Hence, the velocity v is relaxed to the macroscopic velocity u_ε given by u_ε(t,x) := ∫_{ℝ^d} v f_ε(t,x,v) dv / ∫_{ℝ^d} f_ε(t,x,v) dv. From (<ref>) with (<ref>) and (<ref>), the dynamics of ρ_ε and u_ε can be derived, resulting in the following system: ∂_t ρ_ε + ∇_x·(ρ_ε u_ε) = 0,  ∂_t(ρ_ε u_ε) + ∇_x·(ρ_ε u_ε ⊗ u_ε + R_ε) = ρ_ε(x) ∫_Ω ϕ(x−y)(u_ε(y) − u_ε(x)) ρ_ε(y) dy, where R_ε represents the Reynolds stress tensor R_ε(t,x) = ∫_{ℝ^d} (v − u_ε) ⊗ (v − u_ε) f_ε(t,x,v) dv. Formally applying the mono-kinetic ansatz (<ref>) to (<ref>) results in R_ε ≡ 0. Consequently, (<ref>) transforms into the pressureless Euler-alignment equations (<ref>) with (<ref>). The rigorous derivation of the hydrodynamic limit, however, is non-trivial. In <cit.>, a relative entropy method is employed to rigorously establish the limit f_ε(t,x,v) → f(t,x,v) = ρ(t,x) δ_{v = u(t,x)} in an appropriate sense. Here, (ρ, ρu) constitutes the solution to the Euler-alignment equations. In this paper, our primary objective is to generalize the findings on the hydrodynamic limit to the case of nonlinear velocity alignment described by (<ref>) with p > 2. The formal derivation for the hydrodynamic limit involving general choices of p has recently been undertaken by Tadmor in <cit.>. The alignment force (<ref>) with Φ in (<ref>) is referred to as p-alignment. The limiting system (<ref>) has been less thoroughly understood compared to the Euler-alignment equations (when p = 2), primarily due to the introduced nonlinearity. Recent investigations, as reported in <cit.>, have shed light on intriguing asymptotic behaviors stemming from the nonlinear nature of p-alignment. One significant challenge in rigorously justifying this limit arises from the nonlinearity, which introduces an additional term in the momentum equation: ∂_t(ρ_ε u_ε) + ∇_x·(ρ_ε u_ε ⊗ u_ε + R_ε) = ρ_ε(x) ∫_Ω ϕ(x−y) Φ(u_ε(y) − u_ε(x)) ρ_ε(y) dy + q_ε. The discrepancy term q_ε = q_ε(t,x) takes the form q_ε = ∫_{Ω×ℝ^{2d}} ϕ(x−y) (|w−v|^{p−2} − |u_ε(y) − u_ε(x)|^{p−2}) (w−v) f_ε(x,v) f_ε(y,w) dy dv dw. We refer to Section <ref> for a formal derivation of the discrepancy. It is noteworthy that when the mapping Φ is linear (p = 2), the discrepancy does not exist, i.e., q_ε ≡ 0. Conversely, when p > 2, obtaining additional control over the term q_ε becomes imperative. Formally inserting the mono-kinetic ansatz (<ref>) into (<ref>) results in q_ε ≡ 0. Consequently, the expectation is that the discrepancy vanishes as ε → 0. However, achieving a rigorous limit requires delicate control of q_ε through the relative entropy and the linear relaxation. This undertaking will be thoroughly investigated in the course of this paper. We are ready to state our main result on the rigorous derivation of the hydrodynamic limit. Let f_ε and (ρ, u) be the solutions to (<ref>) and (<ref>), respectively, in the time interval [0, T_*], with well-prepared initial data. Then f_ε(t,x,v) ⇀ f(t,x,v) = ρ(t,x) δ_{v = u(t,x)}, as ε → 0.
The complete details of the theorem, including the definitions of solutions to the systems, the interpretation of well-prepared initial data, and the notion of convergence, will be presented later in the main context. Refer to Theorem <ref> for the comprehensive result. It is crucial to underscore that the mono-kinetic ansatz (<ref>) plays a pivotal role in establishing p-alignment in the resulting system (<ref>). Notably, the alignment interaction within the limiting system does not necessarily conform to a p-alignment in general. We illustrate this aspect in the subsequent discussion. One commonly considered equilibrium state is the Gaussian function f(t,x,v) = ρ(t,x) · (2π)^{−d/2} e^{−|v − u(t,x)|²/2}, known as the isothermal ansatz. Plugging this ansatz into (<ref>) would yield R(t,x) = ρ(t,x) 𝕀_d, where 𝕀_d denotes the d-by-d identity matrix. In the case of a linear mapping Φ, the limiting system corresponds to the Euler-alignment equations with isothermal pressure. Specifically, the momentum equation takes the form ∂_t(ρu) + ∇_x·(ρ u ⊗ u) + ∇_x ρ = ρ(x) ∫_Ω ϕ(x−y)(u(y) − u(x)) ρ(y) dy. The rigorous derivation of this type of hydrodynamic limit has been explored in <cit.>, stemming from the following Vlasov-Fokker-Planck equation with alignment: ∂_t f_ε + v·∇_x f_ε + ∇_v·(F(f_ε) f_ε) = (1/ε) ∇_v·((v − u_ε) f_ε) + (1/ε) Δ_v f_ε, where the right-hand side enforces the isothermal ansatz (<ref>) as ε → 0. For a nonlinear mapping Φ in (<ref>) with p > 2, a crucial observation is that the discrepancy q_ε does not tend to zero as ε → 0. Consequently, the alignment interaction in the limiting system is not a p-alignment. We present the isothermal hydrodynamic limit in Section <ref>, leaving the rigorous justification for future investigation. We would also like to highlight a distinct type of communication protocol ϕ, known as singular communication, where ϕ is unbounded at the origin; for instance, ϕ(x) = |x|^{−α} with α > 0. The kinetic equation (<ref>)-(<ref>) with singular communication ϕ and linear mapping Φ has been investigated in <cit.>. A recent paper <cit.> suggests that singular communications enforce the mono-kinetic ansatz (<ref>). A rigorous study of the hydrodynamic limit in this context would be interesting. Some relevant studies have been conducted by Poyato and Soler in <cit.>. The rest of the paper is organized as follows. Section <ref> presents some preliminary results on the kinetic flocking equation (<ref>). Section <ref> consists of a formal derivation of the hydrodynamic limit from (<ref>) to (<ref>), a local well-posedness theory for the limiting system (<ref>), and the complete statement of our main result, Theorem <ref>. The proof of the theorem is furnished in Section <ref>, leveraging the relative entropy method. The key innovation lies in controlling the discrepancy q_ε through the mono-kinetic structure enforced by the linear relaxation. Finally, Section <ref> discusses the hydrodynamic limit with the isothermal ansatz (<ref>). Notably, the limiting system has an alignment force that differs from the p-alignment.
§ THE VLASOV-ALIGNMENT SYSTEM
In this section, we state a collection of preliminary results on the Vlasov-alignment system (<ref>). Recall the dynamics ∂_t f_ε + v·∇_x f_ε + ∇_v·(F(f_ε) f_ε) = (1/ε) ∇_v·((v − u_ε) f_ε),  f_ε(0,x,v) = f_ε^0(x,v), where the alignment force F is defined in (<ref>). We assume non-negative and compactly supported initial data: f_ε^0(x,v) ≥ 0,  diam(supp_x f_ε^0) ≤ X_0 < ∞,  and  diam(supp_v f_ε^0) ≤ V_0 < ∞. For simplicity, we assume unit total mass ∫_{Ω×ℝ^d} f_ε^0(x,v) dx dv = 1. Note that the total mass is preserved in time.
§.§ Local and global well-posedness
The global well-posedness theory for classical solutions to (<ref>) follows from standard arguments for Vlasov-type equations.
It requires Lipschitz continuity in (x,v) of the forcing term F(f_ε). See <cit.> for the case p = 2, and <cit.> for more general discussions. For equation (<ref>) with the linear relaxation term, additional a priori control of ∇_x u_ε is required to ensure Lipschitz continuity of the term (1/ε)(v − u_ε). Let f_ε^0 ∈ (C^1 ∩ W^{1,∞})(Ω×ℝ^d) and satisfy (<ref>). There exists a unique classical solution f_ε ∈ C^1([0,T)×Ω×ℝ^d) to equation (<ref>), provided ‖∇_x u_ε‖_{L^∞([0,T)×Ω)} < +∞. In <cit.>, the authors construct weak solutions to (<ref>) with p = 2 by regularizing u_ε and obtaining uniform controls analogous to (<ref>). We state the following version of their theorem for general p-alignment. Let f_ε^0 ∈ L^∞(Ω×ℝ^d) and satisfy (<ref>). Then there exists a weak solution f_ε ∈ L^∞([0,T)×Ω×ℝ^d) to equation (<ref>) in the sense of distributions, that is, ∫_0^T ∫_{Ω×ℝ^d} f_ε (∂_t φ + v·∇_x φ + F(f_ε)·∇_v φ + (1/ε)(u_ε − v)·∇_v φ) dx dv dt + ∫_{Ω×ℝ^d} f_ε^0 φ(0,·) dx dv = 0, for any φ ∈ C_c^∞([0,T)×Ω×ℝ^d). Shvydkoy <cit.> studied hydrodynamic limits from (<ref>) with a regularized local relaxation term (1/ε) ∇_v·((v − u_ε^δ) f_ε), so that condition (<ref>) holds for any fixed δ > 0. Applying Proposition <ref>, the kinetic equation has a unique classical solution f_ε^δ. The hydrodynamic limit can then be studied by letting ε, δ → 0 appropriately. For (<ref>), there is no uniqueness guarantee for the weak solution. We will show the hydrodynamic limit starting from any weak solution that satisfies (<ref>). We define the kinetic energy (or entropy) ℰ_ε(t) = (1/2) ∫_{Ω×ℝ^d} |v|² f_ε(t,x,v) dx dv. The energy is dissipated by the alignment force, as well as by the local relaxation. Define the kinetic enstrophy and the relaxation dissipation 𝒟_ε(t) = (1/2) ∫_{Ω²×ℝ^{2d}} ϕ(x−y) |w−v|^p f_ε(t,x,v) f_ε(t,y,w) dx dy dv dw,  𝒮_ε(t) = ∫_{Ω×ℝ^d} |v − u_ε|² f_ε(t,x,v) dx dv. We have the following bound on the energy dissipation. For any ε > 0, let f_ε be a weak solution to (<ref>). We have dℰ_ε/dt(t) ≤ −𝒟_ε(t) − (1/ε) 𝒮_ε(t), where ℰ_ε, 𝒟_ε and 𝒮_ε are defined in (<ref>), (<ref>) and (<ref>), respectively. Suppose f_ε is a classical solution to (<ref>). We utilize (<ref>) and get dℰ_ε/dt(t) = (1/2) ∫_{Ω×ℝ^d} |v|² ∂_t f_ε dx dv = −∫_{Ω×ℝ^d} (|v|²/2) v·∇_x f_ε dx dv − ∫_{Ω×ℝ^d} (|v|²/2) ∇_v·(F(f_ε) f_ε) dx dv + (1/ε) ∫_{Ω×ℝ^d} (|v|²/2) ∇_v·((v − u_ε) f_ε) dx dv = ∫_{Ω×ℝ^d} v·F(f_ε) f_ε dx dv − (1/ε) ∫_{Ω×ℝ^d} v·(v − u_ε) f_ε dx dv = ∫_{Ω²×ℝ^{2d}} ϕ(x−y) v·(w−v) |w−v|^{p−2} f_ε(x,v) f_ε(y,w) dx dy dv dw − (1/ε) ∫_{Ω×ℝ^d} (v − u_ε)·(v − u_ε) f_ε dx dv = −𝒟_ε − (1/ε) 𝒮_ε. Here, we have used the identity ∫_{ℝ^d} (v − u_ε) f_ε dv = 0 in the penultimate equality, and symmetrized in (x,v) and (y,w) for the last equality. For weak solutions, we apply the calculation above to a sequence of smooth approximations and pass to the limit to obtain the inequality (<ref>).
§.§ Asymptotic flocking behavior
In this part, we present several properties of the solution f_ε to (<ref>) concerning its support in (x,v). We define the variations of position and velocity as S_X(t) = diam(supp_x f_ε(t)),  S_V(t) = diam(supp_v f_ε(t)). We begin by stating a maximum principle that will be utilized throughout this paper. Suppose f_ε is a weak solution to (<ref>), with initial data f_ε^0 satisfying (<ref>). Then we have S_V(t) ≤ V_0  and  S_X(t) ≤ X_0 + t V_0, for any ε > 0 and t ∈ [0,T). Indeed, a similar argument as in <cit.> yields the following system of inequalities on (S_X, S_V): S_X'(t) ≤ S_V(t),  S_V'(t) ≤ −2^{2−p} ϕ(S_X(t)) S_V(t)^{p−1},  with S_X(0) ≤ X_0,  S_V(0) ≤ V_0. The maximum principle (<ref>) holds due to the rough estimate S_V'(t) ≤ 0 in (<ref>). Refined estimates can lead to asymptotic alignment and flocking behaviors in the system.
For instance, assuming ϕ has a positive lower bound ϕ_min > 0, we obtain S_V'(t) ≤ −2^{2−p} ϕ_min S_V(t)^{p−1},  S_V(0) ≤ V_0, implying velocity alignment with an algebraic decay rate when p > 2, namely S_V(t) ≤ (V_0^{−(p−2)} + 2^{2−p}(p−2) ϕ_min t)^{−1/(p−2)} ≲ t^{−1/(p−2)}. This is notably different from the case of the linear mapping (p = 2), where the decay rate is exponential. A more interesting setup occurs when Ω = ℝ^d and ϕ decays to zero like ϕ(r) ∼ r^{−α}. The system (<ref>) exhibits different asymptotic behaviors for various choices of p and α. Detailed discussions are provided in <cit.>.
§ HYDRODYNAMIC LIMIT
§.§ A formal derivation
We start with a formal derivation of the hydrodynamic limit from the kinetic system (<ref>) to the Euler-alignment system (<ref>). The derivation was first established in <cit.> for the linear alignment case p = 2, and in <cit.> for general nonlinear alignment with p > 2. For the sake of completeness, we present a formal derivation in this paper, under our notations. We start by computing the zeroth and first moments of f_ε. Integrating (<ref>) in v yields the continuity equation ∂_t ρ_ε + ∇_x·(ρ_ε u_ε) = 0. Multiplying (<ref>) by v and integrating in v, we obtain the momentum equation ∂_t(ρ_ε u_ε) + ∇_x·∫_{ℝ^d} v ⊗ v f_ε dv = ∫_{ℝ^d} F(f_ε) f_ε dv. We rewrite the second moment as ∫_{ℝ^d} v ⊗ v f_ε dv = ρ_ε u_ε ⊗ u_ε + R_ε, where R_ε is the Reynolds stress tensor R_ε = ∫_{ℝ^d} (v − u_ε) ⊗ (v − u_ε) f_ε dv. For the alignment term on the right-hand side of (<ref>), if p = 2, it can be represented by the macroscopic quantities (ρ_ε, u_ε). Indeed, we have ∫_{ℝ^d} F(f_ε) f_ε dv = ∫_{Ω×ℝ^d×ℝ^d} ϕ(x−y)(w−v) f_ε(x,v) f_ε(y,w) dy dv dw = ∫_Ω ϕ(x−y)(u_ε(y) − u_ε(x)) ρ_ε(x) ρ_ε(y) dy = ρ_ε(x) A[ρ_ε, u_ε](x). When p > 2, the alignment term depends on higher moments of f_ε. We decompose F(f_ε) into two parts: F(f_ε)(x,v) = ∫_{Ω×ℝ^d} ϕ(x−y) |u_ε(y) − u_ε(x)|^{p−2} (w−v) f_ε(t,y,w) dy dw + ∫_{Ω×ℝ^d} ϕ(x−y) (|w−v|^{p−2} − |u_ε(y) − u_ε(x)|^{p−2}) (w−v) f_ε(t,y,w) dy dw =: F_1(f_ε)(x,v) + F_2(f_ε)(x,v). The first term F_1 is linear in v. Hence, we have ∫_{ℝ^d} F_1(f_ε) f_ε dv = ∫_Ω ϕ(x−y) |u_ε(y) − u_ε(x)|^{p−2} (u_ε(y) − u_ε(x)) ρ_ε(x) ρ_ε(y) dy = ρ_ε(x) ∫_Ω ϕ(x−y) Φ(u_ε(y) − u_ε(x)) ρ_ε(y) dy = ρ_ε(x) A[ρ_ε, u_ε](x). For the remaining term F_2, we denote q_ε(t,x) := ∫_{ℝ^d} F_2(f_ε)(t,x,v) f_ε(t,x,v) dv = ∫_{Ω×ℝ^{2d}} ϕ(x−y) (|w−v|^{p−2} − |u_ε(y) − u_ε(x)|^{p−2}) (w−v) f_ε(x,v) f_ε(y,w) dy dv dw. We summarize the above computation and obtain the following dynamics of (ρ_ε, u_ε): ∂_t ρ_ε + ∇_x·(ρ_ε u_ε) = 0,  ∂_t(ρ_ε u_ε) + ∇_x·(ρ_ε u_ε ⊗ u_ε) + ∇_x·R_ε = ρ_ε A[ρ_ε, u_ε] + q_ε. Now, we take the formal limit ε → 0. The leading-order 𝒪(ε^{−1}) term in (<ref>) is the local relaxation: ∇_v·((v − u_ε) f_ε) = 0. This implies that the limiting profile f is mono-kinetic. More precisely, if ρ_ε → ρ and ρ_ε u_ε → ρu in some appropriate sense, then we have f_ε(t,x,v) → f(t,x,v) = ρ(t,x) δ_{v = u(t,x)}. Moreover, the mono-kinetic structure of f implies that R_ε → 0 and q_ε → 0. Therefore, the limit quantities (ρ, u) solve the Euler-alignment system (<ref>).
§.§ The Euler equations with p-alignment
For the Euler-alignment system with p = 2, local and global well-posedness theories have been well-established for smooth solutions in Sobolev spaces H^s(Ω) × H^{s+1}(Ω), as discussed in, for example, <cit.>. One crucial aspect of these theories is the control of the Lipschitz bound on the velocity, [u(t,·)]_Lip. Subsequently, the propagation of higher Sobolev norms follows from energy estimates. However, for the case of general nonlinear alignment, obtaining smooth solutions is more challenging due to the non-smooth behavior of Φ near the origin. Here, we present a well-posedness theory for solutions in the space (L^1 ∩ L^∞)(Ω) × W^{1,∞}(Ω). Suppose the initial data (ρ^0, u^0) ∈ (L^1 ∩ L^∞)(Ω) × W^{1,∞}(Ω).
Then, there exists a time T such that a solution to the system (<ref>)-(<ref>) exists with (ρ, u) ∈ C([0,T), (L^1 ∩ L^∞)(Ω)) × C([0,T), W^{1,∞}(Ω)). Moreover, the time span of the solution can be extended as long as ‖∇_x u‖_{L^∞([0,T)×Ω)} ≤ M, where M is a finite number. From (<ref>), we obtain the dynamics of the velocity: (∂_t + u·∇_x) u = A[ρ, u]. Applying the gradient to the equation yields (∂_t + u·∇_x) ∇_x u = −(∇_x u)² + ∇_x A[ρ, u]. We estimate the p-alignment as follows: |∇_x A[ρ, u]| ≤ ∫_Ω |∇ϕ(x−y)|·|Φ(u(y) − u(x))|·ρ(y) dy + ∫_Ω ϕ(x−y)·|∇_x Φ(u(y) − u(x))|·ρ(y) dy ≤ [ϕ]_Lip · V_0^{p−1} + ‖ϕ‖_{L^∞} · (p−1) V_0^{p−2} · ‖∇_x u‖_{L^∞}. This leads to the estimate on ∇_x u: d/dt ‖∇_x u(t,·)‖_{L^∞} ≤ ‖∇_x u(t,·)‖_{L^∞}² + ‖ϕ‖_{L^∞} (p−1) V_0^{p−2} ‖∇_x u(t,·)‖_{L^∞} + [ϕ]_Lip V_0^{p−1}. Applying the Cauchy-Lipschitz theorem, there exists a time T > 0 such that ‖∇_x u(t,·)‖_{L^∞} is bounded for any t ∈ [0,T]. Furthermore, (<ref>) holds. Note that ‖u(t,·)‖_{L^∞} ≤ ‖u^0‖_{L^∞} by the maximum principle (argued similarly as in Proposition <ref>). Consequently, we obtain an a priori bound on u(t) in W^{1,∞}(Ω). The well-posedness of ρ, given a Lipschitz velocity field (<ref>), follows from standard arguments. In particular, ‖ρ(t,·)‖_{L^1} is conserved in time due to the conservation of mass, and ‖ρ(t,·)‖_{L^∞} has the a priori bound ‖ρ(t,·)‖_{L^∞} ≤ ‖ρ^0‖_{L^∞} e^{∫_0^t ‖∇_x u(s,·)‖_{L^∞} ds} ≤ ‖ρ^0‖_{L^∞} e^{Mt}, for any t ∈ [0,T]. This completes the proof.
§.§ Statement of the main result
Our main goal is to establish a rigorous derivation of the hydrodynamic limit. We consider well-prepared initial data f_ε^0 satisfying (<ref>), where (X_0, V_0) are independent of ε. Moreover, f_ε^0 is close to the initial data (ρ^0, u^0) of the limiting system (<ref>), in the sense W_1(f_ε^0, f^0) < ε, where f^0 is defined as f^0(x,v) = ρ^0(x) δ_{v = u^0(x)}, and W_1 is the 1-Wasserstein metric. It can be defined through the dual representation W_1(f,g) = sup_{[φ]_Lip ≤ 1} ∫_X φ(x)(f(x) − g(x)) dx, where f, g are arbitrary real-valued functions on X. In our context, X = Ω×ℝ^d. For simplicity, we assume the total mass is normalized to be 1, namely f_ε^0 satisfies (<ref>), and ∫_Ω ρ^0(x) dx = 1. We now state our main result on the hydrodynamic limit. Assume the initial data f_ε^0 and (ρ^0, u^0) satisfy (<ref>), (<ref>) and (<ref>). Let f_ε be a weak solution to (<ref>), and (ρ, u) be a strong solution to (<ref>) up to time T. Denote f(t,x,v) = ρ(t,x) δ_{v = u(t,x)}. Then, we have f_ε(t,x,v) ⇀ ρ(t,x) δ_{v = u(t,x)} in ℳ((0,T)×Ω×ℝ^d), where ℳ((0,T)×Ω×ℝ^d) is the space of nonnegative Radon measures on (0,T)×Ω×ℝ^d. More quantitative estimates for the limit (<ref>) will be presented in (<ref>) and (<ref>).
§ RIGOROUS DERIVATION
In this section, we present the proof of our main theorem regarding the rigorous hydrodynamic limit, as outlined in Theorem <ref>. When the velocity alignment is linear (p = 2), a framework has been established in <cit.>. Our approach extends this framework to accommodate situations where the velocity alignment is nonlinear (p > 2). It is worth noting that we must establish additional controls to account for the discrepancies generated by the nonlinearity, as detailed in Sections <ref> and <ref>.
§.§ Relative entropy method
Our principal approach for rigorously establishing the hydrodynamic limit relies on the relative entropy method. We closely adhere to the framework outlined in <cit.> and focus our efforts on analyzing the following quantity: η_ε(t) = (1/2) ∫_Ω ρ_ε(t,x) |u_ε(t,x) − u(t,x)|² dx. Let us remark on the meaning of η_ε. Let U = (ρ, 𝐦) = (ρ, ρu). A convex entropy on U is defined as η(U) = η(ρ, 𝐦) := |𝐦|²/(2ρ) = ρ|u|²/2.
Then, we may define the relative entropy η(U_ε|U) = η(U_ε) − η(U) − Dη(U)·(U_ε − U) = ρ_ε|u_ε|²/2 − ρ|u|²/2 − (−(|u|²/2)(ρ_ε − ρ) + u·(ρ_ε u_ε − ρu)) = (1/2) ρ_ε |u_ε − u|². Finally, the quantity η_ε defined in (<ref>) is the spatial integration of the relative entropy η(U_ε|U). We investigate the evolution of η_ε through the following calculation: dη_ε/dt = d/dt ∫_Ω (ρ_ε|u_ε|²/2 − ρ_ε u_ε·u + ρ_ε|u|²/2) dx = dE_ε/dt + I + S, where E_ε is the macroscopic energy E_ε = (1/2) ∫_Ω ρ_ε |u_ε|² dx. In particular, for I we have I = ∫_Ω (−∂_t(ρ_ε u_ε)·u − ρ_ε u_ε·∂_t u) dx = ∫_Ω (∇_x·(ρ_ε u_ε ⊗ u_ε + R_ε) − ρ_ε A[ρ_ε, u_ε] − q_ε)·u dx + ∫_Ω ρ_ε u_ε·(u·∇_x u − A[ρ, u]) dx = ∫_Ω ρ_ε u_ε ⊗ (u_ε − u) : ∇_x u dx − ∫_Ω (ρ_ε u·A[ρ_ε, u_ε] + ρ_ε u_ε·A[ρ, u]) dx − ∫_Ω R_ε : ∇_x u dx − ∫_Ω u·q_ε dx =: I_1 + I_2 + I_3 + I_4. Similarly, for S we have S = (1/2) ∫_Ω ∂_t ρ_ε |u|² dx + ∫_Ω ρ_ε u·∂_t u dx = −(1/2) ∫_Ω ∇_x·(ρ_ε u_ε) |u|² dx + ∫_Ω ρ_ε u·(−u·∇_x u + A[ρ, u]) dx = −∫_Ω ρ_ε u ⊗ (u_ε − u) : ∇_x u dx + ∫_Ω ρ_ε u·A[ρ, u] dx =: S_1 + S_2. Next, we estimate all the terms in I and S. We start with two straightforward bounds: |I_1 + S_1| = |∫_Ω ρ_ε (u_ε − u) ⊗ (u_ε − u) : ∇_x u dx| ≤ 2‖∇_x u‖_{L^∞} η_ε, and |I_3| = |∫_Ω R_ε : ∇_x u dx| ≤ ‖∇_x u‖_{L^∞} ∫_{Ω×ℝ^d} f_ε(x,v) |v − u_ε|² dx dv = ‖∇_x u‖_{L^∞} 𝒮_ε, where we recall the definition of 𝒮_ε in (<ref>). Then, we focus on the term J := I_2 + S_2 = ∫_Ω (−ρ_ε u·A[ρ_ε, u_ε] + ρ_ε (u − u_ε)·A[ρ, u]) dx. Start with the first term in (<ref>) and get −∫_Ω ρ_ε u·A[ρ_ε, u_ε] dx = −∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) u(x)·Φ(u_ε(y) − u_ε(x)) dx dy = (1/2) ∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) (u(y) − u(x))·Φ(u_ε(y) − u_ε(x)) dx dy = (1/2) ∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) (u_ε(y) − u_ε(x))·Φ(u_ε(y) − u_ε(x)) dx dy + (1/2) ∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) (u(y) − u(x) − u_ε(y) + u_ε(x))·Φ(u_ε(y) − u_ε(x)) dx dy =: D_ε + J_1. Here, we symmetrized x and y in the second equality (using that Φ is odd), followed by splitting the quantity into two parts. In particular, D_ε is the macroscopic enstrophy D_ε = (1/2) ∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) |u_ε(y) − u_ε(x)|^p dx dy. We will control D_ε later by the kinetic enstrophy. Now we work on the second term in (<ref>). Split the term into two parts: ∫_Ω ρ_ε (u − u_ε)·A[ρ, u] dx = ∫_{Ω²} ρ_ε(x) ρ(y) ϕ(x−y) (u(x) − u_ε(x))·Φ(u(y) − u(x)) dx dy = ∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) (u(x) − u_ε(x))·Φ(u(y) − u(x)) dx dy + ∫_{Ω²} ρ_ε(x) (ρ(y) − ρ_ε(y)) ϕ(x−y) (u(x) − u_ε(x))·Φ(u(y) − u(x)) dx dy =: J_2 + J_3. We further symmetrize x and y in J_2 and obtain J_2 = (1/2) ∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) (u(x) − u(y) − u_ε(x) + u_ε(y))·Φ(u(y) − u(x)) dx dy. Combining J_1 and J_2, we get J_1 + J_2 = −(1/2) ∫_{Ω²} ρ_ε(x) ρ_ε(y) ϕ(x−y) ((u_ε(y) − u_ε(x)) − (u(y) − u(x)))·(Φ(u_ε(y) − u_ε(x)) − Φ(u(y) − u(x))) dx dy. Observe that since Φ is monotone increasing, we have (z_1 − z_2)·(Φ(z_1) − Φ(z_2)) ≥ 0, ∀ z_1, z_2. This yields J_1 + J_2 ≤ 0. For the remaining term J_3 = ∫_Ω ρ_ε(x) (u(x) − u_ε(x))·[∫_Ω (ρ(y) − ρ_ε(y)) ϕ(x−y) Φ(u(y) − u(x)) dy] dx, we obtain the point-wise bound on the inner integral |∫_Ω (ρ(y) − ρ_ε(y)) ϕ(x−y) Φ(u(y) − u(x)) dy| ≤ W_1(ρ, ρ_ε)·[ϕ(x−·) Φ(u(·) − u(x))]_Lip ≤ W_1(ρ, ρ_ε) ([ϕ]_Lip V_0^{p−1} + ‖ϕ‖_{L^∞} (p−1) V_0^{p−2} ‖∇_x u‖_{L^∞}) ≤ C(1 + ‖∇_x u‖_{L^∞}) W_1(ρ, ρ_ε), for any x ∈ Ω. Here, we have used the maximum principle (<ref>). We then apply the Hölder inequality and obtain |J_3| ≤ C(1 + ‖∇_x u‖_{L^∞}) W_1(ρ, ρ_ε)·√(η_ε) ≤ C(1 + ‖∇_x u‖_{L^∞})(η_ε + W_1²(ρ, ρ_ε)). Collecting all the estimates, we conclude with dη_ε/dt ≤ dE_ε/dt + D_ε + ‖∇_x u‖_{L^∞} 𝒮_ε + C(1 + ‖∇_x u‖_{L^∞})(η_ε + W_1²(ρ, ρ_ε)) + I_4.
§.§ The control of the macroscopic energy and enstrophy
In this part, we aim to obtain a bound on dE_ε/dt + D_ε, which appears on the right-hand side of (<ref>). We compare E_ε and D_ε with the kinetic energy and enstrophy, and apply (<ref>) to obtain dE_ε/dt + D_ε ≤ d/dt(E_ε − ℰ_ε) + (D_ε − 𝒟_ε) − (1/ε) 𝒮_ε. Now, let us control the differences between the kinetic and macroscopic energies and enstrophies. The following inequalities hold: E_ε(t) ≤ ℰ_ε(t),  D_ε(t) ≤ 𝒟_ε(t) + |Δ_ε(t)|, where the discrepancy Δ_ε(t) is defined as Δ_ε(t) := (1/2) ∫_{Ω²×ℝ^{2d}} ϕ(x−y) (|w−v|^{p−2} − |u_ε(y) − u_ε(x)|^{p−2}) |w−v|² f_ε(t,x,v) f_ε(t,y,w) dx dy dv dw. The first inequality (<ref>) follows directly from the Cauchy-Schwarz inequality ρ_ε |u_ε|² = |∫_{ℝ^d} v f_ε dv|² / ∫_{ℝ^d} f_ε dv ≤ ∫_{ℝ^d} |v|² f_ε dv. For the second inequality (<ref>), we decompose 𝒟_ε into two parts: 𝒟_ε = (1/2) ∫_{Ω²×ℝ^{2d}} ϕ(x−y) |u_ε(y) − u_ε(x)|^{p−2} |w−v|² f_ε(x,v) f_ε(y,w) dx dy dv dw + Δ_ε. For the first part, we apply (<ref>) and obtain ∫_{ℝ^{2d}} |w−v|² f_ε(x,v) f_ε(y,w) dv dw = ∫_{ℝ^{2d}} (|v|² − 2v·w + |w|²) f_ε(x,v) f_ε(y,w) dv dw = ρ_ε(y) ∫_{ℝ^d} |v|² f_ε(x,v) dv − 2 ρ_ε(x) u_ε(x)·ρ_ε(y) u_ε(y) + ρ_ε(x) ∫_{ℝ^d} |w|² f_ε(y,w) dw ≥ ρ_ε(x) ρ_ε(y) (|u_ε(x)|² − 2 u_ε(x)·u_ε(y) + |u_ε(y)|²) = ρ_ε(x) ρ_ε(y) |u_ε(y) − u_ε(x)|². This leads to the bound (1/2) ∫_{Ω²×ℝ^{2d}} ϕ(x−y) |u_ε(y) − u_ε(x)|^{p−2} |w−v|² f_ε(x,v) f_ε(y,w) dx dy dv dw ≥ D_ε. The inequality (<ref>) follows as a direct consequence. When p = 2, the discrepancy Δ_ε(t) = 0.
However, with the nonlinear alignment p > 2, Δ_ε does not vanish unless f_ε is mono-kinetic. Therefore, we will control the discrepancy through the relaxation dissipation 𝒮_ε. Let a, b ∈ [0,R]. The following inequalities hold: |a^{p−2} − b^{p−2}| ≤ |a − b|^{p−2} for 2 < p ≤ 3, and |a^{p−2} − b^{p−2}| ≤ (p−2) R^{p−3} |a − b| for p > 3. For the first inequality, we assume b ≤ a without loss of generality. If b = 0, the inequality holds trivially. If b > 0, define z = a/b ∈ [1,∞). The inequality is equivalent to g(z) := z^{p−2} − 1 − (z−1)^{p−2} ≤ 0. One can easily verify that g(1) = 0 and g'(z) ≤ 0 for z ≥ 1. This leads to the desired inequality. The second inequality is a direct application of the mean value theorem. Now we apply Lemma <ref> with a = |w−v| and b = |u_ε(y) − u_ε(x)|. Let q = min{p−2, 1} ≤ 1, and c_p = 0 for p = 2; c_p = 1 for 2 < p ≤ 3; c_p = (p−2) V_0^{p−3} for p > 3. We have ||w−v|^{p−2} − |u_ε(y) − u_ε(x)|^{p−2}| ≤ c_p ||w−v| − |u_ε(y) − u_ε(x)||^q ≤ c_p (|v − u_ε(x)| + |w − u_ε(y)|)^q ≤ c_p (|v − u_ε(x)|^q + |w − u_ε(y)|^q). Note that we have used the triangle inequality in the second inequality, and the concavity of the function h(x) = x^q in the last inequality. Utilizing the estimate (<ref>) and the Hölder inequality, we can bound Δ_ε as follows: |Δ_ε| ≤ c_p ∫_{Ω²×ℝ^{2d}} ϕ(x−y) |v − u_ε(x)|^q |w−v|² f_ε(x,v) f_ε(y,w) dx dy dv dw ≤ c_p (∫_{Ω×ℝ^d} |v − u_ε(x)|² f_ε(x,v) dx dv)^{q/2} · (∫_{Ω×ℝ^d} (∫_{Ω×ℝ^d} ϕ(x−y) |w−v|² f_ε(y,w) dy dw)^{2/(2−q)} f_ε(x,v) dx dv)^{(2−q)/2} ≤ c_p · 𝒮_ε^{q/2} · ‖ϕ‖_{L^∞} V_0² ≤ (C_p/2) 𝒮_ε^{q/2}, where we defined the constant C_p = 8 c_p ‖ϕ‖_{L^∞} ‖u^0‖_{L^∞}².
§.§ The control of the discrepancy I_4
When p = 2, the term q_ε vanishes; hence we have I_4 = 0. With the nonlinear alignment p > 2, q_ε does not vanish unless f_ε is mono-kinetic. Therefore, we may treat I_4 similarly to the discrepancy Δ_ε: |I_4| ≤ ‖u‖_{L^∞} ∫_{Ω²×ℝ^{2d}} ϕ(x−y) ||w−v|^{p−2} − |u_ε(y) − u_ε(x)|^{p−2}| |w−v| f_ε(x,v) f_ε(y,w) dx dy dv dw ≤ 2 c_p ‖u‖_{L^∞} ∫_{Ω²×ℝ^{2d}} ϕ(x−y) |v − u_ε(x)|^q |w−v| f_ε(x,v) f_ε(y,w) dx dy dv dw ≤ 2 c_p ‖ϕ‖_{L^∞} V_0 ‖u^0‖_{L^∞} · 𝒮_ε^{q/2} ≤ (C_p/2) 𝒮_ε^{q/2}. Collecting the estimates (<ref>), (<ref>), (<ref>) and (<ref>), the bound (<ref>) becomes dη_ε/dt ≤ −(1/ε) 𝒮_ε + ‖∇_x u‖_{L^∞} 𝒮_ε + C_p 𝒮_ε^{q/2} + d/dt(E_ε − ℰ_ε) + C(1 + ‖∇_x u‖_{L^∞})(η_ε + W_1²(ρ, ρ_ε)). Integrating over [0,t] and applying (<ref>), we end up with η_ε(t) ≤ (η_ε(0) + ℰ_ε(0) − E_ε(0)) + ∫_0^t (−(1/ε) 𝒮_ε(s) + ‖∇_x u(s,·)‖_{L^∞} 𝒮_ε(s) + C_p 𝒮_ε^{q/2}(s)) ds + C ∫_0^t (1 + ‖∇_x u(s,·)‖_{L^∞})(η_ε(s) + W_1²(ρ(s,·), ρ_ε(s,·))) ds.
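As a quick numerical sanity check (not part of the proof), the two elementary inequalities in the lemma above can be tested on random samples; the snippet below takes R = 1, so R^{p−3} = 1 in the second case.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 1.0, 10**6)
b = rng.uniform(0.0, 1.0, 10**6)

for p in [2.3, 2.7, 3.0, 3.5, 5.0]:
    lhs = np.abs(a**(p - 2) - b**(p - 2))
    if p <= 3:
        rhs = np.abs(a - b)**(p - 2)      # |a^{p-2} - b^{p-2}| <= |a - b|^{p-2}
    else:
        rhs = (p - 2) * np.abs(a - b)     # <= (p-2) R^{p-3} |a - b| with R = 1
    assert np.all(lhs <= rhs + 1e-12), p
print("both inequalities hold on all sampled pairs")
```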
Apply the above estimate with a=1/-_(̆t,·)_L^∞>1/2, b=C_pandγ=q/2.We have-1/(t) +_(̆t,·)_L^∞(t)+C_p^q/2(t)≤((q2)^2/2-q+(q2)^q/2-q)(2)^q/2-qC_p^2/2-q≤ C^q/2-q,where the constantCdepends only onp.Applying (<ref>) and (<ref>) to (<ref>), we deduce(t)≤C+C^q/2-qt+C∫_0^t((s)+ W_1^2(ρ(s,·),(s,·))) ds.The constantCdepends on initial data, the parameterp, and the a priori boundM(see (<ref>)).To close the argument, we state the following control onW_1(ρ,)by the relative entropy. There exist a constant C=C(T,M)>0, such thatW_1^2(ρ(t,·),(t,·))≤ C(W_1^2(ρ^0,^0)+∫_0^t(s) ds),for any t∈[0,T].Some versions of Lemma <ref> have been developed in e.g. <cit.> (controlW_2distance by), and <cit.> (controlW_1distance by kinetic relative entropy). We include a proof of the Lemma here for self-consistency. Consider the flow mapsanddefined as_t(t,)=(t,(t,)), (0,)=, and_t(t,)=(̆t,(t,)), (0,)=.The solutionsand ρ of the continuity equations can be viewed as the push-forward of the initial measure(t)=(t)_#^0,andρ(t)=(t)_#ρ^0.Let us define another measureρ̃_(t)=(t)_#^0,and decomposeW_1(ρ(t,·),(t,·))≤ W_1(ρ(t,·),(t,·))+W_1((t,·),(t,·)).For the first part, ρ and ρ̃_ shares the same flow. We obtainW_1(ρ(t,·),(t,·))=sup_[g]_Lip≤1∫_Ω g()(ρ(t,)-(t,) ) d=sup_[g]_Lip≤1∫_Ω g((t,))(ρ^0()-^0() ) d ≤(t,·)_L^∞ W_1(ρ^0,^0)≤ e^MtW_1(ρ^0,^0).For the second part,andshares the same initial data. We haveW_1((t,·),(t,·))=sup_[g]_Lip≤1∫_Ω g()((t,)-(t,) ) d=sup_[g]_Lip≤1∫_Ω(g((t,))-g((t,)))^0() d ≤∫_Ω|(t,)-(t,)|^0() d=:B(t).Clearly, B(t) is continuous, and differentiable for almost every t∈[0,T]. Compute the time derivative of B(t) and getB'(t)≤∫_Ω|(̆t,(t,))-(t,(t,))|^0() d ≤∫_Ω|(̆t,(t,))-(̆t,(t,))|^0() d+∫_Ω|(̆t,(t,))-(t,(t,))|^0() d ≤(̆t,·)_L^∞∫_Ω|(t,)-(t,)|^0() d + ∫_Ω|(̆t,)-(t,)|(t,) d ≤ M B(t)+√((t)).Together with B(0)=0, we apply Grönwall inequality and obtain W_1((t,·),(t,·))≤ B(t)≤∫_0^t e^M(t-s)√((s)) ds≤√(t)e^Mt(∫_0^t(s) ds)^1/2.Finally, we apply (<ref>) and (<ref>) to (<ref>), yieldingW_1^2(ρ(t,·),(t,·))≤ 2(1+t)e^2Mt(W_1^2(ρ^0,^0)+∫_0^t(s) ds).This finishes the proof of (<ref>), with C=C(T,M)=2(1+T)e^2MT. The initial distanceW_1^2(ρ^0,^0)is small due to the assumption (<ref>). Indeed, we haveW_1(ρ^0,^0)=sup_[g]_Lip≤1∫_Ω g()(ρ^0()-^0()) d=sup_[g]_Lip≤1∫_Ω×^d g()(f^0(,)̌-^0(,)̌) d d≤̌W_1(f^0,^0)<. Adding (<ref>) and (<ref>), we arrive at the inequality(t)+W_1^2(ρ(t,·),(t,·)) ≤ C[+^q/2-q+^2+∫_0^t((s)+ W_1^2(ρ(s,·),(s,·))) ds].Applying Grönwall inequality, we end up with(t)+W_1^2(ρ(t,·),(t,·)) ≤ Ce^Ct(^q/2-q++^2) 0,for anyt∈[0,T]. Here, the powerq/2-q∈(0,1/2]sinceq∈(0,1]. §.§ Proof of Theorem <ref>Now we apply the estimate (<ref>) to obtain our main convergence result (<ref>).Letg=g(,)̌be a test function such that[g]_Lip_,≤ 1.In the following calculation, we fix a timetand suppress thet-dependence. Compute∫_Ω×^d g(,)̌ (f(,)̌-(,)̌) d d=̌∫_Ω×^d g(,(̆)) (f(,)̌-(,)̌) d d+∫_Ω×^d (g(,)̌-g(,(̆))) (f(,)̌-(,)̌) d d=̌: K_1+K_2.We estimate term by term. ForK_1, we have|K_1|= |∫_Ω g(,(̆)) (ρ()-()) d|≤ (1+[]̆_Lip)W_1(ρ,).ForK_2, note that∫_Ω×^d (g(,)̌-g(,(̆))) f(,)̌ d d=̌∫_Ω×^d (g(,(̆))-g(,(̆))) ρ() d=0.Therefore,|K_2|=|∫_Ω×^d (g(,)̌-g(,(̆))) (,)̌ d d| ≤∫_Ω×^d|-̌(̆)|(,)̌ d d≤∫_Ω×^d|-̌()|(,)̌ d d+̌∫_Ω|()-(̆)|() d≤^1/2+^1/2.Combine the estimates above and take supreme on all test functionsg. We obtainW_1(f,)≤ (1+M) W_1(ρ,) + ^1/2 +^1/2. From (<ref>), we have the control∫_0^T(t) dt≤ ^0≤ C.Together with (<ref>), we deduce the bound∫_0^T W_1^2(f(t),(t)) dt ≤ C(^q/2-q++^2) 0,where the constantCdepends onp, T, Mand initial data. 
Therefore, we conclude with the convergence W_1(f(t), f_ε(t)) → 0 in L²(0,T). This leads to the convergence result (<ref>). To see this, we consider a test function g = g(t,x,v) in C([0,T], Lip(Ω×ℝ^d)). Thus, computing, ∫_0^T ∫_{Ω×ℝ^d} g(t,x,v)(f(t,x,v) − f_ε(t,x,v)) dx dv dt ≤ ∫_0^T [g(t)]_{Lip_{x,v}} W_1(f(t), f_ε(t)) dt ≤ ‖[g(t)]_{Lip_{x,v}}‖_{L²(0,T)} ‖W_1(f(t), f_ε(t))‖_{L²(0,T)} → 0. Note that Lipschitz functions on a bounded domain are dense in the space of continuous functions. From Proposition <ref>, we know that the measures f(t) and f_ε(t) are compactly supported. We then apply the density argument and obtain convergence for any test function g ∈ C_c([0,T]×Ω×ℝ^d).
§ ISOTHERMAL HYDRODYNAMIC LIMIT
In this section, we discuss another type of hydrodynamic limit, given by the isothermal ansatz (<ref>). The main point we would like to make is that the alignment force Ā[ρ,u] that appears in the limiting system ∂_t(ρu) + ∇_x·(ρ u ⊗ u) + ∇_x p = ρ Ā[ρ,u] does not necessarily share the same mapping Φ as the kinetic equation (<ref>). To illustrate this point, we perform the following formal calculation. Multiplying (<ref>) by v and integrating in v, we obtain the momentum equation (<ref>), with the right-hand side ℱ(t,x) := ∫_{ℝ^d} F(f) f dv = ∫_{Ω×ℝ^{2d}} ϕ(x−y) Φ(w−v) f(t,x,v) f(t,y,w) dy dv dw. We suppress the t-dependence in the following calculation for simplicity. Applying the isothermal ansatz (<ref>), we get ℱ(x) = (2π)^{−d} ∫_{Ω×ℝ^{2d}} ϕ(x−y) Φ(w−v) e^{−(|v − u(x)|² + |w − u(y)|²)/2} ρ(x) ρ(y) dy dv dw = (ρ(x)/(2π)^d) ∫_{Ω×ℝ^{2d}} ϕ(x−y) Φ((w−v) − (u(x) − u(y))) e^{−(|v|² + |w|²)/2} ρ(y) dy dv dw. Substituting 𝐚 = w − v and 𝐛 = w + v yields ℱ(x) = (ρ(x)/(4π)^d) ∫_{Ω×ℝ^{2d}} ϕ(x−y) Φ(𝐚 − (u(x) − u(y))) e^{−(|𝐚|² + |𝐛|²)/4} ρ(y) dy d𝐚 d𝐛 = (ρ(x)/(4π)^{d/2}) ∫_{Ω×ℝ^d} ϕ(x−y) Φ(𝐚 − (u(x) − u(y))) e^{−|𝐚|²/4} ρ(y) dy d𝐚 = ρ(x) ∫_Ω ϕ(x−y) Ψ(u(y) − u(x)) ρ(y) dy, where the mapping Ψ is defined as Ψ(z) = (4π)^{−d/2} ∫_{ℝ^d} Φ(𝐚 + z) e^{−|𝐚|²/4} d𝐚 = (4π)^{−d/2} ∫_{ℝ^d} Φ(𝐚) e^{−|𝐚 − z|²/4} d𝐚. The mapping Ψ has the following property: if Φ is odd, then Ψ is also odd. Indeed, Ψ(−z) = (4π)^{−d/2} ∫_{ℝ^d} Φ(𝐚) e^{−|𝐚 + z|²/4} d𝐚 = (4π)^{−d/2} ∫_{ℝ^d} Φ(−𝐚) e^{−|𝐚 − z|²/4} d𝐚 = −Ψ(z). However, if we take Φ(v) = |v|^{p−2} v as in (<ref>), the mapping Ψ defined in (<ref>) is not the same as Φ, except in the linear case p = 2. Therefore, the hydrodynamic limit of (<ref>) does not have a p-alignment force. The rigorous justification of the isothermal hydrodynamic limit will be left for future investigation.
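To illustrate the last point numerically, the following is a small Monte Carlo sketch in dimension d = 1 (the sample size is an arbitrary choice): it evaluates Ψ(z) for Φ(v) = |v|^{p−2} v and compares it with Φ(z) itself. The two agree at p = 2, since the Gaussian weight has mean zero, and differ for p > 2.

```python
import numpy as np

rng = np.random.default_rng(1)
# e^{-a^2/4} is proportional to a normal density with variance 2.
a = rng.normal(0.0, np.sqrt(2.0), 10**7)

def psi(z, p):
    # Psi(z) = (4*pi)^{-1/2} * integral of Phi(a + z) e^{-a^2/4} da, as an expectation.
    return float(np.mean(np.abs(a + z)**(p - 2) * (a + z)))

z = 1.5
for p in [2.0, 3.0, 4.0]:
    print(p, psi(z, p), abs(z)**(p - 2) * z)
# p = 2 prints matching values (up to sampling error); p > 2 does not,
# so the limiting alignment force is not a p-alignment.
```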
http://arxiv.org/abs/2312.16641v1
{ "authors": [ "McKenzie Black", "Changhui Tan" ], "categories": [ "math.AP", "35Q35, 35B25, 92D25" ], "primary_category": "math.AP", "published": "20231227170707", "title": "Hydrodynamic limit of a kinetic flocking model with nonlinear velocity alignment" }
http://arxiv.org/abs/2312.16414v2
{ "authors": [ "Bao Nguyen", "Binh Nguyen", "Viet Anh Nguyen" ], "categories": [ "cs.CV", "cs.LG" ], "primary_category": "cs.CV", "published": "20231227052020", "title": "Bellman Optimal Step-size Straightening of Flow-Matching Models" }
The emergence of the fifth-generation (5G) New Radio (NR) technology has provided unprecedented opportunities for vehicle-to-everything (V2X) networks, enabling enhanced quality of services. However, high-mobility V2X networks require frequent handovers and acquiring accurate channel state information (CSI) necessitates the utilization of pilot signals, leading to increased overhead and reduced communication throughput. To address this challenge, integrated sensing and communications (ISAC) techniques have been employed at the base station (gNB) within vehicle-to-infrastructure (V2I) networks, aiming to minimize overhead and improve spectral efficiency. In this study, we propose novel frame structures that incorporate ISAC signals for three crucial stages in the NR-V2X system: initial access, connected mode, and beam failure and recovery. These new frame structures employ 75% fewer pilots and reduce reference signals by 43.24%, capitalizing on the sensing capability of ISAC signals. Through extensive link-level simulations, we demonstrate that our proposed approach enables faster beam establishment during initial access, higher throughput and more precise beam tracking in connected mode with reduced overhead, and expedited detection and recovery from beam failures. Furthermore, the numerical results obtained from our simulations showcase enhanced spectrum efficiency, improved communication performance and minimal overhead, validating the effectiveness of the proposed ISAC-based techniques in NR V2I networks. ISAC, V2I, 5G NR, frame structure, overhead analysis Frame Structure and Protocol Design for Sensing-Assisted NR-V2X Communications Yunxin Li, Graduate Student Member, IEEE, Fan Liu, Senior Member, IEEE, Zhen Du, Member, IEEE, Weijie Yuan, Member, IEEE, Qingjiang Shi, Senior Member, IEEE, and Christos Masouros, Senior Member, IEEEA part of this article was presented at the IEEE International Conference on Communications (ICC), Italy, May 2023 <cit.>. Yunxin Li, Fan Liu and Weijie Yuan are with the Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China. Fan Liu is also with Peng Cheng Laboratory, Shenzhen 518066, China.E-mail: [email protected], {liuf6, yuanwj}@sustech.edu.cn. Zhen Du is with the School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China, and is also with the Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China. E-mail: [email protected]. Qingjiang Shi is with the School of Software Engineering, Tongji University, Shanghai 201804, China, and also with the Shenzhen Research Institute of Big Data, Shenzhen 518172, China. E-mail: [email protected]. Christos Masouros is with the Department of Electronic and Electrical Engineering, University College London, London WC1E 7JE, U.K. E-mail: [email protected].(Corresponding author: Fan Liu.) 
§ INTRODUCTION
V2X networks have emerged as a transformative technology with the potential to revolutionize transportation systems by enabling vehicles to establish communications, not only amongst themselves but also with surrounding infrastructure, pedestrians and other road users. At the core of this paradigm shift lies a robust communication framework that facilitates seamless and efficient data exchange, thereby enhancing road safety, optimizing traffic flow and improving overall mobility experiences. To enable effective communication modes, V2X networks rely on a combination of wireless technologies. Two prominent technologies deployed in V2X networks are Dedicated Short-Range Communications (DSRC) <cit.> <cit.> and Cellular V2X (C-V2X) <cit.>. However, despite their advantages, both technologies come with inherent limitations. DSRC, for instance, is restricted to a dedicated frequency band (5.9 GHz), which poses scalability challenges due to limited bandwidth. Additionally, sharing this frequency band with other wireless services introduces potential interference and coexistence issues. Although the 3rd Generation Partnership Project (3GPP) has published specifications on NR sidelink, focusing on improving the latency, capacity and flexibility of V2X since Release 16 <cit.><cit.>, reliable and accurate NR-V2X sidelink positioning still requires hybrid approaches such as global navigation satellite systems (GNSS) to achieve more robust and precise location estimation, which suffer from a limited refresh rate. Moreover, the high mobility inherent in V2X networks presents significant challenges in channel training and beamforming, necessitating frequent coordination and feedback between the gNB and the vehicles. This poses critical issues in terms of signaling overhead and may lead to a degradation in communication throughput.
Among the emerging technologies, millimeter-wave (mmWave) and massive multiple-input-multiple-output (mMIMO) present promising opportunities to overcome the limitations of traditional V2X communication with significantly improved performance <cit.><cit.>. The increased bandwidth of mmWave systems operating above 30 GHz enables the transmission of larger amounts of data as well as higher resolution for sensing. In conjunction with mmWave, an mMIMO array equipped with a large number of antennas can form highly directional beams and create spatially separated communication channels, enabling simultaneous data transmission to multiple V2X nodes. More relevant to this work, the concept of integrated sensing and communications (ISAC) has garnered attention as a promising solution that fully reaps the benefits of mmWave and mMIMO technologies in NR V2X networks <cit.>. Recent studies have reported significant advantages of ISAC signaling in reducing channel estimation overhead compared to conventional mmWave beam training and tracking techniques used solely for communication purposes <cit.><cit.><cit.>. This is primarily due to the removal of the necessity for dedicated downlink pilots in ISAC signaling, as well as the elimination of uplink feedback and the resulting transmission and quantization errors, through direct processing of the echoes reflected from the vehicles. These findings highlight the promising prospects of ISAC signaling in optimizing the overall performance of mmWave-based V2X systems. Recent research efforts in ISAC V2X networks have focused on designing advanced techniques and algorithms for beam training and tracking to enhance the performance of V2X systems. These efforts aim to leverage the sensing capabilities for serving communication links, thereby improving the reliability and efficiency of vehicular networks. Predictive beamforming approaches based on extended Kalman filtering (EKF) <cit.>, Bayesian message passing <cit.> and deep learning <cit.> were proposed to increase the precision of beam alignment while reducing the beam training overhead associated with pilots. To address the challenges posed by extended vehicular targets, a dynamic predictive beamforming technique, namely ISAC-AB, was proposed in <cit.>, which adaptively adjusts the beamwidth based on the location of the targets. Furthermore, a roadway-geometry-aware ISAC beam tracking method was designed to maintain connectivity on complex trajectories <cit.>. These ISAC-based beam tracking and beamforming algorithms offer multiple advantages compared to conventional methods based on communication-only specifications that require pilot signals and feedback <cit.><cit.>. During tracking, the ISAC system employs echo signals to analyze the parameters of interest pertaining to the vehicles, as opposed to relying on periodic channel state information (CSI) feedback from the vehicles. This approach ensures uninterrupted downlink data transmission and enables precise measurement of the targets, consequently mitigating the overhead associated with pilot signals and feedback. Additionally, the utilization of all downlink frames for both sensing and communications enhances the matched-filtering gain, thereby improving the accuracy of sensing and of the resulting beamforming prediction, and ensuring reliable communication quality. The sensing capability of the ISAC system extends beyond the angular domain, encompassing localization of the vehicles in terms of range and speed.
This comprehensive sensing ability enables more precise localization of the vehicles. Despite the considerable body of research on ISAC-enabled V2X networks, the application of these approaches within the framework of 5G NR presents challenges due to the complex frame structures and transmission protocols inherent in 5G systems. Previous studies often assume simplified frame structures for ease of analysis, rendering their direct applicability to the intricate 5G NR framework uncertain. More importantly, while theoretical studies demonstrate the potential for reducing overhead by utilizing echo signals and leveraging the inherent sensing capabilities of ISAC, the practical implementation aspects and the resulting overhead reduction remain less explored. Consequently, further investigation is necessary to quantify the actual amount of overhead reduction achievable in practical systems when implementing ISAC techniques within the NR-V2X framework. To address the aforementioned challenges, we propose innovative frame structures and transmission protocols for 5G NR that enable sensing-assisted beam management in V2I networks, aiming to minimize the overhead caused by pilot and reference signals. The feasibility of the proposed approach is evaluated through link-level simulations, employing real channel conditions <cit.> between building groups and vehicles to showcase its enhanced performance. The main contributions of this paper are summarized as follows:

* ISAC-based frame structures for initial access and connected mode. Building upon the existing frame structures of 5G NR, we investigate the pilot and reference signals, identifying opportunities to eliminate redundant components with the aid of ISAC technologies and thereby reduce overhead. The proposed ISAC-based frame structures maintain the fundamental functionalities of the system while improving communication performance through optimized resource allocation.

* Kinematic-parameter-based fast beam failure detection and recovery. In order to mitigate the latency associated with conventional beam failure detection and recovery mechanisms, we propose an algorithm that detects and recovers from beam failures based on abrupt changes in the kinematic parameters of the vehicle targets. By continuously monitoring beam failure instead of relying solely on periodic reference signals, the proposed algorithm significantly reduces the time required for beam failure detection and recovery, addressing latency concerns.

The remainder of this paper is organized as follows. Section 2 introduces the system models and algorithms used in sensing-assisted NR-V2I communications; Section 3 compares the frame structures of conventional NR and ISAC-based NR in different case studies; Section 4 provides numerical results from link-level simulations in different scenarios; finally, Section 5 concludes the paper.

Notations: Unless otherwise specified, matrices, vectors and scalars are denoted by bold uppercase letters (i.e., 𝐀), bold lowercase letters (i.e., 𝐚) and normal font (i.e., N), respectively. ℜ(·) and ℑ(·) are used to denote the real and imaginary parts of a complex number. (·)^*, (·)^T, (·)^H and (·)^-1 represent the conjugate, transpose, Hermitian and inverse operators, respectively. ⊗ and ⊙ denote the tensor product and the Hadamard product.
Moreover, 𝒩(μ, σ^2) and 𝒞𝒩(μ, 𝐑) denote the Gaussian distribution with mean μ and variance σ^2, and the complex Gaussian distribution with mean μ and covariance matrix 𝐑.

§ SENSING-ASSISTED COMMUNICATIONS IN NR-V2I NETWORKS

In this paper, we investigate a V2I network operating within the framework of the 5G NR protocol, specifically in the mmWave frequency band, which adopts the OFDM waveform for data transmission. We assume that the gNB operates in full-duplex (FD) mode and is equipped with an mMIMO uniform planar array (UPA) which has N_t transmit antennas and N_r receive antennas. The vehicle in the network is also equipped with a MIMO UPA and drives down the road while maintaining continuous communication with the gNB. The communication link between the gNB and the vehicle consists of one LoS path and K-1 NLoS paths, as shown in Fig. <ref>. The maximum time duration of interest is denoted as T_max and can be discretized into multiple small time slots of duration Δ T, each of which is shorter than the coherence time of the communication channel. It is then reasonable to assume that all relevant parameters remain constant within each time slot. All the parameters are defined in the time window t ∈[0, T_max]. Without loss of generality, the kinematic parameters of the vehicle at the nth time slot are denoted as θ_n=[θ_n, ϕ_n]^T, d_n and v_n, which represent the azimuth and elevation angles, range and speed of the vehicle, respectively.

§.§ Radar Signal Model

Let us denote the transmitted OFDM signal at the nth time slot, with M subcarriers in the frequency domain and L symbols in the time domain, as s_n(t)=∑_l=0^L-1∑_m=0^M-1 s_m,l e^j 2 π m Δ f t rect((t-lT_s)/T_s) where Δ f denotes the subcarrier spacing of the OFDM signal and s_m,l is the transmitted data carried by the mth subcarrier at the lth symbol. T_s=T_cp+T denotes the time duration of an OFDM symbol, which is the sum of the time duration of the cyclic prefix, T_cp, and the elementary symbol duration T. The reflected echoes from the vehicle and the K-1 scatterers received at the gNB at the nth time slot can be formulated as 𝐫_n(t) = ζ√(p)∑_k=1^Kβ_k, ne^j 2 πμ_k, n t𝐛(θ_k, n) 𝐚^T(θ_k, n)𝐟_ns_n(t-τ_k,n) +𝐫_self(t) +𝐳_r(t) where ζ = √(N_tN_r) denotes the array gain factor, with N_t and N_r being the number of transmit and receive antennas respectively, p denotes the transmitted signal power, and β_k, n and μ_k, n denote the reflection coefficient and the Doppler frequency of the kth scatterer, respectively. The reflection coefficient can be expressed as β_k, n=ϵ_k, n(2d_k, n)^-2 given the complex radar cross-section (RCS) ϵ_k, n and the relative distance d_k, n. The Doppler frequency μ_k, n=2v_k,nf_cc^-1 and the time delay τ_k,n=2d_k, nc^-1 are determined by the radial velocity v_k,n and the distance d_k,n of the kth scatterer, respectively. 𝐫_self(t) denotes the self-interference incurred by the FD mode. Finally, 𝐳_r(t) denotes complex additive white Gaussian noise with zero mean. We assume that the gNB's UPA has an inter-element spacing of half a wavelength.
Moreover, 𝐚(θ_k, n) and 𝐛(θ_k, n) in (<ref>) are the transmit and receive steering vectors of the gNB's UPA respectively, which take the forms 𝐚(θ_k, n)=𝐚(θ, ϕ)=𝐯_az(θ, ϕ) ⊗𝐯_el(ϕ), 𝐛(θ_k, n)=𝐛(θ, ϕ)=𝐮_az(θ, ϕ) ⊗𝐮_el(ϕ) where θ and ϕ are the azimuth and elevation angles, and 𝐯_az(θ, ϕ), 𝐯_el(ϕ) and 𝐮_az(θ, ϕ), 𝐮_el(ϕ) are the transmit and receive steering vectors in the horizontal and vertical directions, respectively: 𝐯_az(θ, ϕ)=√(1/N_t,x)[1, e^j πsinθcosϕ, ⋯, e^j π(N_t,x-1) sinθcosϕ]^T, 𝐯_el(ϕ)=√(1/N_t,y)[1, e^j πsinϕ, ⋯, e^j π(N_t,y-1) sinϕ]^T, 𝐮_az(θ, ϕ)=√(1/N_r,x)[1, e^j πsinθcosϕ, ⋯, e^j π(N_r,x-1) sinθcosϕ]^T, 𝐮_el(ϕ)=√(1/N_r,y)[1, e^j πsinϕ, ⋯, e^j π(N_r,y-1) sinϕ]^T, where N_t,x, N_t,y and N_r,x, N_r,y denote the number of transmit and receive antennas in each row and column of the UPA, respectively. The beamforming vector at the nth slot, 𝐟_n, is designed based on the predicted angle θ̂_n | n-1 from the (n-1)th slot as 𝐟_n=𝐚(θ̂_n | n-1).

§.§ Radar Measurement Model

By sampling at each OFDM symbol and performing a block-wise Fourier transform, the signal can be discretized and represented in the frequency domain. Then, at the nth time slot, the received discrete signal at the ith antenna and the lth symbol can be expressed as 𝐫_i,l = ζ√(p)∑_k=1^K β_k [𝐛(θ_k,n)]_i 𝐚^T(θ_k,n) 𝐟_n 𝐃(μ_k,n)( 𝐬_l⊙η(τ_k,n)[ω^*(μ_k,n)]_l) + 𝐳_i,l where i=0,…,N_r-1, 𝐫_i,l=[r_i,l[0],⋯,r_i,l[M-1]]^T, 𝐬_l=[s_0,l,⋯,s_M-1,l]^T, and η(τ_k,n) = [1, e^-j 2 πΔ f τ_k,n, ⋯, e^-j 2 πΔ f (M-1) τ_k,n]^T, ω(μ_k,n) = [1, e^-j 2 πμ_k,n T_s, ⋯, e^-j 2 πμ_k,n(L-1) T_s]^T, 𝐃(μ_k,n) = diag(1, e^j 2 πμ_k,nT_s/M, ⋯, e^j 2 πμ_k,n(M-1)T_s/M). 𝐳_i,l is the additive Gaussian noise sample. The diagonal matrix 𝐃(μ_k,n) describes the inter-carrier interference (ICI) in the fast-time domain. We assume that the OFDM symbol duration T_s is significantly smaller than the coherence time, which is inversely proportional to the Doppler shift, leading to μ_k,n T_s≪ 1. The self-interference term is neglected for simplicity, assuming that any observed Doppler shift is solely due to the motion of the target and not influenced by the gNB's own transmitted signal. Thus, the ICI matrix is close to the identity matrix. By aggregating L symbols in each time slot, the radar signal received at the ith antenna can be written in the compact matrix form 𝐑_i = ∑_k=1^K α_k( 𝐒⊙η(τ_k,n) ω^H(μ_k,n) ) + 𝐙_i where α_k=ζ√(p)β_k [𝐛(θ_k,n)]_i 𝐚^T(θ_k,n) 𝐟_n, 𝐑_i=[𝐫_i,0,⋯,𝐫_i,L-1]∈ℂ^M × L and 𝐒=[𝐬_0,⋯,𝐬_L-1]∈ℂ^M × L. 𝐙_i∈ℂ^M × L denotes the additive Gaussian noise matrix with vec(𝐙_i)∼𝒞𝒩(0_ML, σ^2𝐈_ML). The channel information can be extracted independently from the payload data by element-wise division of the received signal by the transmitted signal <cit.>. The post-processed signal matrix at the ith antenna can be written as 𝐑̃_i = ∑_k=1^Kα_kη(τ_k,n)ω^H(μ_k,n) + 𝐙̃_i= 𝐗_i+𝐙̃_i where vec(𝐙̃_i)∼𝒞𝒩(0_ML, σ̃^2𝐈_ML) with σ̃^2 = (σ^2/ML)Tr( 𝐒̃^-1𝐒̃^-H) and 𝐒̃= diag(vec(𝐒)). The channel information in (<ref>) consists of the time delay τ_k,n and the Doppler frequency μ_k, n, which can be effectively leveraged for the estimation of the key target parameters, including range and velocity. Thus, to improve the precision and reduce the complexity of the estimation, we propose a two-step algorithm to extract the range and velocity of the target. First, by applying a 2D-DFT to 𝐑̃_i, we may get a rough estimate of the range and velocity.
To be specific, by employing an IFFT in the fast-time domain and an FFT in the slow-time domain, the scatterers and the target are detected at their corresponding indices (m̂_k,l̂_k). The resulting distance and radial velocity are given as d̂_k=m̂_k c/(2 M Δ f), v̂_k=l̂_k c/(2 f_c L T_s) where f_c and c denote the carrier frequency and the speed of light, respectively. The scatterers are assumed to originate from static targets, e.g., buildings. Consequently, the parameters of the moving vehicles can be extracted from the detected peaks by simply eliminating the peaks with zero relative velocity. Subsequently, the MUSIC algorithm <cit.> is employed to achieve super-resolution estimation of the range and velocity of the target. Specifically, we identify (m,l) as the peak index of the target obtained through the 2D-DFT. To mitigate computational complexity and reduce time cost, a much narrower search interval can be used in the MUSIC algorithm. In contrast to the conventional grid-based techniques commonly employed in MUSIC, we leverage the golden-section search <cit.> to expedite convergence and enhance estimation accuracy. The radian frequencies of the range and velocity are defined as w_d=2 πΔ f τ and w_v=2 π T_sμ, respectively <cit.>. Accordingly, their steering vectors are expressed as a_d = [1, e^-j w_d, ⋯, e^-j (M-1) w_d]^T, a_v = [1, e^j w_v, ⋯, e^j (L-1) w_v]^T. Following the steps of the golden-section search, the initial search points of τ and μ are [τ_a, τ_a+(1-χ)(τ_b-τ_a), τ_a+χ(τ_b-τ_a), τ_b] and [μ_a, μ_a+(1-χ)(μ_b-μ_a), μ_a+χ(μ_b-μ_a), μ_b] respectively, where χ=(√(5)-1)/2, τ_a=(m-1)/(MΔ f), τ_b=(m+1)/(MΔ f), μ_a=(l-1)/(LT_s), μ_b=(l+1)/(LT_s). To estimate the range, the MUSIC algorithm can be applied to the processed received signal at the lth symbol: 𝐑_l=[𝐫_0, ⋯,𝐫_N_r-1] ∈ℂ^M × N_r. The covariance matrix can be written as Σ_τ=1/L∑_l=1^L𝐑_l𝐑_l^H. After eigenvalue decomposition, it can be further expressed as Σ_τ=𝐔_s Λ_s 𝐔_s^H+𝐔_n Λ_n 𝐔_n^H where the diagonal matrices Λ_s and Λ_n contain K and M-K eigenvalues, and the signal and noise subspaces 𝐔_s and 𝐔_n contain K and M-K eigenvectors, respectively. The MUSIC spectrum is given by P_MUSIC(τ)=1/(a_d^H(τ) 𝐔_n 𝐔_n^H a_d(τ)). By narrowing the search interval in each iteration, the peak is taken to be the midpoint of the interval once the interval is smaller than a certain threshold. The estimation of the velocity follows the same procedure, but with a different covariance matrix, Σ_μ=1/M∑_m=1^M𝐑_m𝐑_m^H, and MUSIC spectrum P_MUSIC(μ), where 𝐑_m denotes the processed received signal at the mth subcarrier, 𝐑_m=[𝐫_0, ⋯,𝐫_N_r-1] ∈ℂ^L × N_r. The detailed steps of the range and velocity estimation are summarized in Algorithm <ref>. The resulting range and radial velocity of the target can be expressed as d=τ c/2, v=μ c/(2 f_c). Furthermore, 2D-MUSIC can be employed to estimate the DOA of the target with the covariance matrix Σ_θ=1/L∑_l=1^L𝐑_l^H𝐑_l. By traversing all possible directions of the receive steering vector 𝐛(θ) in (<ref>), the peak of the MUSIC spectrum P_MUSIC(θ) can be detected, and thus the angle of the target can be estimated. Although the reflection coefficient is not measured directly, it can be calculated based on the measurement of d_k,n.
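To make the coarse stage of this two-step estimator concrete, below is a minimal NumPy sketch of the 2D-DFT step for a single target. All parameter values and names are illustrative assumptions of ours (not the paper's simulation settings), and the MUSIC refinement with golden-section search is omitted.

```python
import numpy as np

# Illustrative OFDM/radar parameters (assumptions, not the paper's values)
M, L = 256, 64           # subcarriers, OFDM symbols per slot
df, fc = 120e3, 28e9     # subcarrier spacing [Hz], carrier frequency [Hz]
Ts = 1.07 / df           # OFDM symbol duration incl. cyclic prefix [s]
c = 3e8

# Synthesize the post-processed channel matrix R = eta(tau) w(mu)^H + noise
d_true, v_true = 45.0, 20.0                    # range [m], radial speed [m/s]
tau, mu = 2 * d_true / c, 2 * v_true * fc / c  # round-trip delay, Doppler
eta = np.exp(-2j * np.pi * df * np.arange(M) * tau)   # delay signature
w = np.exp(-2j * np.pi * mu * np.arange(L) * Ts)      # Doppler signature
R = np.outer(eta, w.conj())
R += 0.1 * (np.random.randn(M, L) + 1j * np.random.randn(M, L)) / np.sqrt(2)

# Coarse step: IFFT over subcarriers (delay axis), FFT over symbols (Doppler axis)
Y = np.fft.fft(np.fft.ifft(R, axis=0), axis=1)
m_hat, l_hat = np.unravel_index(np.argmax(np.abs(Y)), Y.shape)

# Map the peak indices to range and velocity via the closed-form expressions above
d_hat = m_hat * c / (2 * M * df)
v_hat = l_hat * c / (2 * fc * L * Ts)
print(f"coarse estimates: d = {d_hat:.1f} m, v = {v_hat:.1f} m/s")
```

The grid resolution of this coarse estimate is c/(2MΔf) in range and c/(2f_cLT_s) in velocity, which is precisely what motivates the super-resolution refinement step described above.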
§.§ Communication Signal Model

At the nth time slot, the signal received at the vehicle side from the gNB can be formulated as c_n(t)=ζ√(p)∑_k=1^Kα_k, n𝐯_n^T 𝐮(θ_k, n) 𝐚^T(θ_k, n) 𝐟_ns_n(t) +z_c(t) where ζ = √(N_tM_r) denotes the array gain factor, with M_r being the number of receive antennas at the vehicle side, and α_k, n are the path-loss coefficients of the different paths. The receive steering vector 𝐮(θ_k, n) has a similar expression to (<ref>). The vehicle's receive beamforming vector is derived based on the two-step prediction in <cit.>, given as 𝐯_n=𝐮(θ̂_n | n-2). Here, we define the receive signal-to-noise ratio (SNR) as SNR_r= p|ζ∑_k=1^Kα_k, n𝐯_n^T 𝐮(θ_k, n) 𝐚^T(θ_k, n) 𝐟_n|^2/σ_c^2 where σ_c^2 denotes the variance of the white Gaussian noise.

§ FRAME STRUCTURES AND CASE STUDIES AT MMWAVE FREQUENCY BAND IN V2I NETWORKS

In this section, we introduce the conventional communication-only and the proposed ISAC-based frame structures of NR at mmWave frequencies, and provide case studies of three essential stages in NR beam management to analyze the superiority of sensing-assisted communications in V2I networks.

§.§ General Frame Structure and Reference Signals in 5G NR

Similar to LTE, the OFDM waveform is adopted in NR. However, unlike LTE with only one type of numerology, up to 7 frame structures with different numerologies μ are supported in NR, as defined by the 3GPP specification <cit.>. The relationship between the subcarrier spacing and the numerology can be expressed as Δ f = 2^μ· 15 kHz, μ∈ℕ, μ≤6, where μ∈[2,6] are used in the mmWave frequency band. The time durations of one radio frame and one subframe are 10 ms and 1 ms respectively, regardless of the numerology, with each subframe further divided into 2^μ slots. Each slot comprises either 14 symbols (normal CP) or 12 symbols (extended CP). The smallest physical resource in NR is called a resource element (RE), which occupies one subcarrier in the frequency domain and one symbol in the time domain. Each resource block (RB) contains 12 subcarriers in the frequency domain, and the whole frequency band is made up of multiple RBs. Moreover, up to 61 slot formats for normal CP are supported in Time Division Duplex (TDD) mode to assign the function at symbol level (i.e., downlink, uplink or flexible) in each time slot. In 5G NR, reference signals are essential in facilitating various aspects of wireless communications, including channel estimation, beamforming and synchronization. Some key types of downlink reference signals in 5G NR are:

* Synchronization Signal Block (SSB): An SSB consists of both the synchronization signals and the physical broadcast channel (PBCH). The synchronization signals are composed of the primary synchronization signal (PSS) and the secondary synchronization signal (SSS), both of which assist initial cell identification. The PBCH demodulation reference signals (DMRS) and the PBCH data constitute the PBCH, which carries the system information and user data. An SSB occupies 4 symbols in the time domain and 240 subcarriers in the frequency domain, as presented in Fig. <ref>.

* Demodulation Reference Signal (DMRS): DMRS plays a key role in the coherent demodulation of the PDSCH. It provides the necessary reference for accurate demodulation of the received data symbols. Various mapping types, density options and additional DMRS configurations are supported to cater to different system requirements.
* Channel State Information Reference Signal (CSI-RS): CSI-RS is utilized for acquiring downlink channel state information and for beam refinement. Up to 32 ports are supported in NR, providing options for multiple antenna configurations. The choice of codebook depends on factors such as the number of ports, the panel type and the number of users in MIMO systems, enabling adaptive transmission strategies in diverse network scenarios. Upon receiving the CSI-RS, the UE processes the signal and extracts crucial parameters which form the basis for reporting back to the gNB. Through CSI-RS feedback, the report sent back to the gNB from the UE contains parameters such as the rank indicator (RI), the precoding matrix indicator (PMI) and the channel quality information (CQI). The PMI allows the UE to recommend its preferred precoding matrix for the downlink transmission, which helps in beam refinement and beam switching. The configuration of the CSI-RS in the time domain can be periodic, aperiodic or semi-persistent, depending on its usage and intended purpose. For channel estimation and channel quality monitoring, the period of the CSI-RS transmission and its corresponding feedback is chosen from a set of discrete values, denoted as T_CSI-RS∈{4, 5, 8, 10, 16, 20, 32, 40, 64, 80, 160, 320, 640} slots.

* Phase Tracking Reference Signal (PTRS): To compensate for the common phase error caused by phase noise generated in local oscillators, the PTRS is introduced. It assists in mitigating the adverse effects of phase noise on the received signal and improves the accuracy of phase tracking.

§.§ Initial Access

Initial access (IA) plays a critical role in the NR protocol, whereby idle users initialize connections with the network by establishing a reliable communication link with the gNB <cit.>. This process aims to realize downlink and uplink synchronization and to assign users a specific ID for the upcoming communication. After that, a beam refinement process is required between the gNB and the user, which leads to better signaling quality via optimized beamforming performance.

§.§.§ IA in conventional NR

IA in 5G NR can be summarized into three stages: beam sweeping, beam measurement and determination, and beam reporting.

* Beam sweeping: Beam sweeping is achieved by transmitting multiple SSBs in the downlink direction. An SS burst set comprises L_max SSBs, where L_max∈{4,8,64} varies with the numerology and frequency band <cit.>. Each SSB in the SS burst set is beamformed towards a certain angle so that the whole set sweeps the coverage area in both the azimuth and elevation directions. A period of 20 ms can be assumed by the user for initial cell search. Although up to 64 SSBs are supported in an SS burst set, all of them are transmitted in the first 5 ms of the 20 ms period.

* Beam measurement and determination: The user determines the best SSB beam by measuring the SS reference signal received power (SS-RSRP), which is defined as the linear average over the power contributions of the resource elements that carry the SSS <cit.>: SS-RSRP (in dBm) = 10log_10(1/N∑_n=1^N | 𝐗[n] |^2)+30 where 𝐗[n] and N denote the SSS resource elements and their number, respectively. Meanwhile, downlink synchronization is realized by performing matched filtering to estimate the time and frequency offsets. After PBCH decoding, the useful system information can be extracted.

* Beam reporting: The feedback of the best SSB beam is transmitted uplink in the random access channel (RACH).
After receiving the RACH preamble, the random access response (RAR) is beamformed and transmitted downlink in the direction of the best SSB beam. Once the user receives the RAR, beam establishment and synchronization in IA are considered complete.

§.§.§ IA in ISAC NR

The beam sweeping procedure in conventional communication-only NR IA switches beams to cover each subsection of the coverage area, leading to a substantial connection delay. In contrast, the sensing ability of the ISAC signal provides real-time monitoring of the movement of the vehicles and can initialize synchronization and tracking as soon as a target enters the coverage area of the gNB. To be more specific, the gNB leverages the omnidirectional radar signal to detect whether new targets enter the coverage area while the communication is in idle mode. Applying a 2D-DFT to the channel transfer information 𝐑̃_i in (<ref>) displays the received signal in the delay-Doppler domain as 𝐘_i = 𝐅^H_M𝐑̃_i𝐅_L= 𝐅^H_M𝐗_i𝐅_L + 𝐅^H_M𝐙̃_i𝐅_L where 𝐅_L∈ℂ^L × L and 𝐅^H_M∈ℂ^M × M denote the L-point DFT matrix and the M-point IDFT matrix, respectively. Following the derivation in <cit.>, the target presence detection problem at the ith antenna, the mth subcarrier and the lth symbol can be formulated as the binary hypothesis test {ℋ_0: ℜ(𝐘_i,m,l)=ℜ(𝐅^H_M𝐙̃_i𝐅_L)_m, l; ℋ_1: ℜ(𝐘_i,m,l)=ℜ(𝐅^H_M𝐗_i𝐅_L + 𝐅^H_M𝐙̃_i𝐅_L)_m, l} which is further simplified as {ℋ_0: ℜ(𝐘_i,m,l) ∼𝒩(0, σ̃^2 / 2); ℋ_1: ℜ(𝐘_i,m,l) ∼𝒩(μ, σ̃^2 / 2)} where μ > 0, and ℋ_0 and ℋ_1 denote the hypotheses of the vehicle being absent and present, respectively. The threshold determined by the false alarm rate P_FA is |ℜ(𝐘_i,m,l)| ℋ_1≷ℋ_0 √(σ̃^2/2) Q^-1(P_FA). Once the angle and the location of the target are detected, the gNB sends a beamformed SSB signal in that specific direction and waits for the response to complete the IA procedure. Therefore, instead of transmitting 64 beams in the mmWave frequency band every 20 ms, only one beamformed synchronization signal is sent, minimizing the time consumed by the IA procedure. In the event that the gNB fails to detect the presence of the target, we consider the following fallback: if a vehicle remains unconnected within the gNB coverage area for a specified duration, such as 20 ms, without receiving the SSB signal, the vehicle initiates an uplink signal transmission to the gNB; the gNB then executes the conventional communication-only IA procedure.
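As a small illustration of the presence test above, the sketch below computes the decision threshold from a per-cell false alarm rate (via SciPy's inverse survival function, which plays the role of Q^-1) and applies it to a delay-Doppler map. The function names and parameter values are our own assumptions; note that P_FA here is defined per resource element, so the false alarm rate of the whole map grows with the number of cells tested.

```python
import numpy as np
from scipy.stats import norm

def presence_threshold(sigma2: float, p_fa: float) -> float:
    """Threshold gamma such that |Re(Y_{i,m,l})| > gamma declares H1,
    following sqrt(sigma2/2) * Q^{-1}(P_FA); Q^{-1} is norm.isf."""
    return np.sqrt(sigma2 / 2.0) * norm.isf(p_fa)

def target_present(Y: np.ndarray, sigma2: float, p_fa: float = 1e-6) -> bool:
    """Declare a new target if any delay-Doppler cell exceeds the threshold."""
    return bool(np.any(np.abs(Y.real) > presence_threshold(sigma2, p_fa)))

# Example: a noise-only map should (with high probability) yield False
rng = np.random.default_rng(0)
sigma2 = 1.0
noise = rng.normal(scale=np.sqrt(sigma2 / 2), size=(256, 64, 2))
Y = noise[..., 0] + 1j * noise[..., 1]
print(target_present(Y, sigma2))
```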
§.§ Connected Mode

After undergoing the IA procedure, the UE establishes and maintains a connection with the gNB, a state commonly referred to as the connected mode <cit.><cit.>. The connected mode enables the UE to access various services provided by the V2I network, such as real-time data transmission of the surrounding traffic and environment. In the connected mode, the UE and the gNB in the V2I network engage in ongoing interactions, including the exchange of control information, synchronization, handover procedures and quality-of-service (QoS) management. To ensure reliable and efficient communication between the UE and the gNB, the minimization of pilot and reference signals is vital in the connected mode to maintain high-speed data transmission. However, it is important to acknowledge that reducing the usage of pilot and reference signals can potentially increase the likelihood of demodulation errors or beam misalignment. Fortunately, the sensing ability of ISAC V2I systems offers novel opportunities to enhance the efficiency of beam tracking procedures while minimizing the need for excessive pilot and reference signals.

§.§.§ Connected Mode in conventional NR

In the connected mode of NR, maintaining precise matched filtering and coherent demodulation at the UE necessitates synchronization with the gNB. This synchronization is achieved through the periodic transmission of synchronization reference signals, specifically SSBs. Furthermore, in the connected mode, the periodic transmission of SSBs may also be considered a form of beam training. This process enables accurate beam alignment between the UE and the gNB, which is vital for enhancing the overall communication quality. To reduce the overhead of beam training, only a subset of the SSBs is transmitted periodically in the connected mode. Moreover, the SSB transmission period T_SS has more options than the fixed 20 ms period of initial access, namely T_SS=2^k·5 ms, k∈ℕ, k≤5. Subsequently, in preparation for data transmission, the Downlink Control Information (DCI), which provides essential scheduling information regarding the allocation of transmission resources to the UE, is conveyed through the Physical Downlink Control Channel (PDCCH). During the actual data transmission, the primary carrier of the transmitted data is the Physical Downlink Shared Channel (PDSCH). Within the PDSCH, multiple types of reference signals are mapped to aid various aspects of reception <cit.>, such as DMRS, CSI-RS and PTRS. In this study, we consider a single-layer SU-MIMO V2I network under the 5G NR framework with numerology μ=3, signifying that the system operates in the mmWave frequency band with a subcarrier spacing of 120 kHz. The SSB transmission period is still taken to be 20 ms, and only a subset of 8 SSB beams is transmitted instead of the whole set of 64 SSB beams. We adopt a widely used 5G NR frame structure, denoted as "DDDSU", in the V2I network, where "D", "S" and "U" represent the downlink slot, special slot and uplink slot, respectively <cit.>. Given the high mobility of the vehicles in the V2I network, we assume that the DMRS in the PDSCH adopts mapping type "A" and that one additional DMRS is added. To facilitate CSI estimation, the CSI-RS is configured with a periodicity of 5 slots, employing the maximum available 32 antenna ports. The CSI-RS feedback report in the uplink slot contains parameters such as the CQI and PMI for channel estimation, beam refinement and beam switching, and has the same period as the CSI-RS transmission. For simplicity, the PTRS is not considered in this study. The conventional communication-only NR frame structure of the proposed scenario is presented in Fig. <ref>.

§.§.§ Connected Mode in ISAC NR

Accurate tracking of the vehicle is the crucial element in ensuring high-quality communication in the V2I network. To minimize the requirement for excessive pilot signals, it is beneficial to employ an advanced prediction method, such as the EKF, for the precise estimation and tracking of the kinematic parameters associated with the vehicle. Following the derivation in <cit.> and based on the geometric relationships in Fig.
<ref>, the state evolution model can be summarized as {θ_n=θ_n-1-d_n-1^-1 v_n-1Δ T cosθ_n-1+ω_θ; d_n=d_n-1-v_n-1Δ T sinθ_n-1+ω_d; v_n=v_n-1+ω_v; β_n=β_n-1(1-d_n-1^-1 v_n-1Δ T sinθ_n-1)^2+ω_β} where θ_n, d_n, v_n and β_n denote the azimuth angle, distance, velocity and reflection coefficient at the nth slot, respectively. The state evolution model and the measurement model derived in (<ref>) and (<ref>) can be formulated in compact form as {State Evolution Model: 𝐱_n=𝐠(𝐱_n-1)+ω_n; Measurement Model: 𝐲_n=𝐱_n+𝐳_n} where 𝐱=[θ, d, v, β]^T and 𝐲=[θ̂, d̂, v̂, β̂]^T denote the state variable and the measurement variable respectively, 𝐠 is defined in (<ref>), and ω=[ω_θ, ω_d, ω_v, ω_β]^T and 𝐳=[z_θ, z_d, z_v, z_β]^T are the zero-mean Gaussian noises caused by the approximation and the measurement respectively, whose covariance matrices can be expressed as 𝐐_s=diag(σ_θ^2, σ_d^2, σ_v^2, σ_β^2) and 𝐐_m=diag(σ̂_θ^2, σ̂_d^2, σ̂_v^2, σ̂_β^2). The variances of the measurement noises are directly proportional to the CRBs from <cit.>. Before performing the EKF, the state evolution model 𝐠 needs to be linearized by deriving its Jacobian matrix, given as ∂𝐠/∂𝐱=[[1+v Δ T sinθ/d, v Δ T cosθ/d^2, -Δ T cosθ/d, 0]; [-v Δ T cosθ, 1, -Δ T sinθ, 0]; [0, 0, 1, 0]; [-(2 β v Δ T cosθ/d)ι, (2 β v Δ T sinθ/d^2)ι, -(2 βΔ T sinθ/d)ι, ι^2]] where ι=(1-v Δ T sinθ/d). Finally, following the standard steps of the procedure in <cit.>, the prediction and estimation of the EKF can be summarized as follows: 1) State Prediction: 𝐱̂_n | n-1=𝐠(𝐱̂_n-1), 𝐱̂_n+1 | n-1=𝐠(𝐱̂_n | n-1). 2) Linearization: 𝐆_n-1=∂𝐠/∂𝐱|_𝐱=𝐱̂_n-1, 𝐇_n=𝐈_4. 3) MSE Matrix Prediction: 𝐌_n | n-1=𝐆_n-1𝐌_n-1𝐆_n-1^H+𝐐_s. 4) Kalman Gain Calculation: 𝐊_n=𝐌_n | n-1𝐇_n^H(𝐐_m+𝐇_n 𝐌_n | n-1𝐇_n^H)^-1. 5) State Tracking: 𝐱̂_n=𝐱̂_n | n-1+𝐊_n(𝐲_n-𝐱̂_n | n-1). 6) MSE Matrix Update: 𝐌_n=(𝐈_4-𝐊_n 𝐇_n) 𝐌_n | n-1, where 𝐈_4 denotes the identity matrix of size four.
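For concreteness, the following Python sketch implements steps 1)-6) for the state 𝐱=[θ, d, v, β]^T, using the state evolution model and Jacobian given above. The slot duration and the noise covariances are illustrative placeholders rather than the paper's simulation values; since 𝐇_n=𝐈_4, the gain and update expressions simplify as shown.

```python
import numpy as np

DT = 1e-3  # slot duration Delta T [s]; an assumed placeholder value

def g(x):
    """State evolution model for x = [theta, d, v, beta]."""
    theta, d, v, beta = x
    iota = 1.0 - v * DT * np.sin(theta) / d
    return np.array([theta - v * DT * np.cos(theta) / d,
                     d - v * DT * np.sin(theta),
                     v,
                     beta * iota**2])

def G(x):
    """Jacobian of g, matching the 4x4 matrix given above."""
    theta, d, v, beta = x
    iota = 1.0 - v * DT * np.sin(theta) / d
    s, co = np.sin(theta), np.cos(theta)
    return np.array([
        [1 + v*DT*s/d,           v*DT*co/d**2,            -DT*co/d,            0],
        [-v*DT*co,               1,                       -DT*s,               0],
        [0,                      0,                        1,                  0],
        [-2*beta*v*DT*co/d*iota, 2*beta*v*DT*s/d**2*iota, -2*beta*DT*s/d*iota, iota**2],
    ])

def ekf_step(x_est, M_est, y, Qs, Qm):
    x_pred = g(x_est)                        # 1) state prediction
    Gn = G(x_est)                            # 2) linearization (H_n = I_4)
    M_pred = Gn @ M_est @ Gn.T + Qs          # 3) MSE matrix prediction
    K = M_pred @ np.linalg.inv(Qm + M_pred)  # 4) Kalman gain, with H_n = I_4
    x_new = x_pred + K @ (y - x_pred)        # 5) state tracking
    M_new = (np.eye(4) - K) @ M_pred         # 6) MSE matrix update
    return x_new, M_new
```

In use, the predicted angle θ̂_n|n-1 from step 1) is what feeds the beamformer 𝐟_n=𝐚(θ̂_n|n-1), while the measurement 𝐲_n comes from the radar estimates of the previous section.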
Frequent beam training and reference signal transmission are considered overhead in the communication system, since those time-frequency resources could otherwise be assigned to transmit useful data. This may limit the effectiveness of the communication performance, especially for high-mobility V2I links. Fortunately, with the utilization of ISAC in the NR V2I network, some of the reference signals can be reduced, thus improving the overall throughput, as detailed below. The CSI-RS mapped in the PDSCH in connected mode is mainly used for channel estimation. To be specific, the UE obtains channel information from the received CSI-RS and sends feedback to the gNB containing its preferred parameters, such as the PMI and RI. The downlink CSI-RS and the uplink feedback are useful in designing the transmission scheme for the next period, but they also cause considerable overhead, which reduces the throughput and data rate. We highlight that this dilemma can be tackled by applying the ISAC signaling approach in the NR network, which provides the CSI to the gNB based on the echo signals reflected by the vehicles. The accurate prediction and estimation of the EKF provide precise tracking of the target. Therefore, instead of performing beam training and transmitting reference signals periodically to find the best beam pairs and obtain the CSI, we propose to reduce the number of SSBs in transmission and to abolish the transmission of the CSI-RS. Since the SSBs now serve only to achieve synchronization, only one SSB is beamformed towards the predicted user direction. A repeated one can be transmitted in the same slot to utilize the resources efficiently. Moreover, in the ISAC NR frame structure, the REs previously occupied by the CSI-RS can now be used for actual downlink data. The frame structures of conventional NR and ISAC NR in one radio frame and one resource block are compared in Fig. <ref>. Based on the frame structure, the throughput of 5G NR can be expressed as Throughput (in Mbps) = 10^-6·∑_j=1^J(N_Layers^(j)· Q_M^(j)·(N_PRB^BW(j),μ·12/T_s^μ)·(1-BER^(j)-OH^(j))) where J, N_Layers, Q_M, N_PRB^BW,μ and T_s^μ denote the number of carriers in carrier aggregation, the number of MIMO layers, the modulation order, the number of available resource blocks, and the average OFDM symbol duration, respectively. Moreover, BER and OH represent the bit error rate and the overhead percentage. Compared with the conventional communication-only NR frame structure, the ISAC V2I NR frame offers significant reductions in the overhead caused by beam training and reference signals. Firstly, four dedicated time slots are allocated for beam training and synchronization purposes in the conventional frame structure; this number is reduced to just one time slot, used only for synchronization. Consequently, the overhead incurred by beam training is reduced by up to 75%. Secondly, the DMRS and CSI-RS occupy 42 REs and 32 REs, respectively, within one period and one RB in the conventional frame structure. In the ISAC frame structure, the overhead caused by the CSI-RS is removed, leading to an overall reference-signal overhead reduction of 32/(42+32)=43.24%. Furthermore, a tracking algorithm like the EKF only utilizes the received echo for analysis, which eliminates the need for uplink feedback in the ISAC V2I NR frame structure. By reducing the number of beam training slots, optimizing the deployment of reference signals and eliminating the need for uplink feedback, the ISAC V2I NR frame structure improves resource utilization and enhances the efficiency of V2I communication in the connected mode.
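The sketch below evaluates a simplified single-carrier version of the throughput expression above and reproduces the overhead bookkeeping just quoted. All parameter values are illustrative assumptions of ours, not the paper's configuration.

```python
def nr_throughput_mbps(n_layers, q_m, n_prb, t_s_s, ber, oh):
    """Single-carrier throughput following the expression above:
    1e-6 * N_layers * Q_M * (N_PRB * 12 / T_s) * (1 - BER - OH)."""
    return 1e-6 * n_layers * q_m * (n_prb * 12 / t_s_s) * (1 - ber - oh)

# mu = 3: 120 kHz SCS, 14 symbols per 0.125 ms slot -> ~8.93 us per symbol
t_s = 0.125e-3 / 14
print(f"{nr_throughput_mbps(1, 6, 66, t_s, 1e-3, 0.14):.0f} Mbps")

# Overhead reductions stated in the text
print(f"beam training: {1 - 1/4:.0%}, reference signals: {32 / (42 + 32):.2%}")
```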
§.§ Beam Failure and Recovery

The radio link quality in NR is fragile owing to the high mobility of users, the narrow beamwidth of pencil-like beams and possible blockage between the gNB and the UE. Such degradation of the radio link quality leads to the occurrence of beam failure. In order to cope with this frequently occurring situation, beam failure recovery (BFR) procedures are introduced in NR to help identify new beam pairs and recover from poor communication quality. In this subsection, the procedures of beam failure and recovery are compared between the conventional communication-only scheme and the proposed ISAC scheme, as depicted in Fig. <ref>.

§.§.§ BFR in conventional NR

BFR in 5G NR takes two steps to complete: beam failure detection and beam failure recovery.

* Beam failure detection: Beam failure detection is accomplished through the collaboration between the physical layer (L1) and the MAC layer (L2). The UE monitors the radio link quality constantly through the measurement of the L1 RSRP. Once the L1 RSRP of the current beam falls below a certain threshold, SS-rsrp or CSIRS-rsrp, L1 provides an indication of a beam failure instance (BFI) to L2. L2 starts a timer as soon as it receives the first BFI and increases the BFI counter by 1 for every BFI it receives. L2 triggers the beam failure instantly if the BFI counter reaches a certain threshold before the timer T_BFDtimer expires <cit.>, i.e., BFI_COUNTER ≥ BFI_max.

* Beam failure recovery: After detecting a beam failure, the recovery process is initiated by identifying candidate beams whose L1 RSRP surpasses a specified recovery threshold. The UE performs random access (RACH) using the best beam from the previous selection and waits for the random access response (RAR) from the gNB to complete the beam failure recovery process. Note that a radio link failure might be declared if the recovery time is exceeded or if the communication quality of the new beam pair remains poor.

§.§.§ BFR in ISAC NR

Beam failure detection in the conventional communication-only scheme requires constant monitoring of the L1 RSRP of either the SSB or the CSI-RS, which are only sent periodically. Consequently, the assessment of the radio link quality occurs only once within each period. This approach incurs a substantial time cost in identifying beam failure, since the minimum period of the CSI-RS and SSB is 4 time slots. To address this issue, we further propose to detect beam failure by monitoring the kinematic parameters of the target, in order to fully reap the active sensing capability. Specifically, an abrupt change in parameters such as range and velocity implies sudden blockage between the UE and the gNB in the V2I network when the UE has been successfully tracked in previous time slots. Hence, when both parameters exhibit sudden variations exceeding certain thresholds, i.e., Δ r_th and Δ v_th, and these variations persist beyond a specific number of time slots T_th, a beam failure between the UE and the gNB is declared. Recurrent beam failures happen in the mmWave band because mmWave signals are susceptible to physical obstructions and suffer from high path loss. Here, we present two potential approaches for the beam failure recovery process in the mmWave V2I network. Firstly, in order to preserve the quality of communications, the gNB can adopt the strategy of switching to the sub-6 GHz band and transmitting omnidirectional signals, thereby mitigating the adverse effects of the high attenuation of mmWave propagation. Secondly, to maintain high data rates and throughput, the gNB can leverage channel reciprocity by analyzing the angles of arrival of the uplink signal and beamforming multiple data streams based on the DOAs of the NLoS paths.
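One plausible reading of the detection rule above is to compare the echo-derived range and velocity measurements against their EKF-predicted tracks and require that both deviations persist. The following sketch implements that interpretation; the function names, thresholds and synthetic data are our own assumptions, not the paper's implementation.

```python
import numpy as np

def beam_failure(d_meas, v_meas, d_pred, v_pred, dr_th, dv_th, t_th):
    """Flag beam failure when measured range AND velocity deviate from the
    predicted tracks by more than dr_th / dv_th for >= t_th consecutive slots."""
    anomalous = (np.abs(d_meas - d_pred) > dr_th) & (np.abs(v_meas - v_pred) > dv_th)
    run = 0
    for a in anomalous:
        run = run + 1 if a else 0
        if run >= t_th:
            return True
    return False

# Example: a blocking vehicle at slot 40 shifts the echo-derived parameters
rng = np.random.default_rng(1)
n = 100
d_pred = 40.0 - 0.02 * np.arange(n)     # predicted (tracked) range [m]
v_pred = np.full(n, 20.0)               # predicted radial speed [m/s]
d_meas = d_pred + rng.normal(scale=0.05, size=n)
v_meas = v_pred + rng.normal(scale=0.10, size=n)
d_meas[40:] += 15.0                     # abrupt, persistent change after blockage
v_meas[40:] -= 12.0
print(beam_failure(d_meas, v_meas, d_pred, v_pred, 5.0, 5.0, t_th=12))  # True
```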
§ LINK-LEVEL SIMULATIONS

The simulation scenario depicted in Fig. <ref> represents a vehicle driving along a straight road through real building clusters in the city of Shenzhen. The wireless communication channel between the vehicle and the gNB is modeled as a clustered delay line channel, comprising both LoS and NLoS paths. In the communication-only simulation case, the gNB is equipped with a UPA of size 8×8 and employs 32 CSI-RS antenna ports. The transmit array is divided into 4 subarrays in both the horizontal and vertical directions, and each subarray utilizes 4 oversampled DFT beams in both directions, which form the type-I CSI-RS codebook to enable beamforming. In terms of positioning, the gNB is located at the origin of the coordinate system, with the UPA at a height of 8 meters. The vehicle's initial position is (25 m, 40 m, 1 m). Additionally, the vehicle drives at a nominal speed of 20 m/s with small fluctuations. The simulation time of the whole process is 4 seconds, which consists of 32000 time slots. Other simulation parameters are specified in Table <ref>.

§.§ Performance Comparison in Initial Access

In this subsection, we present a comparative analysis of the initial access performance of the communication-only scheme and the ISAC scheme. In the simulation scenario, the vehicle commences its trajectory from randomly selected positions along the straight road, at arbitrary instants within a time window of 0 to 20 ms. In the communication-only scheme, beam training utilizing 64 SSB beams requires a time duration of 5 ms out of the total 20 ms period and identifies the optimal beam based on the SS-RSRP. By contrast, the ISAC scheme takes advantage of the inherent sensing capabilities of the system. It employs radar-based measurements to detect the presence and kinematic parameters of vehicles. An aggregation of 10 slots is employed for phase accumulation to ensure accurate kinematic parameter estimation. Subsequently, precise directional synchronization signals are transmitted to facilitate synchronization between the target vehicle and the gNB. To evaluate the effectiveness of the ISAC scheme in IA, we first present the radar detection probability against the SSB sweeping accuracy in Fig. <ref>. In practical scenarios, the presence of noise introduces variability into the measurement of the communication reference signal power and into the radar detection of targets, leading to potential inaccuracies and missed detections. To gauge the precision of the beam angle, we employ the root mean squared error (RMSE) as the metric. Fig. <ref> shows the comparison between RSRP-based and radar-based beam identification. When the radar fails to detect the target's presence, the communication-only IA procedure is engaged. Consequently, the RMSE of the ISAC scheme is a weighted combination of two distinct scenarios: the RMSE under a missed radar detection, which corresponds to the communication-only RMSE, and the RMSE under a successful radar detection, which reflects the errors introduced by the radar system. In contrast to traditional approaches that rely solely on the predefined angles of the SSB beam grid, the ISAC scheme leverages radar detection capabilities, significantly improving the system's ability to precisely determine the location and dynamics of the vehicle during the initial access phase. This supplementary information plays a crucial role in optimizing the synchronization process between the target and the gNB, thereby improving the overall performance of the system.

§.§ Performance Comparison in Connected Mode

In this subsection, we present a comparative analysis of the performance of the conventional communication-only frame structure and the proposed ISAC frame structure in the connected mode. First, we evaluate the tracking performance for angle and range in Fig. <ref> and Fig. <ref> in terms of the cumulative distribution function (CDF) of the RMSE. Angles in the communication-only scheme obey the standard codebook, which leads to large quantization errors. Through the prediction and estimation procedures of the EKF, precise tracking of the angles is achieved, resulting in more accurate beamforming towards the target. The ISAC scheme offers advantages over the conventional communication-only scheme by enabling real-time measurement and prediction of the parameters of interest based on the information carried by the echoes of the signals.
This approach enhances the flexibility and precision of the tracking. Additionally, range tracking exhibits similar performance to angle tracking, with a larger number of antennas achieving more precise tracking. In Fig. <ref> and Fig. <ref>, we present a comparison of the communication performance of the communication-only scheme and the ISAC scheme using different metrics, i.e., BER and throughput. Fig. <ref> demonstrates a substantial improvement in BER achieved by the ISAC scheme compared to the communication-only scheme, particularly in the case of 64-antenna systems. Additionally, the presence of array gain leads to a significant overall improvement in BER performance in the 64-antenna scenario compared to the 16-antenna case. Furthermore, we evaluate the throughput, which is defined by (<ref>). It is important to note that the throughput is influenced by several factors, such as the number of CSI-RS antenna ports and the percentage of beam training. At lower SNR, the 64-antenna communication-only configuration exhibits superior performance to the 16-antenna ISAC configuration, attributed to the antenna gain, whereas with increasing SNR the 16-antenna ISAC setup outperforms the communication-only configuration due to its reduced overhead. Overall, the ISAC scheme exhibits a significant improvement in throughput due to the reduced beam training overhead caused by SSBs, the elimination of reference signals like the CSI-RS, and a slight improvement in BER. Notably, a crossover phenomenon is observed at high SNR between the communication-only 64-antenna and 16-antenna cases. This phenomenon arises because the number of CSI-RS antenna ports cannot exceed the actual number of antennas. As a result, only 16 CSI-RS antenna ports are used in the 16-antenna communication-only case, leading to a reduced reference-signal overhead that slightly improves the throughput. Moreover, it is noteworthy that the ISAC scheme affords an additional reduction in overhead by eliminating the transmission of the CSI-RS feedback in the uplink, a facet not explicitly depicted in the comparative analysis.

§.§ Performance Comparison in Beam Failure and Recovery

Accurate and expeditious detection of failures and timely recovery procedures are crucial considering the high mobility of vehicles and the highly dynamic channel conditions in V2I networks. Taking into consideration a noise power spectral density of -174 dBm/Hz and a bandwidth of 300 MHz, the noise power is calculated to be -89.23 dBm. The RSRP threshold for identifying beam failure is set 3 dB above the ambient noise floor. The beam failure scenario is designed to simulate a situation where a larger and faster vehicle obstructs the LoS path between the gNB and the tracked target, so that communication between the gNB and the target relies exclusively on NLoS paths. The comparative analysis of the beam failure detection probability under different SNRs is presented in Fig. <ref>, contrasting the communication-only RSRP-based approach and the ISAC kinematic-parameter-based approach. Notably, the RSRP-based scheme exhibits a diminishing detection rate with increasing SNR. This trend can be attributed to the circumstance where, despite the LoS path being obstructed, the received signal from the NLoS paths continues to surpass the established threshold, rendering beam failure detection challenging, while the communication quality remains compromised.
Conversely, the kinematic-parameter-based scheme demonstrates consistent performance across diverse SNR levels, maintaining its efficacy in failure detection. The communication performance evaluation of beam failure and recovery is carried out for both the communication-only and ISAC approaches in Fig. <ref> and Fig. <ref>. The simulation of beam failure and recovery was conducted with the receive SNR at 20 dB. Based on the 5G NR specifications, we assume that, out of a total timer of 7.5 ms, 6 or more BFIs are identified as a beam failure. Therefore, a minimum of 6 periods of the CSI-RS are required to detect the beam failure, which corresponds to a time duration of 3.75 ms. In the ISAC scheme, under constant measurement of the kinematic parameters of the target, we assume that an abrupt change in both velocity and range that persists for at least 12 slots within a 20-slot timer is identified as a beam failure. In Fig. <ref>, the results show that the communication-only scheme takes approximately 5 ms to identify the beam failure after it occurs, whereas the ISAC scheme achieves the same detection in just 2.5 ms. This significant reduction in detection time highlights the substantial improvement provided by the ISAC scheme. After beam failure detection, the recovery process in the communication-only scheme involves performing beam training to identify a new beam for re-establishing the connection. The time duration of the beam training process is not presented in Fig. <ref> or Fig. <ref> since it does not involve calculations of BER and throughput. In contrast, the first proposed solution in the ISAC scheme recovers by switching to the sub-6 GHz band with a carrier frequency of 5 GHz and numerology μ=1. This change in the frame structure leads to a drop in throughput; however, due to the smaller path loss at sub-6 GHz, the BER achieves better performance compared to mmWave. The ISAC scheme also offers a second solution that utilizes the analysis of the DOAs of the possible NLoS paths. By beamforming in the directions of the NLoS paths, the BER and throughput are improved compared with the traditional beam-training-based recovery process of the communication-only scheme.

§ CONCLUSION

In this paper, we have proposed improved NR frame structures for both initial access and connected mode in ISAC-based NR V2I networks. By efficiently employing the sensing ability and tracking algorithms such as the EKF, the utilization of pilot and reference signals is minimized, resulting in improved performance in localization, tracking and communication. Moreover, we have introduced efficient schemes for rapid beam failure detection and recovery by monitoring abrupt changes in the kinematic parameters of targets. The algorithm aims to detect beam failures promptly and restore the connection within a short time, while striving to maintain high communication quality. Finally, to validate the effectiveness of the proposed frame structures and algorithms, numerical results from link-level simulations have been provided, showing that the resulting communication performance in BER and throughput outperforms that of the conventional NR frame structures and transmission protocols.
http://arxiv.org/abs/2312.16381v1
{ "authors": [ "Yunxin Li", "Fan Liu", "Zhen Du", "Weijie Yuan", "Qingjiang Shi", "Christos Masouros" ], "categories": [ "eess.SP" ], "primary_category": "eess.SP", "published": "20231227023911", "title": "Frame Structure and Protocol Design for Sensing-Assisted NR-V2X Communications" }
Prospects for detecting fast transients with the radio telescopes of the Argentine Institute of Radio Astronomy

S. B. Araujo Furlan,1,2 E. Zubieta,3,4 G. Gancio,3 G. E. Romero,3 S. del Palacio,3,5 F. García,3 C. O. Lousto,6 and J. A. Combi3,4,7

January 14, 2024

1 Instituto de Astronomía Teórica y Experimental, CONICET-UNC, Laprida 854, X5000BGR, Córdoba, Argentina ([email protected]). 2 Facultad de Matemática, Astronomía, Física y Computación, UNC, Av. Medina Allende s/n, Ciudad Universitaria, X5000HUA, Córdoba, Argentina. 3 Instituto Argentino de Radioastronomía, CONICET–CICPBA–UNLP, Cno. Gral. Manuel Belgrano km 40, Pereyra, Buenos Aires, Argentina. 4 Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Paseo del Bosque s/n, B1900FWA, Argentina. 5 Department of Space, Earth and Environment, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden. 6 Center for Computational Relativity and Gravitation (CCRG), School of Mathematical Sciences, Rochester Institute of Technology, 85 Lomb Memorial Drive, Rochester, New York 14623, USA. 7 Departamento de Física, Universidad de Jaén, Campus Las Lagunillas s/n, 23071 Jaén, España.

Abstract. Currently, only 6 out of the 30 known magnetars have had pulsed radio emission detected. In this work, we evaluated the possibility of detecting radio transient events from magnetars with the telescopes of the Instituto Argentino de Radioastronomía (IAR). To this aim, we made daily observations of the magnetar XTE J1810-197 from 02-Sep-22 to 30-Nov-22. We analysed the observations by applying ephemeris folding and single-pulse searches. We fitted a timing model to our observations and were able to detect the magnetar on 6 of the 36 observing sessions, with signal-to-noise ratios at the limit of detectability, 3.3≤S/N≤4.1. We searched for individual pulses on one of these 6 days and found 7 individual pulses with 8.5≤S/N≤18.8. The dispersion measure changed slightly between pulses, within a range of 178 ≤ DM [pc cm^-3] ≤ 182. The pulse with S/N=18.8 has an associated DM of 180 pc cm^-3. We confirmed that we can detect pulsed radio emission in the band of 1400-1456 MHz from magnetars with a time resolution of 146 μs, being able to detect both integrated pulse profiles and individual pulses.
Keywords: Methods: observational; Methods: data analysis; Stars: magnetars; Stars: neutron; Radio continuum: general

§ INTRODUCTION

Magnetars are a particular class of young, slowly rotating neutron stars (P∼ 1–12 s) with extremely strong surface magnetic fields (B∼10^13–10^15 G). They exhibit a rich transient phenomenology, showing giant flares, short bursts, and outbursts, detected mainly at X-rays <cit.>. The energy for the observed X-ray and γ-ray emission is provided by the decaying magnetic fields <cit.>. Only 6 out of the 30 known magnetars have had radio emission detected so far <cit.>[<https://www.physics.mcgill.ca/~pulsar/magnetar/main.html>]. Five of them presented detectable transient radio pulsations, always associated with X-ray outbursts. The remaining one, SGR 1935+2154, showed Fast Radio Burst (FRB)-like bursts <cit.>. Studying the pulsed radio emission of magnetars is a tool to probe their spectral and temporal phenomenology. This emission has a switch on-off behaviour, going through quiescent states. It also shows great pulse-to-pulse variability, in both single and averaged pulses <cit.>. Additionally, magnetars have long been suspected to be the source of FRBs. This hypothesis has been strongly supported by the recent detection of an FRB-like episode from SGR 1935+2154. Magnetars have been detected over a wide range of frequencies. For example, the magnetar XTE J1810-197 has been detected at frequencies as low as 300 MHz <cit.> and as high as 353 GHz <cit.>. In this context, we aim to assess the observational capabilities offered by the radio telescopes at the Argentine Institute of Radio Astronomy (IAR, after the acronym in Spanish), with a special focus on FRB-like phenomena. Previous efforts on the detection of magnetars were reported in <cit.>, where we presented the results of observations of the magnetar XTE J1810-197 following its last outburst in late 2018. Here we present an analysis of the prospects for detecting transient radio emission from magnetars and compact objects with the recently updated radio telescopes Carlos M. Varsavsky (A1) and Esteban Bajaja (A2) at IAR <cit.>. These instruments can observe sources within a declination range of -90<δ<-10, with a maximum time on-source of ∼ 3h40m. For this preliminary investigation, we observed XTE J1810-197, since this source had previously been detected with both A1 and A2 <cit.>. XTE J1810-197 was the first magnetar found to emit in radio <cit.>. It has had two outbursts, the most recent one in late 2018 <cit.>. Different studies were performed on the time variability of the average radio profile, single pulses, flux density, and spectral index in the years following this outburst <cit.>. The most recent study incorporates observations until April 2021 <cit.>. An interesting result was the detection of Giant Pulse (GP)-like emission <cit.> and GPs <cit.> from this magnetar.
Here we present a summary of the main results of the observations of XTE J1810-197 made in the second half of 2022 from IAR.

§ OBSERVATIONS

The IAR radio observatory is located in the Pereyra Iraola Park in Argentina. It has two single-dish radio telescopes of 30-metre diameter each. The acquisition module of each antenna has been updated during the past couple of years <cit.>; nowadays, we routinely perform observations with ETTUS boards on the receivers. Each antenna has two of these boards, with a bandwidth of 56 MHz for each polarisation. The boards of A1 are configured as consecutive bands; this configuration gives a resulting bandwidth of 112 MHz with a total of 128 channels for a single polarisation. In the case of A2, the boards are configured to add both polarisation signals, measuring total power; this setting has a total bandwidth of 56 MHz. Both instruments have the same maximum observing frequency, 1456 MHz. We started a high-cadence monitoring campaign of the source on 02-Sep-22 with A1 and on 27-Sep-22 with A2. The observations were taken with a sampling time of 146 μs, a frequency resolution of 0.875 MHz (from 27-Sep-22 to 02-Oct-22 the acquisition was configured with 32 channels of 1.75 MHz resolution), and an average total time on source of 2.4 h. Observations with A1 were strongly affected by radio-frequency interference (RFI); we thus focused the analysis on observations made with A2, as they were cleaner. The time span analysed goes from 27-Sep-22 until 30-Nov-22, with a total of 37 days observed, summing ∼72 h on source.

§ REDUCTION AND DATA ANALYSIS

We used two independent methods in searching for emission from the magnetar: one for obtaining an integrated pulse profile, and another one for searching for single pulses bright enough to be individually detected.

§.§ Ephemeris folding

The search for integrated pulse profiles was done with the PuMA pipeline[<https://github.com/PuMA-Coll/PuMA>]. The pipeline performs RFI excision with the rfifind task from PRESTO[<https://github.com/scottransom/presto>] and then folds the observation with prepfold. The signal-to-noise ratio (S/N) of the magnetar was too low to fit its period on a daily basis. Instead, we adopted an iterative procedure consisting of i) folding the observations with the most updated timing solution, ii) computing the times of arrival (TOAs) for the observations with S/N ≳ 3, and iii) fitting the residuals with tempo2 <cit.> to improve the timing model. In step i), we used as a seed ephemeris the one reported in <cit.>. In step ii), we calculated the S/N using PyPulse <cit.>. We iterated this process until the timing solution converged.

§.§ Search of single pulses

We followed the process described in <cit.> to detect single pulses and radio transients. First, we corrected the data for the dispersion caused by the interstellar medium: PRESTO's prepsubband task corrects a filterbank file for dispersion and creates time series for each dispersion measure (DM) value in a previously specified range. The radio pulse emission from the magnetar should have the highest S/N for the dedispersed time series at the magnetar's DM. Each time series is obtained as a .dat and a .inf file. We used the mask previously created by the PuMA pipeline for RFI excision. We made 400 time series, with a DM step of 1 pc cm^-3, starting at a DM of 100 pc cm^-3; we searched within such a broad range of DMs in order to also look for other transient events, such as FRBs. We then searched for single pulses within each time series, employing single_pulse_search.py from PRESTO with an S/N threshold of 8.
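To make the single-pulse step concrete, the following minimal sketch (our illustration, not the PRESTO implementation) applies a boxcar matched-filter search with the S/N threshold of 8 to one dedispersed time series; the sampling time is the 146 μs quoted above, while the injected pulse and all array values are invented for the example.

import numpy as np

def single_pulse_search(series, widths=(1, 2, 4, 8, 16), threshold=8.0):
    # Toy boxcar search for single pulses in one dedispersed time series.
    candidates = []
    for w in widths:
        kernel = np.ones(w) / np.sqrt(w)      # preserves the noise level of unit-variance data
        smoothed = np.convolve(series, kernel, mode="same")
        med = np.median(smoothed)
        sigma = 1.4826 * np.median(np.abs(smoothed - med))   # robust noise estimate
        snr = (smoothed - med) / sigma
        for i in np.flatnonzero(snr > threshold):
            candidates.append((i, w, snr[i]))
    return candidates

t_samp = 146e-6                        # sampling time [s]
rng = np.random.default_rng(0)
data = rng.normal(size=2**16)          # pure-noise series, arbitrary units
data[30000:30013] += 6.0               # inject a ~13-bin (~1.9 ms) pulse
for i, w, snr in single_pulse_search(data):
    print(f"t = {i * t_samp:.4f} s, boxcar = {w} bins, S/N = {snr:.1f}")

In the actual analysis this search is repeated for each of the 400 trial-DM time series, and a genuine pulse is expected to peak in S/N near the magnetar's DM.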
This script returns a list of candidates for each time series; the output list can contain both astrophysical signals and RFI. It also makes a diagnostic plot of the DM value, the S/N, and the time of each candidate, which we inspected visually. To discriminate RFI we employed a machine-learning classifier designed to identify astrophysical signals among RFI <cit.>, optimised for the discrimination of FRBs; its final output is the list of single pulses most likely to be of astrophysical origin. We ran the classifier both with and without the option that disables its filtering of candidates (with this option, it does not filter the pulses reported). We cross-checked the diagnostic plot of the single-pulse search against the results of the classifier.

§ RESULTS

§.§ Ephemeris folding

We did the folding as described in <ref>. With the fitted timing model, we detected radio emission on 6 out of 37 days, with 3.3 ≤ S/N ≤ 4.1. The parameter values of the model are given in Table <ref>. We detected pulsating periodic emission on 27 and 29 September, 18 and 21 October, and 3 and 6 November. In Figure <ref> we show the profile for 29-Sep-22, the day with S/N = 4.1. The pulse, centred at ∼0.65 of the pulse phase, can be seen both in the dynamic spectrum and in the integrated pulse profile; in the phase vs frequency plot, it appears brighter on the outer parts of the observed bandwidth.

§.§ Single pulse search

Here we present the search for single pulses in the data taken on 29-Sep-22 with A2; the analysis of the remaining days will be reported in a future work. The diagnostic plot showed 7 candidates centred around the magnetar's DM. When we ran the classifier with the non-filtering option, it selected 6 of the pulses; without that option, it recognised only 2. In Table <ref> we present the MJD, the time of the pulse from the start of the observation, the S/N, and the DM at which the search obtained the best S/N. The pulse at t = 2492.18 s was the one that the classifier did not recognise in either search; the remaining ones were recognised with the non-filtering option, and the only pulses reported without it were those at t = 1649.71 s and t = 4326.79 s.

We visually inspected the time series for DM = 180 pc cm^-3, as it is the DM value assigned to the highest-S/N pulse. In Figure <ref> we show the pulses with S/N = 18.8 and S/N = 12.9; the latter reached that S/N at an associated DM = 181 pc cm^-3. The right panel shows that the pulse is recognisable at another DM, while the left panel shows the highest-S/N pulse. The vertical axis is power in arbitrary units, as we did not have a flux calibration. The pulse centred at t = 1649.71 s extends over ∼13 time bins, corresponding to (1.9 ± 0.1) ms, while the pulse at t = 4326.79 s extends over 11 time bins, that is, (1.6 ± 0.1) ms.

§ DISCUSSION AND CONCLUSIONS

We explored the possibility of detecting fast radio transients with the radio telescopes of the Argentine Institute of Radio Astronomy. We detected integrated pulse profiles from the magnetar XTE J1810-197 on 6 out of 37 days of observations, with S/N close to the limit of detectability. The emission probably corresponds to the afterglow of the 2018 outburst. As a successful detection of the integrated pulse profile greatly depends on the employed timing solution, improving our model will lead to better pulse profiles and more detections. As shown in Table <ref>, we use a rather simple model; increasing its complexity and adapting it to our observations will be necessary for improvement.
We successfully detected 7 single pulses from XTE J1810-197 in the observation made on 29-Sep-22, with a sampling time of 146 μs and S/N > 8 for each of the single pulses; the S/N values of the individual pulses are shown in Table <ref>. The classifier changes the number of reported pulses depending on whether its filtering is disabled: with the filtering active, we obtained only 2 of the 6 pulses reported otherwise. All the pulses were centred around the magnetar's DM, within a range of 178 ≤ DM [pc cm^-3] ≤ 182; the DM shown in Table <ref> is the value for which the search obtains the highest S/N for each pulse. The remaining reported pulses have 8.5 ≤ S/N ≤ 14.6; the lower limit of S/N = 8 corresponds to the threshold chosen for the single-pulse search. As the sets of pulses reported by the search script and by the classifier differ, we conclude that a visual inspection of the diagnostic plot and of the time series is necessary for a correct interpretation of the results. As our observations show an important presence of RFI, especially A1's, we want to study the effect of employing other RFI-excision procedures in our search for single pulses: in <cit.> it was concluded that combining two excision methods greatly increased the S/N of A1's pulse profiles, while for A2 the improvement was not as significant. We aim to study the benefits of employing both excisions for the search of individual pulses.

Previous studies reported single-pulse radio emission up to MJD 59300 <cit.>; we detected single-pulse emission on MJD 59851.8. Given the S/N values obtained for the pulses, we suspect that they may be Giant Pulses. A direct estimate of the peak flux density of the brightest pulse, employing the radiometer equation for a temporally resolved pulse <cit.>,

S^SP_peak = (S/N)_peak 2 k_B T_sys / ( A_e(z) √(n_p W Δν) ),

yields S ∼ (41 ± 4) Jy. In this expression, T_sys is the system temperature, A_e(z) is the effective collecting area as a function of the zenith angle z, Δν is the observed bandwidth, n_p is the number of polarisations (two for A2), and (S/N)_peak is the peak S/N of the pulse, corresponding to a smoothing optimal for its observed width W. We did not smooth the time series; instead, we used the reported (S/N)_peak obtained from the search, with the apparent width of the pulse and the aperture efficiency taken from <cit.>.
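As a rough numerical cross-check (ours; the instrument parameters below are round numbers of the right order for a 30-m dish, not the calibration values of <cit.>), the radiometer estimate can be evaluated as follows.

import numpy as np

k_B = 1.380649e-23            # Boltzmann constant [J/K]

# Values quoted in the text
snr_peak = 18.8               # peak S/N of the brightest pulse
W = 1.9e-3                    # apparent pulse width [s]
bw = 56e6                     # A2 bandwidth [Hz]
n_p = 2                       # number of polarisations for A2

# Assumed instrument parameters (illustrative only)
T_sys = 100.0                 # system temperature [K]
eta = 0.35                    # aperture efficiency
A_e = eta * np.pi * 15.0**2   # effective area of a 30-m dish [m^2]

S_peak = snr_peak * 2 * k_B * T_sys / (A_e * np.sqrt(n_p * W * bw))
print(f"S_peak ~ {S_peak / 1e-26:.0f} Jy")    # ~45 Jy with these assumptions

With these illustrative inputs one obtains a few tens of Jy, consistent in order of magnitude with the (41 ± 4) Jy quoted above.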
It seems clear that the pulses are of significant intensity. We are currently searching for pulses in the remaining observed days of this campaign; the monitoring of the magnetar XTE J1810-197 has been ongoing since September 2022. We have demonstrated in this study that we are able to detect transient radio events with IAR's radio telescopes with a sampling time of 146 μs at frequencies of 1400-1456 MHz.

Acknowledgements: S. B. Araujo Furlan thanks Marcus E. Lower for the discussion regarding the source observed in this study. We thank the staff of IAR for help with the instruments during the observing campaign.

[Bochenek et al.(2020)] Bochenek, C. D., Ravi, V., Belov, K. V., et al. 2020, Nature, 587, 59
[Caleb et al.(2022)] Caleb, M., Rajwade, K., Desvignes, G., et al. 2022, MNRAS, 510, 1996
[Camilo et al.(2006)] Camilo, F., Ransom, S. M., Halpern, J. P., et al. 2006, Nature, 442, 892
[CHIME/FRB Collaboration et al.(2020)] CHIME/FRB Collaboration, Andersen, B. C., Bandura, K. M., et al. 2020, Nature, 587, 54
[Dai et al.(2019)] Dai, S., Lower, M. E., Bailes, M., et al. 2019, ApJ, 874, L14
[Del Palacio et al.(2018)] Del Palacio, S., García, F., Combi, L., et al. 2018, The Astronomer's Telegram, 12323
[Duncan & Thompson(1992)] Duncan, R. C. & Thompson, C. 1992, ApJ, 392, L9
[Eie et al.(2021)] Eie, S., Terasawa, T., Akahori, T., et al. 2021, PASJ, 73, 1563
[Esposito et al.(2020)] Esposito, P., Rea, N., Borghese, A., et al. 2020, ApJ, 896, L30
[Esposito et al.(2021)] Esposito, P., Rea, N., & Israel, G. L. 2021, Timing Neutron Stars: Pulsations, Oscillations and Explosions, ASSL, 461, 97
[Gancio et al.(2020)] Gancio, G., Lousto, C. O., Combi, L., et al. 2020, A&A, 633, A84. doi:10.1051/0004-6361/201936525
[Hewitt et al.(2022)] Hewitt, D. M., Snelders, M. P., Hessels, J. W. T., et al. 2022, MNRAS, 515, 3577
[Hobbs et al.(2006)] Hobbs, G. B., Edwards, R. T., & Manchester, R. N. 2006, MNRAS, 369, 655
[Kaspi & Beloborodov(2017)] Kaspi, V. M. & Beloborodov, A. M. 2017, ARA&A, 55, 261
[Lam(2017)] Lam, M. T. 2017, Astrophysics Source Code Library
[Levin et al.(2019)] Levin, L., Lyne, A. G., Desvignes, G., et al. 2019, MNRAS, 488, 5251
[Lousto et al.(2022)] Lousto, C. O., Missel, R., Prajapati, H., et al. 2022, MNRAS, 509, 5790
[Lyne et al.(2018)] Lyne, A., Levin, L., Stappers, B., et al. 2018, The Astronomer's Telegram, 12284
[Maan & Aswathappa(2014)] Maan, Y. & Aswathappa, H. A. 2014, MNRAS, 445, 3221
[Maan et al.(2019)] Maan, Y., Joshi, B. C., Surnis, M. P., et al. 2019, ApJ, 882, L9
[Maan et al.(2022)] Maan, Y., Surnis, M. P., Chandra Joshi, B., et al. 2022, ApJ, 931, 67
[Manchester et al.(2005)] Manchester, R. N., Hobbs, G. B., Teoh, A., et al. 2005, AJ, 129, 1993
[Michilli et al.(2018)] Michilli, D., Hessels, J. W. T., Lyon, R. J., et al. 2018, MNRAS, 480, 3457
[Olausen & Kaspi(2014)] Olausen, S. A. & Kaspi, V. M. 2014, ApJS, 212, 6
[Torne et al.(2022)] Torne, P., Bell, G. S., Bintley, D., et al. 2022, ApJ, 925, L17
http://arxiv.org/abs/2312.16333v1
{ "authors": [ "Susana Beatriz Araujo Furlan", "Ezequiel Zubieta", "Guillermo Gancio", "Gustavo Esteban Romero", "Santiago del Palacio", "Federico García", "Carlos Oscar Lousto", "Jorge Ariel Combi" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231226211054", "title": "Prospects for Detecting Fast Transients with the Radio Telescopes of the Argentine Institute of Radio Astronomy" }
Department of Mathematics, Hebei Normal University, 050016 Shijiazhuang, P.R. China ([email protected])
Department of Mathematics, Hebei Normal University, 050016 Shijiazhuang, P.R. China ([email protected])

2020 Mathematics Subject Classification: Primary 47B13, 47B32; Secondary 32L05, 47B35.

Let L^2_a(𝔻) be the classical Bergman space and denote by M_h the multiplication operator by a function h. Let B be a finite Blaschke product of order n. An open question proposed by R. G. Douglas is whether the operator M_B on L^2_a(𝔻) is similar to ⊕_1^n M_z on ⊕_1^n L^2_a(𝔻). The question was answered in the affirmative, not only for the Bergman space but also for many other Hilbert spaces with reproducing kernels. Since the operator M_z^* is in the Cowen-Douglas class B_1(𝔻) in many cases, Douglas's question can be expressed as a version for operators in B_1(𝔻), and it is affirmative for many operators in B_1(𝔻). A natural question is: what about Douglas's question in the version for operators in the Cowen-Douglas class B_n(𝔻) (n > 1)? In this paper, we investigate a family of operators, which lie in a norm-dense subclass of the Cowen-Douglas class B_2(𝔻), and give a negative answer. This indicates that Douglas's question cannot be directly generalized to general Hilbert spaces with vector-valued analytic reproducing kernels.

On the similarity of powers of operators with flag structure

Jianming Yang and Kui Ji

Dec. 27th, 2023
============================================================

§ INTRODUCTION

Let 𝔻 be the open unit disk in ℂ and 𝔻̄ be the closed unit disk. A function B on 𝔻 is said to be a finite Blaschke product of order n if it has the form

B(z) = e^{iθ} ∏_{j=1}^{n} (z - a_j)/(1 - ā_j z),  z ∈ 𝔻,

for some θ ∈ [0, 2π) and some a_1, …, a_n ∈ 𝔻. Let L^2_a(𝔻) be the classical Bergman space on 𝔻 and denote by M_h the multiplication operator on L^2_a(𝔻) by a function h ∈ Hol(𝔻̄), the set of functions analytic on 𝔻̄. A previously open question proposed by R. G. Douglas (Question 6 in <cit.>) is whether the operator M_B on L^2_a(𝔻) is similar to ⊕_1^n M_z on ⊕_1^n L^2_a(𝔻), where n is the order of the finite Blaschke product B. The question was answered in the affirmative (see <cit.> or <cit.>). Later, the question was also answered in the affirmative on many other analytic function Hilbert spaces, such as the weighted Bergman spaces A^2_α (see <cit.>), the Sobolev disk algebra R(𝔻) (see <cit.> or <cit.>), and the Dirichlet space 𝔇 (see <cit.>). Recently, in <cit.>, Hou and Jiang proved that the question still holds on the weighted Hardy spaces of polynomial growth, which cover the weighted Bergman spaces, the weighted Dirichlet spaces, and many weighted Hardy spaces defined without measures.

Douglas's question originated in the study of reducing subspaces of analytic multiplication operators on the Bergman space. Before the question was proposed, there had been many studies of the reducing subspaces of the multiplication operator by a finite Blaschke product (see <cit.>, <cit.>, and <cit.>).
An application of Douglas's question, together with the techniques of strongly irreducible operators and the K_0-group, is the similarity classification of analytic Toeplitz operators (e.g. Theorem 1.1 in <cit.>), which is analogous to the corresponding result on the Hardy space (see <cit.>).

Note that M_B is also equal to the analytic functional calculus B(M_z); we can thus describe Douglas's question in the following general form: let T be a bounded linear operator on a complex separable Hilbert space H with σ(T) ⊆ 𝔻̄; then, for any finite Blaschke product B, is the operator B(T) similar to ⊕_1^n T on ⊕_1^n H, where n is the order of B?

Obviously, an operator T satisfies Douglas's question if and only if so does its adjoint T^*. Since the adjoints M_z^* of the multiplication operators M_z by z on many analytic function Hilbert spaces are in the Cowen-Douglas class B_1(𝔻) proposed in <cit.>, the works mentioned earlier correspond to Douglas's question in the case that T is in B_1(𝔻). How about, then, Douglas's question for operators in the Cowen-Douglas class B_n(𝔻) (n > 1)?

For a long time, there has been a lack of sufficient understanding of the Cowen-Douglas class B_n(Ω) in the higher-rank case. A new but important subclass FB_n(Ω) has been introduced by G. Misra in <cit.>. All irreducible homogeneous operators in B_n(𝔻) are in FB_n(𝔻) (see <cit.>), and the class FB_n(Ω) (or even its subclass CFB_n(Ω)) is norm dense in B_n(Ω) (see <cit.>). G. Misra et al. proved that operators in FB_n(Ω) possess a flag structure. It is also proved that the flag structure is rigid, that is, the unitary equivalence class of the operator and the flag structure determine each other (see <cit.>). In this paper, we investigate a family of operators in the class FB_2(𝔻) and give a negative answer to Douglas's question.

Denote by Hol(𝔻) the set of all functions analytic on 𝔻 and by Hol(𝔻̄) the set of all functions analytic on 𝔻̄. Let H_α be a weighted Hardy space on 𝔻 with weight sequence α and denote by M_{α,h} the multiplication operator by h on H_α. Let H_β be another weighted Hardy space on 𝔻 with weight sequence β and denote by M_{α,β,h} the multiplication operator by h from H_α to H_β. These concepts will be introduced in detail later.

The following theorem is the main theorem of this paper.

Theorem. Let H_α and H_β be two Möbius-invariant weighted Hardy spaces with property A for some integer n_0 ≥ 2. Suppose the weight sequences α and β satisfy

sup_{k ≥ 0} β(k)/α(k) < ∞  and  lim_{k → ∞} α(k)/(k β(k)) = 0.

Denote by T the operator

[ M_{α,z}^*  M_{α,β,h}^* ; 0  M_{β,z}^* ]

on H_α ⊕ H_β, where h ∈ Hol(𝔻̄). If for any finite Blaschke product B, B(T) is similar to ⊕_1^n T, where n is the order of B, then h = 0.

In other words, under the hypotheses of Theorem <ref>, for any non-zero function h in Hol(𝔻̄), the operator

[ M_{α,z}^*  M_{α,β,h}^* ; 0  M_{β,z}^* ]

does not satisfy Douglas's question.

§ PRELIMINARIES

§.§ Weighted Hardy space

In the present article, we denote the power series coefficients of an analytic function f by f̂, i.e. f(z) = ∑_{k=0}^∞ f̂(k) z^k.

§.§.§

Let α = {α(k)}_{k=0}^∞ be a positive sequence; then the weighted Hardy space H_α with weight sequence α on 𝔻 is defined as

H_α = { f ∈ Hol(𝔻) : ∑_{k=0}^∞ |f̂(k)|^2 α(k)^2 < ∞ }.

H_α carries the inner product

⟨f, g⟩_α = ∑_{k=0}^∞ f̂(k) \overline{ĝ(k)} α(k)^2,  f, g ∈ H_α.

In the present article, we always assume the weight sequence α satisfies

lim_{k → ∞} α(k+1)/α(k) = 1.

In this case, H_α is a Hilbert space and contains Hol(𝔻̄). What's more, if a complex sequence a = {a(k)} satisfies ∑_{k=0}^∞ |a(k)|^2 α(k)^2 < ∞ and a function f is defined as f(z) = ∑_{k=0}^∞ a(k) z^k, then f ∈ Hol(𝔻) and hence f ∈ H_α.

The norm of f ∈ H_α is ‖f‖_α = (∑_{k=0}^∞ |f̂(k)|^2 α(k)^2)^{1/2}. Let e_k(z) = z^k, z ∈ 𝔻; then {e_k}_{k=0}^∞ forms an orthogonal basis of H_α (thus, H_α is separable).
In addition, ‖e_k‖_α = α(k) and f = ∑_{k=0}^∞ f̂(k) e_k in the sense of the norm ‖·‖_α. It can be checked that, for each ω ∈ 𝔻, the linear functional δ_ω : f ∈ H_α → f(ω) ∈ ℂ is continuous. Then, by the Riesz representation theorem, H_α has a reproducing kernel {k_ω}_{ω ∈ 𝔻}, i.e. for each ω ∈ 𝔻, k_ω ∈ H_α and ⟨f, k_ω⟩_α = f(ω) whenever f ∈ H_α.

§.§.§

Let H_α be a weighted Hardy space. The multiplier algebra Mult(H_α) of H_α is defined as the set of all h ∈ Hol(𝔻) such that h f ∈ H_α for any f ∈ H_α. For each h ∈ Mult(H_α), one can define a multiplication operator M_{α,h} : f ∈ H_α → h f ∈ H_α, which is bounded by the closed graph theorem. Sometimes M_{α,h} is abbreviated as M_h. As is well known, Mult(H_α) ⊆ H^∞(𝔻) ∩ H_α, where H^∞(𝔻) is the set of all bounded analytic functions on 𝔻. The mapping h ∈ Mult(H_α) → M_{α,h} ∈ ℒ(H_α) is an algebraic monomorphism with 1 → I, and, for each h ∈ Mult(H_α), M_{α,h} is invertible if and only if 1/h ∈ Mult(H_α).

As is well known, assumption (<ref>) guarantees that the multiplication operator M_{α,z} is a bounded linear operator on H_α with ‖M_{α,z}‖ = sup_{k ≥ 0} α(k+1)/α(k). Denote by {M_{α,z}}' the commutant algebra of M_{α,z}, i.e. the set of all X ∈ ℒ(H_α) such that X M_{α,z} = M_{α,z} X. It can be verified that X ∈ {M_{α,z}}' if and only if X = M_{α,h} for some h ∈ Mult(H_α).

Furthermore, assumption (<ref>) also guarantees that Hol(𝔻̄) ⊆ Mult(H_α) (see Lemma 3.4 in <cit.> or see <cit.>). It can then be verified that the spectrum σ(M_{α,z}) of M_{α,z} is contained in 𝔻̄ and that, for any h ∈ Hol(𝔻̄), the analytic functional calculus h(M_{α,z}) is M_{α,h}.

Let H_α and H_β be two weighted Hardy spaces. The multiplier space Mult(H_α, H_β) from H_α to H_β is defined as the set of all h ∈ Hol(𝔻) such that h f ∈ H_β for any f ∈ H_α. For each h ∈ Mult(H_α, H_β), one can define a multiplication operator M_{α,β,h} : f ∈ H_α → h f ∈ H_β, which is bounded by the closed graph theorem. Sometimes M_{α,β,h} is abbreviated as M_h. Obviously, Mult(H_α, H_β) ⊆ H_β. In addition, it can also be verified that X ∈ ℒ(H_α, H_β) satisfies X M_{α,z} = M_{β,z} X if and only if X = M_{α,β,h} for some h ∈ Mult(H_α, H_β).

A simple example of a weighted Hardy space involved in the present article is the Hilbert space H_(λ). For any λ ∈ ℝ, H_(λ) is defined as the weighted Hardy space with weight sequence

α_λ(k) = (k+1)^λ,  k = 0, 1, ….

This type of space contains many classical analytic function spaces on 𝔻: for example, the classical Hardy space H^2(𝔻) (λ = 0), the classical Bergman space L^2_a(𝔻) (λ = -1/2), and the classical Dirichlet space 𝔇 (λ = 1/2).

A more involved example of a weighted Hardy space involved in the present article is the weighted Hardy space of polynomial growth, which has been studied in <cit.> recently. A weighted Hardy space H_α (where one usually assumes α(0) = 1) is called of polynomial growth if the weight sequence α satisfies

sup_{k ∈ ℕ} (k+1) |α(k)/α(k-1) - 1| < ∞.

The weighted Hardy spaces of polynomial growth cover the weighted Bergman spaces, the weighted Dirichlet spaces, and many weighted Hardy spaces defined without measures.
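As a worked check (ours, not taken from the references), each space H_(λ) is of polynomial growth: for α_λ(k) = (k+1)^λ one has

(k+1) |α_λ(k)/α_λ(k-1) - 1| = (k+1) |(1 + 1/k)^λ - 1| = (k+1) |λ/k + O(k^{-2})| → |λ|,  k → ∞,

so the supremum in the definition above is finite.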
§.§ Cowen-Douglas operators

Recall some basic concepts of the Cowen-Douglas operator class B_n(Ω), which was introduced in <cit.>, and of the subclass FB_2(Ω) of B_2(Ω), which was introduced in <cit.>. Let H be a complex separable Hilbert space and let ℒ(H) denote the collection of bounded linear operators on H. For Ω a connected open subset of ℂ and n a positive integer, let B_n(Ω) denote the set of operators T in ℒ(H) which satisfy:

(a) Ω ⊆ σ(T) = {ω ∈ ℂ : T - ω is not invertible};
(b) ran(T - ω) = H for ω ∈ Ω;
(c) ⋁_{ω ∈ Ω} ker(T - ω) = H; and
(d) dim ker(T - ω) = n for ω ∈ Ω.

As is well known, the adjoint M_{α,z}^* of the multiplication operator M_{α,z} acting on the weighted Hardy space H_α is in the class B_1(𝔻).

The operator class FB_2(Ω) is the set of all bounded linear operators T of the form

[ T_0  S ; 0  T_1 ],

where T_0 and T_1 are in the class B_1(Ω) and the operator S is a non-zero intertwiner between them, i.e. T_0 S = S T_1.

Let H_α and H_β be two weighted Hardy spaces. Throughout this paper, we denote by T_{α,β,h} the operator

[ M_{α,z}^*  M_{α,β,h}^* ; 0  M_{β,z}^* ]

on H_α ⊕ H_β, for a function h in Mult(H_α, H_β). One can see that T_{α,β,h} is in the class FB_2(𝔻) whenever h is a non-zero function.

§ TWO PROPOSITIONS

Let H_α and H_β be two weighted Hardy spaces, and let T = T_{α,β,h}, where h is a non-zero function in Mult(H_α, H_β). To investigate whether, for any finite Blaschke product B, B(T) is similar to ⊕_1^n T, where n is the order of B, we treat B in two cases: the case B(z) = e^{iθ}(z-a)/(1-āz) and the case B(z) = z^n. In <ref> we deal with the first case, the main point being Proposition <ref>. In <ref> we deal with the second case, the main points being Proposition <ref> and Proposition <ref>. Before that, in <ref>, we first prove the following two propositions: Proposition <ref> and Proposition <ref>.

In this paper, we write T ∼ T̃ for two bounded linear operators T and T̃ if T is similar to T̃ (i.e. there exists an invertible operator X such that T X = X T̃).

Let T_0 and T_1 be two bounded linear operators on Hilbert spaces H_0 and H_1, respectively. Denote σ_{T_0,T_1}(X) := T_0 X - X T_1 for any X ∈ ℒ(H_1, H_0); this defines a linear operator σ_{T_0,T_1} : ℒ(H_1, H_0) → ℒ(H_1, H_0). Let σ_T be the operator σ_{T,T}.

From Lemma 2.18 in <cit.> and Theorem 2.19 in <cit.>, a useful conclusion can be obtained: for any T in the class B_1(Ω), ran σ_T ∩ ker σ_T = {0}. We need to slightly generalize this.

Proposition. Let H_α and H_β be two weighted Hardy spaces. If the weight sequences α and β satisfy the condition

lim_{k → ∞} α(k)/(k β(k)) = 0,

then ran σ_{M_{α,z}^*, M_{β,z}^*} ∩ ker σ_{M_{α,z}^*, M_{β,z}^*} = {0}.

Proof. Clearly, we just need to explain that, for any Y ∈ ℒ(H_α, H_β), the equations Y M_{α,z} = M_{β,z} Y and Y = X M_{α,z} - M_{β,z} X, for some X ∈ ℒ(H_α, H_β), imply Y = 0.

The equation Y M_{α,z} = M_{β,z} Y is equivalent to Y = M_{α,β,h} for some h ∈ Mult(H_α, H_β). We then have X M_{α,z} = M_{β,z} X + M_{α,β,h}, hence X z^k = z X z^{k-1} + h z^k, k = 1, 2, …. Let g := X z^0 ∈ H_β. By induction, X z^k = z^k g + k z^{k-1} h, k = 1, 2, ….

Let X z^k = ∑_{j=0}^∞ x_{jk} z^j, k = 0, 1, …. Then

⟨X z^k, z^l⟩ = ∑_{j=0}^∞ x_{jk} ⟨z^j, z^l⟩ = x_{lk} β(l)^2,

where k, l = 0, 1, …. On the other hand,

z^k g(z) = ∑_{j=0}^∞ ĝ(j) z^{j+k} = ∑_{j=0}^∞ ĝ(j-k) z^j,  z^k h(z) = ∑_{j=0}^∞ ĥ(j-k) z^j,

where k = 0, 1, …, and ĝ(j) = 0 = ĥ(j) whenever j < 0. Hence,

⟨X z^k, z^l⟩ = ⟨z^k g + k z^{k-1} h, z^l⟩ = ĝ(l-k) β(l)^2 + k ĥ(l-k+1) β(l)^2,

where k = 1, 2, … and l = 0, 1, …. Note that X z^0 = g, so equation (<ref>) still holds for k = 0. From (<ref>) and (<ref>), it follows that

x_{lk} = ĝ(l-k) + k ĥ(l-k+1),  k, l = 0, 1, ….

Let t = 0, 1, …, k = 1, 2, …, and l = k+t-1. Then x_{k+t-1,k} = ĝ(t-1) + k ĥ(t). Hence,

x_{k+t-1,k}/k = ĝ(t-1)/k + ĥ(t) → ĥ(t),  k → ∞,  for t = 0, 1, ….
From condition (<ref>) and assumption (<ref>), we can see that

α(k)/(k β(k+t-1)) → 0,  k → ∞,  for t = 0, 1, ….

From (<ref>), it follows that

|x_{k+t-1,k}| = |⟨X z^k, z^{k+t-1}⟩| / β(k+t-1)^2 ≤ ‖X‖ ‖z^k‖_α ‖z^{k+t-1}‖_β / β(k+t-1)^2 = (α(k)/β(k+t-1)) ‖X‖.

Hence,

|x_{k+t-1,k}|/k ≤ (α(k)/(k β(k+t-1))) ‖X‖ → 0,  k → ∞,  for t = 0, 1, ….

From (<ref>) and (<ref>), it can be observed that ĥ(t) = 0 for t = 0, 1, …. This implies h = 0 and consequently Y = M_{α,β,h} = 0.

Example. Let λ, μ ∈ ℝ. The weight sequences of the Hilbert spaces H_(λ) and H_(μ) are α_λ(k) = (k+1)^λ and α_μ(k) = (k+1)^μ, respectively. These weight sequences satisfy condition (<ref>) if and only if λ - μ < 1.

The following lemma, which can be used to deal with two similar operators in FB_2(Ω), is Proposition 3.3 in <cit.>.

Lemma. If X is an invertible operator intertwining two operators T and T̃ in FB_2(Ω), i.e. X T = T̃ X, then X is upper triangular,

X = [ X_{11}  X_{12} ; 0  X_{22} ],

and the operators X_{11} and X_{22} are invertible.

Proposition. Let H_α and H_β be two weighted Hardy spaces whose weight sequences satisfy condition (<ref>). If the operators T_{α,β,h} and T_{α,β,h̃} are similar, where h and h̃ are two non-zero functions in Mult(H_α, H_β), then there exist h_1 ∈ Mult(H_α) and h_2 ∈ Mult(H_β) such that 1/h_1 ∈ Mult(H_α), 1/h_2 ∈ Mult(H_β), and h = h̃ h_1 h_2.

Proof. Since T := T_{α,β,h} ∼ T̃ := T_{α,β,h̃}, there exists an invertible operator X such that X T = T̃ X. According to Lemma <ref>, since T and T̃ are in FB_2(𝔻), X is upper triangular,

X = [ X_{11}  X_{12} ; 0  X_{22} ],

and the operators X_{11} and X_{22} are invertible. From X T = T̃ X, we have

X_{11} M_{α,z}^* = M_{α,z}^* X_{11},  X_{22} M_{β,z}^* = M_{β,z}^* X_{22},  X_{11} M_{α,β,h}^* + X_{12} M_{β,z}^* = M_{α,z}^* X_{12} + M_{α,β,h̃}^* X_{22}.

The first two equations show that X_{11} = M_{α,g_1}^* and X_{22} = M_{β,g_2}^* for some g_1 ∈ Mult(H_α) and g_2 ∈ Mult(H_β). The third equation shows that

X_{11} M_{α,β,h}^* - M_{α,β,h̃}^* X_{22} = σ_{M_{α,z}^*, M_{β,z}^*}(X_{12}).

Since M_{α,g_1}^* = X_{11} and M_{β,g_2}^* = X_{22} are invertible, we have 1/g_1 ∈ Mult(H_α) and 1/g_2 ∈ Mult(H_β). What's more, we can directly calculate

σ_{M_{α,z}^*, M_{β,z}^*}(X_{11} M_{α,β,h}^* - M_{α,β,h̃}^* X_{22}) = 0.

Therefore, X_{11} M_{α,β,h}^* - M_{α,β,h̃}^* X_{22} lies in ran σ_{M_{α,z}^*, M_{β,z}^*} ∩ ker σ_{M_{α,z}^*, M_{β,z}^*}. According to Proposition <ref>, this implies X_{11} M_{α,β,h}^* = M_{α,β,h̃}^* X_{22} and consequently M_{α,β,h} M_{α,g_1} = M_{β,g_2} M_{α,β,h̃}. Thus h g_1 = g_2 h̃. Finally, take h_1 = 1/g_1 and h_2 = g_2, and the proof is complete.

Remark. Let H_α and H_β be two weighted Hardy spaces and let h, h̃ ∈ Mult(H_α, H_β). If there exist h_1 ∈ Mult(H_α) and h_2 ∈ Mult(H_β) such that 1/h_1 ∈ Mult(H_α), 1/h_2 ∈ Mult(H_β), and h = h̃ h_1 h_2, then T_{α,β,h} and T_{α,β,h̃} are similar. To see this, take the invertible operator

X = [ (M_{α,h_1}^{-1})^*  0 ; 0  M_{β,h_2}^* ];

then, by a simple calculation, X T_{α,β,h} X^{-1} = T_{α,β,h̃}, which implies that T_{α,β,h} and T_{α,β,h̃} are similar.
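For the reader's convenience, we spell out this simple calculation (the expansion is ours; it uses only the fact that multiplication operators commute with one another):

X T_{α,β,h} X^{-1} = [ (M_{α,h_1}^{-1})^* M_{α,z}^* M_{α,h_1}^*   (M_{α,h_1}^{-1})^* M_{α,β,h}^* (M_{β,h_2}^{-1})^* ; 0   M_{β,h_2}^* M_{β,z}^* (M_{β,h_2}^{-1})^* ].

The diagonal blocks equal (M_{α,h_1} M_{α,z} M_{α,h_1}^{-1})^* = M_{α,z}^* and (M_{β,h_2}^{-1} M_{β,z} M_{β,h_2})^* = M_{β,z}^*, while the off-diagonal block equals (M_{β,h_2}^{-1} M_{α,β,h} M_{α,h_1}^{-1})^* = (M_{α,β, h/(h_1 h_2)})^* = M_{α,β,h̃}^*, since h = h̃ h_1 h_2.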
§ THE CASE OF B(z) = e^{iθ}(z-a)/(1-āz)

Recall the basic concept of weakly homogeneous operators, introduced by Clark and Misra <cit.>. Denote by Aut(𝔻) the analytic automorphism group of 𝔻, that is, the set of all analytic bijections from 𝔻 to itself. As is well known, a function ϕ is in Aut(𝔻) if and only if it has the form ϕ(z) = e^{iθ}(z-a)/(1-āz), z ∈ 𝔻, for some θ ∈ [0, 2π) and some a ∈ 𝔻. A bounded linear operator T on a complex separable Hilbert space H is called weakly homogeneous if σ(T) ⊆ 𝔻̄ and, for any ϕ ∈ Aut(𝔻), ϕ(T) is similar to T.

We say a weighted Hardy space H_α is Möbius invariant if, for each ϕ ∈ Aut(𝔻), f∘ϕ ∈ H_α whenever f ∈ H_α. Let H_α be a Möbius-invariant weighted Hardy space. Then, for each ϕ ∈ Aut(𝔻), one can define a composition operator C_{α,ϕ} : f ∈ H_α → f∘ϕ ∈ H_α, which is bounded by the closed graph theorem. What's more, C_{α,ϕ} is invertible and C_{α,ϕ}^{-1} = C_{α,ϕ^{-1}}. By a simple calculation, ϕ(M_{α,z}) C_{α,ϕ} = M_{α,ϕ} C_{α,ϕ} = C_{α,ϕ} M_{α,z}; thus M_{α,z} is a weakly homogeneous operator. It is known that, for each λ ∈ ℝ, the Hilbert space H_(λ) is Möbius invariant (see <cit.>, <cit.>), and Theorem 3.1 in <cit.> shows that each weighted Hardy space of polynomial growth is Möbius invariant.

The main idea of the next proposition originates from Theorem 3.16 in <cit.>; here we use Proposition <ref> to improve it in part.

Proposition. Let H_α and H_β be two Möbius-invariant weighted Hardy spaces. Suppose the weight sequences α and β satisfy condition (<ref>). If the operator T_{α,β,h} is weakly homogeneous, where h ∈ C(𝔻̄) ∩ Mult(H_α, H_β), then h is zero everywhere on 𝔻̄ or h is non-zero everywhere on 𝔻̄.

Proof. If h is a zero function on 𝔻, then h is a zero function on 𝔻̄, since h ∈ C(𝔻̄). So we suppose that h is a non-zero function on 𝔻. Let ϕ ∈ Aut(𝔻) and let ϕ̃ be the function defined by ϕ̃(z) = \overline{ϕ(z̄)}; then ϕ̃ is also in Aut(𝔻). Since T := T_{α,β,h} is weakly homogeneous, ϕ̃(T) ∼ T. From M_{α,z}^* M_{α,β,h}^* = M_{α,β,h}^* M_{β,z}^*, it can be calculated that

ϕ̃(T) = [ ϕ̃(M_{α,z}^*)  ϕ̃'(M_{α,z}^*) M_{α,β,h}^* ; 0  ϕ̃(M_{β,z}^*) ] = [ M_{α,ϕ}^*  M_{α,β,hϕ'}^* ; 0  M_{β,ϕ}^* ].

Möbius invariance guarantees that the composition operators C_{α,ϕ} and C_{β,ϕ} are invertible, with C_{α,ϕ}^{-1} = C_{α,ϕ^{-1}} and C_{β,ϕ}^{-1} = C_{β,ϕ^{-1}}. Then, by a simple calculation,

[ C_{α,ϕ}^*  0 ; 0  C_{β,ϕ}^* ] ϕ̃(T) [ C_{α,ϕ}^*  0 ; 0  C_{β,ϕ}^* ]^{-1} = [ M_{α,z}^*  M_{α,β,(h∘ϕ^{-1})(ϕ'∘ϕ^{-1})}^* ; 0  M_{β,z}^* ].

So T = T_{α,β,h} ∼ ϕ̃(T) ∼ T_{α,β,(h∘ϕ^{-1})(ϕ'∘ϕ^{-1})}.

Since ϕ is a bijection from 𝔻 to itself and ϕ'(z) = e^{iθ}(1-|a|^2)/(1-āz)^2 is non-zero everywhere on 𝔻, the function (h∘ϕ^{-1})(ϕ'∘ϕ^{-1}) is non-zero on 𝔻. Consequently, according to Proposition <ref>, there exist h_1 = h_{1,ϕ} ∈ Mult(H_α) and h_2 = h_{2,ϕ} ∈ Mult(H_β) such that 1/h_1 ∈ Mult(H_α), 1/h_2 ∈ Mult(H_β), and h = (h∘ϕ^{-1})(ϕ'∘ϕ^{-1}) h_1 h_2. Thus,

h(ϕ(z)) = h(z) ϕ'(z) h_1(ϕ(z)) h_2(ϕ(z)),  z ∈ 𝔻.

Then, as in the last part of the proof of Theorem 3.16 in <cit.>, we obtain that h is non-zero everywhere on 𝔻̄.

Remark. Let H_α and H_β be two Möbius-invariant weighted Hardy spaces and let h ∈ Hol(𝔻̄) ∩ Mult(H_α, H_β). If h is zero everywhere on 𝔻̄ or h is non-zero everywhere on 𝔻̄, then T = T_{α,β,h} is weakly homogeneous.

Proof. The statement is trivial if h is zero everywhere on 𝔻̄, so we suppose that h is non-zero everywhere on 𝔻̄; then 1/h ∈ Hol(𝔻̄). Let ϕ ∈ Aut(𝔻) and let ϕ̃ be defined by ϕ̃(z) = \overline{ϕ(z̄)}. From the proof of Proposition <ref>, we see that ϕ̃(T) ∼ T_{α,β,(h∘ϕ^{-1})(ϕ'∘ϕ^{-1})}. It is not hard to see that the function h̃ := (h∘ϕ^{-1})(ϕ'∘ϕ^{-1}) is in Hol(𝔻̄) and non-zero everywhere on 𝔻̄; then 1/h̃ ∈ Hol(𝔻̄). Since h = h̃ · (h/h̃) and Hol(𝔻̄) is contained in both Mult(H_α) and Mult(H_β), according to Remark <ref> we get T = T_{α,β,h} ∼ T_{α,β,h̃} ∼ ϕ̃(T). Since ϕ̃ is arbitrary in Aut(𝔻), we obtain that T is weakly homogeneous.

§ THE CASE OF B(z) = z^n

§.§ Power of a special operator

Let H_α be a weighted Hardy space and n a positive integer. Denote

H_{α,j} := ⋁ {z^{j+kn}}_{k=0}^∞ ⊆ H_α,  j = 0, 1, …, n-1.

Then it can be verified that H_α = H_{α,0} ∔ H_{α,1} ∔ ⋯ ∔ H_{α,n-1} is a Hilbert direct sum, and {z^{j+kn}}_{k=0}^∞ is an orthogonal basis of H_{α,j}. What's more, H_{α,j} is an invariant subspace of the multiplication operator M_{α,z^n} = M_{α,z}^n.

We say that a weighted Hardy space H_α has property A for a positive integer n if there exist c_1 = c_1(n) > 0 and c_2 = c_2(n) > 0 such that

α(k) ≤ c_1 α(j+kn)  and  α(j+kn) ≤ c_2 α(k)

for any j = 0, 1, …, n-1 and k = 0, 1, ….

Inspired by <cit.>, we give the following lemma.

Lemma. Let H_α be a weighted Hardy space with property A for a positive integer n. Then, for each j = 0, 1, …, n-1, the map

X_{α,j} : f = ∑_{k=0}^∞ f̂(k) z^k ∈ H_α → ∑_{k=0}^∞ f̂(k) z^{j+kn} ∈ H_{α,j}

is an invertible bounded linear operator.
Moreover, for each j = 0, 1, …, n-1, M_{α,z}^n|_{H_{α,j}} X_{α,j} = X_{α,j} M_{α,z}. Thus, M_{α,z}^n is similar to ⊕_1^n M_{α,z}.

Proof. Let f = ∑_{k=0}^∞ f̂(k) z^k ∈ H_α; then

‖∑_{k=0}^∞ f̂(k) z^{j+kn}‖^2 = ∑_{k=0}^∞ |f̂(k)|^2 α(j+kn)^2 ≤ c_2^2 ∑_{k=0}^∞ |f̂(k)|^2 α(k)^2 = c_2^2 ‖f‖^2,

which implies that g = ∑_{k=0}^∞ f̂(k) z^{j+kn} ∈ H_{α,j} and ‖g‖ ≤ c_2 ‖f‖. Thus X_{α,j} is not only well defined but also a bounded linear operator. Obviously, X_{α,j} is injective.

Let g = ∑_{k=0}^∞ g̃(k) z^{j+kn} ∈ H_{α,j}; then

‖∑_{k=0}^∞ g̃(k) z^k‖^2 = ∑_{k=0}^∞ |g̃(k)|^2 α(k)^2 ≤ c_1^2 ∑_{k=0}^∞ |g̃(k)|^2 α(j+kn)^2 = c_1^2 ‖g‖^2,

which implies that f = ∑_{k=0}^∞ g̃(k) z^k ∈ H_α and g = X_{α,j} f. Thus X_{α,j} is surjective, and therefore invertible. The remaining conclusions can be easily checked.

Remark. By assumption (<ref>), a weighted Hardy space H_α has property A for a positive integer n if and only if there exist c_1 = c_1(n) > 0 and c_2 = c_2(n) > 0 such that

c_1 ≤ α(n-1+kn)/α(k) ≤ c_2

for any k = 0, 1, ….

Remark. For each λ ∈ ℝ, the weighted Hardy space H_(λ) has property A for any positive integer n. In fact, this follows from Remark <ref>, since

α_λ(n-1+kn)/α_λ(k) = n^λ

for any k = 0, 1, ….

Remark. Each weighted Hardy space H_α of polynomial growth has property A for any positive integer n. Indeed, that H_α is of polynomial growth means that there exists N ∈ ℕ such that for each k ∈ ℕ,

(k+1)/(k+N+1) ≤ α(k)/α(k-1) ≤ (k+N+1)/(k+1)

(see <cit.>). Then, for each k = 0, 1, …, whenever n-1+kn > k,

∏_{j=k+1}^{n-1+kn} (j+1)/(j+N+1) ≤ α(n-1+kn)/α(k) = ∏_{j=k+1}^{n-1+kn} α(j)/α(j-1) ≤ ∏_{j=k+1}^{n-1+kn} (j+N+1)/(j+1).

From

∏_{j=k+1}^{n-1+kn} (j+N+1)/(j+1) = ∏_{j=1}^{N} (n(k+1)+j)/(k+1+j) ≤ ∏_{j=1}^{N} n = n^N,

we get 1/n^N ≤ α(n-1+kn)/α(k) ≤ n^N, which holds for each k = 0, 1, …. Thus this remark follows from Remark <ref>.

Remark. Let H_α and H_β be two weighted Hardy spaces. It is not hard to check that the following conditions are equivalent: (a) H_α ⊆ H_β; (b) 1 ∈ Mult(H_α, H_β); (c) sup_{k ≥ 0} β(k)/α(k) < ∞. If one of these conditions holds, the multiplication operator M_{α,β,1} is the inclusion mapping ι : H_α → H_β. In addition, Hol(𝔻̄) ⊆ Mult(H_α) ⊆ Mult(H_α, H_β).

Example. Let λ, μ ∈ ℝ. The weighted Hardy spaces H_(λ) and H_(μ) satisfy H_(λ) ⊆ H_(μ) if and only if λ - μ ≥ 0.

Proposition. Let H_α and H_β be two weighted Hardy spaces with property A for a positive integer n, and suppose H_α ⊆ H_β. Denote T = T_{α,β,1}; then T^n is similar to T ⊕ (⊕_{j=1}^{n-1} T̃), where T̃ = T_{α,β,z}.

Proof. It is enough to prove that (T^*)^n ∼ T^* ⊕ (⊕_{j=1}^{n-1} T̃^*), where

T^* = [ M_{α,z}  0 ; M_{α,β,1}  M_{β,z} ],  T̃^* = [ M_{α,z}  0 ; M_{α,β,z}  M_{β,z} ].

From M_{α,β,1} M_{α,z} = M_{β,z} M_{α,β,1}, it can be calculated that

(T^*)^n = [ M_{α,z}^n  0 ; n M_{α,β,z^{n-1}}  M_{β,z}^n ].

Since

[ I  0 ; 0  (1/n) I ] (T^*)^n [ I  0 ; 0  (1/n) I ]^{-1} = [ M_{α,z}^n  0 ; M_{α,β,z^{n-1}}  M_{β,z}^n ] =: A,

it follows that (T^*)^n ∼ A.

Let

K_0 = H_{α,0} ⊕ H_{β,n-1},  K_1 = H_{α,1} ⊕ H_{β,0},  …,  K_{n-1} = H_{α,n-1} ⊕ H_{β,n-2};

then H_α ⊕ H_β = K_0 ∔ K_1 ∔ ⋯ ∔ K_{n-1} is a Hilbert direct sum. One can check that H_{α,j} and H_{β,j} are invariant subspaces of M_{α,z}^n and M_{β,z}^n, respectively, and that

M_{α,β,z^{n-1}}(H_{α,0}) ⊆ H_{β,n-1},  M_{α,β,z^{n-1}}(H_{α,1}) ⊆ H_{β,0},  …,  M_{α,β,z^{n-1}}(H_{α,n-1}) ⊆ H_{β,n-2}.

Hence K_0, K_1, …, K_{n-1} are invariant subspaces of A. To complete the proof, it is enough to prove that

A|_{K_0} ∼ T^*,  A|_{K_1} ∼ T̃^*,  …,  A|_{K_{n-1}} ∼ T̃^*,

which implies that (T^*)^n ∼ A ∼ T^* ⊕ (⊕_{j=1}^{n-1} T̃^*).

It is not hard to see that, denoting by P_{α,j} and P_{β,j} the orthogonal projections onto H_{α,j} and H_{β,j},

A|_{K_0} = [ M_{α,z}^n|_{H_{α,0}}  0 ; P_{β,n-1} M_{α,β,z^{n-1}} P_{α,0}  M_{β,z}^n|_{H_{β,n-1}} ],
A|_{K_1} = [ M_{α,z}^n|_{H_{α,1}}  0 ; P_{β,0} M_{α,β,z^{n-1}} P_{α,1}  M_{β,z}^n|_{H_{β,0}} ],
⋯
A|_{K_{n-1}} = [ M_{α,z}^n|_{H_{α,n-1}}  0 ; P_{β,n-2} M_{α,β,z^{n-1}} P_{α,n-1}  M_{β,z}^n|_{H_{β,n-2}} ].

Lemma <ref> shows that, for any j = 0, 1, …, n-1, the maps X_{α,j} : H_α → H_{α,j} and X_{β,j} : H_β → H_{β,j} are invertible operators. Take the invertible operators

X_0 = [ X_{α,0}  0 ; 0  X_{β,n-1} ],  X_1 = [ X_{α,1}  0 ; 0  X_{β,0} ],  …,  X_{n-1} = [ X_{α,n-1}  0 ; 0  X_{β,n-2} ].
By calculation, we obtain that M_{α,z}^n|_{H_{α,j}} X_{α,j} = X_{α,j} M_{α,z} and M_{β,z}^n|_{H_{β,j}} X_{β,j} = X_{β,j} M_{β,z}, and that

P_{β,n-1} M_{α,β,z^{n-1}} P_{α,0} X_{α,0} = X_{β,n-1} M_{α,β,1},
P_{β,0} M_{α,β,z^{n-1}} P_{α,1} X_{α,1} = X_{β,0} M_{α,β,z},
⋯
P_{β,n-2} M_{α,β,z^{n-1}} P_{α,n-1} X_{α,n-1} = X_{β,n-2} M_{α,β,z}.

The above equations indicate that

A|_{K_0} X_0 = X_0 T^*,  A|_{K_1} X_1 = X_1 T̃^*,  …,  A|_{K_{n-1}} X_{n-1} = X_{n-1} T̃^*,

and hence A|_{K_0} ∼ T^*, A|_{K_1} ∼ T̃^*, …, A|_{K_{n-1}} ∼ T̃^*.

§.§ Dissimilarity

In the remaining part of this section, we need some conclusions related to strongly irreducible operators ('strongly irreducible' is also abbreviated as '(SI)'). Denote by K_0(ℬ) the K_0-group of a Banach algebra ℬ. For a bounded linear operator A on a Hilbert space H, denote by {A}' the commutant algebra of A and by A^{(n)} the direct sum ⊕_1^n A of n copies of A. The following theorem is Theorem 1 in <cit.>.

Theorem. Let T = A_1^{(n_1)} ⊕ A_2^{(n_2)} ⊕ ⋯ ⊕ A_k^{(n_k)}, where A_1, A_2, …, A_k are strongly irreducible Cowen-Douglas operators, A_i and A_j are not similar whenever i ≠ j, and n_1, n_2, …, n_k are positive integers. Then K_0({T}') ≅ ℤ^k.

The following proposition, which is Proposition 2.22 in <cit.>, gives a characterization of strong irreducibility in FB_2(Ω).

Proposition. An operator

T = [ T_0  S ; 0  T_1 ]

in FB_2(Ω) is strongly irreducible if and only if S ∉ ran σ_{T_0,T_1}.

Lemma. Let H_α and H_β be two weighted Hardy spaces whose weight sequences satisfy condition (<ref>). Then:

(a) for any non-zero function h in Mult(H_α, H_β), the operator T_{α,β,h} is strongly irreducible;
(b) for any non-zero functions h and h̃ in Mult(H_α, H_β) such that h and h̃ have different zero points on 𝔻, the operators T_{α,β,h} and T_{α,β,h̃} are not similar.

Proof. (a) Since σ_{M_{α,z}^*, M_{β,z}^*}(M_{α,β,h}^*) = 0 and M_{α,β,h}^* ≠ 0, it follows from Proposition <ref> that M_{α,β,h}^* ∉ ran σ_{M_{α,z}^*, M_{β,z}^*}. Then T_{α,β,h} is strongly irreducible by Lemma <ref>.

(b) Clearly, this is a corollary of Proposition <ref>.

Proposition. Let H_α and H_β be two weighted Hardy spaces with property A for a positive integer n ≥ 2. Suppose H_α ⊆ H_β and the weight sequences α and β satisfy condition (<ref>). Denote T = T_{α,β,1}; then T^n is not similar to ⊕_1^n T.

Proof. Proposition <ref> shows that T^n ∼ T ⊕ (⊕_{j=1}^{n-1} T̃) = T^{(1)} ⊕ T̃^{(n-1)}, where T̃ = T_{α,β,z}. Lemma <ref> shows that T and T̃ are strongly irreducible Cowen-Douglas operators with T ≁ T̃. Assume T^n were similar to ⊕_1^n T = T^{(n)}; then

K_0({T^{(1)} ⊕ T̃^{(n-1)}}') ≅ K_0({T^n}') ≅ K_0({T^{(n)}}').

But from Theorem <ref> it follows that

K_0({T^{(1)} ⊕ T̃^{(n-1)}}') ≅ ℤ^2,  K_0({T^{(n)}}') ≅ ℤ.

So we get a contradiction, since ℤ^2 ≇ ℤ.

§ MAIN RESULTS

In this section, we complete the proof of the main theorem and give some concrete examples.

Proof of Theorem <ref>. Suppose h ≠ 0. Since the functions in Aut(𝔻) are the finite Blaschke products of order 1, T is weakly homogeneous. Since sup_{k ≥ 0} β(k)/α(k) < ∞, i.e. H_α ⊆ H_β, we have Hol(𝔻̄) ⊆ Mult(H_α) ⊆ Mult(H_α, H_β). Hence, according to Proposition <ref>, h is non-zero everywhere on 𝔻̄, which implies 1/h ∈ Hol(𝔻̄). Thus h, 1/h ∈ Mult(H_α), and from Remark <ref> we get T = T_{α,β,h} ∼ T_{α,β,1} =: T̂. Since B(z) = z^{n_0}, z ∈ 𝔻, is a finite Blaschke product of order n_0, it follows that T^{n_0} = B(T) ∼ ⊕_1^{n_0} T and hence T̂^{n_0} ∼ ⊕_1^{n_0} T̂. But this contradicts the conclusion of Proposition <ref> that T̂^{n_0} ≁ ⊕_1^{n_0} T̂.

Corollary. Let λ - μ ∈ [0, 1). Let

T = [ M_z^*  M_h^* ; 0  M_z^* ]

on H_(λ) ⊕ H_(μ), where h ∈ Hol(𝔻̄). If for any finite Blaschke product B, B(T) is similar to ⊕_1^n T, where n is the order of B, then h = 0. This corollary follows from Remarks <ref>, <ref>, <ref>, <ref> and Theorem <ref>.

Corollary. Let H_α and H_β be two weighted Hardy spaces of polynomial growth.
Suppose the weight sequences α and β satisfy

sup_{k ≥ 0} β(k)/α(k) < ∞  and  lim_{k → ∞} α(k)/(k β(k)) = 0.

Denote T = T_{α,β,h}, where h ∈ Hol(𝔻̄). If for any finite Blaschke product B, B(T) is similar to ⊕_1^n T, where n is the order of B, then h = 0. This corollary follows from Remarks <ref>, <ref> and Theorem <ref>.

For many weighted Hardy spaces H_α, the multiplier algebra Mult(H_α) has better properties, such as Mult(H_α) = H^∞(𝔻). In Theorem <ref>, if a stronger condition is required of the multiplier algebra of the weighted Hardy space, a stronger conclusion can be obtained.

Theorem. Let H_α and H_β be two Möbius-invariant weighted Hardy spaces with property A for some integer n_0 ≥ 2. Assume C(𝔻̄) ∩ Hol(𝔻) ⊆ Mult(H_α). Suppose the weight sequences α and β satisfy

sup_{k ≥ 0} β(k)/α(k) < ∞  and  lim_{k → ∞} α(k)/(k β(k)) = 0.

Denote T = T_{α,β,h}, where h ∈ C(𝔻̄) ∩ Hol(𝔻). If for any finite Blaschke product B, B(T) is similar to ⊕_1^n T, where n is the order of B, then h = 0. The proof is similar to that of Theorem <ref>.

Corollary. Let H^2(𝔻) be the classical Hardy space and L^2_a(𝔻) the classical Bergman space. Let

T = [ M_z^*  M_h^* ; 0  M_z^* ]

on H^2(𝔻) ⊕ L^2_a(𝔻), where h ∈ C(𝔻̄) ∩ Hol(𝔻). If for any finite Blaschke product B, B(T) is similar to ⊕_1^n T, where n is the order of B, then h = 0. As is well known, Mult(H^2(𝔻)) and Mult(L^2_a(𝔻)) are both H^∞(𝔻). Note that H^2(𝔻) is H_(0) and L^2_a(𝔻) is H_(-1/2); the corollary then follows from Remarks <ref>, <ref>, <ref>, <ref> and Theorem <ref>.
http://arxiv.org/abs/2312.16459v1
{ "authors": [ "Jianming Yang", "Kui Ji" ], "categories": [ "math.FA" ], "primary_category": "math.FA", "published": "20231227081229", "title": "On the similarity of powers of operators with flag structure" }
[email protected]
Centro S3, CNR-Istituto di Nanoscienze, I-41125 Modena, Italy

Centro S3, CNR-Istituto di Nanoscienze, I-41125 Modena, Italy

Strain represents a ubiquitous feature in semiconductor heterostructures, and can be engineered by different means in order to improve the properties of various devices, including advanced MOSFETs and spin-based qubits. However, its treatment within the envelope-function framework is well established only for the homogeneous case, thanks to the theory of Bir and Pikus. Here, we generalize such theory to the case of inhomogeneous strain. By fully accounting for the relativistic effects and metric aspects of the problem, we derive a complete envelope-function Hamiltonian, including the terms that depend on the first and second spatial derivatives of the strain tensor.

Envelope-function theory of inhomogeneous strain in semiconductor nanostructures

Filippo Troiani

January 14, 2024
================================================================================

*Introduction. Strain represents a common feature in semiconductor nanostructures. It develops spontaneously during the fabrication process, because of the lattice mismatch between heterogeneous layers, and can be induced by cooling the system to cryogenic temperatures, due to the presence of materials with different thermal-expansion coefficients <cit.>. As an uncontrolled or unaccounted-for phenomenon, strain can result in significant differences between the actual and the nominal properties of the nanostructure. On the other hand, strain can be intentionally engineered, in order to modulate the band structure and increase the carrier mobility, an approach that is actively pursued, e.g., with MOSFETs <cit.> or silicon nanowires <cit.>. These effects are particularly relevant in semiconductor-based implementations of quantum computing. Silicon and germanium quantum dots have emerged as promising hosts of electron- and hole-spin qubits <cit.>, whose properties can be strongly affected by strain. In particular, it has been shown that in these systems inhomogeneous strain can modify both the localization of the confined particle and its coupling to external fields, specifically through a modulation of the Rabi frequency <cit.> and of the g-factor <cit.>.

The tool of choice for simulating the properties of spin qubits in semiconductor quantum dots is the Luttinger-Kohn (LK) envelope-function formalism <cit.>. This applies to crystalline systems subjected to a spatially slowly varying external potential, such as the one generated by the metal gates used in electrostatically defined nanostructures. Describing the effects of strain on the electron and hole states requires an extension of LK's theory, which was developed by Bir and Pikus (BP) for the case where the strain tensor is small and homogeneous <cit.>. Even in these conditions, the absolute displacements of the ions (with respect to the unstrained crystal) may be comparable to or larger than the lattice constant. This makes the displacements unsuitable as an expansion parameter for the electron-nuclei potential, unlike in the theory of electron-phonon interactions <cit.>. BP's key idea was to introduce a new set of electron coordinates that makes the power expansion of the electron-nuclei potential in the strain tensor possible, thus enabling a perturbative calculation of the electron and hole states. In view of the above, a generalization of BP's theory to the case of inhomogeneous strain would be highly desirable, but is far from trivial.
In our understanding, the previous attempts that have been made in this direction are affected by significant shortcomings. These consist either in an incorrect treatment of the Schrödinger equation in the required set of curvilinear coordinates, resulting in the non-hermiticity of the particle Hamiltonian <cit.>, or in the use of a non-practical basis set within a non-relativistic treatment, which precludes from the outset an accurate description of spin-orbit interactions <cit.>.

In this Letter, we extend BP's theory to the case of inhomogeneous strain in a rigorous and comprehensive way. This is achieved by properly taking into account the modifications to the quantum-mechanical formalism that arise when the metric is non-Cartesian <cit.>, and by including relativistic corrections to the Schrödinger equation via a low-energy expansion of the covariant Dirac equation <cit.>. Our central result, applicable to a variety of semiconductor nanostructures in the presence of slowly varying inhomogeneous strain and external electrostatic potential, is a set of equations whose solution gives the envelope functions within a manifold of arbitrary dimension. From these we derive, as a relevant case, the strain-related 6-band Hamiltonian for the hole states in silicon and germanium, and more generally in crystals with diamond structure. For the sake of readability, the main logical steps that have been followed are reflected in the structure of the manuscript, which contains the main results. The complete derivations are reported in the Supplementary Material <cit.>, to which we provide detailed references at each step.

*Inhomogeneous strain. The first step consists in the introduction of a curvilinear coordinate system, which allows one to express the nuclei potential in the strained system as a perturbative expansion in the strain tensor. This approach, introduced by BP for the case of homogeneous strain and generalized to the inhomogeneous case by Zhang <cit.>, is recalled in the present paragraph for the reader's convenience. In the original Cartesian reference frame, let r_C define the electronic coordinates, while R_{i,0} and R_i ≡ R_{i,0} + u_i are the nuclei positions in the absence and in the presence of strain, respectively. Generalizing BP's approach to the case of inhomogeneous strain, one introduces a set of curvilinear coordinates r. These are related to the r_C by the equation <cit.>:

r_C^α = r^α + u^α(r),

where the Greek indices label vector components (α = 1, 2, 3). The continuous inhomogeneous displacement u(r) is assumed to be an invertible and differentiable function of r to all needed orders. It fulfils the conditions u(R_{i,0}) = u_i and |u(r)| ≪ |r|. The former condition allows for the expansion of the nuclei potential in powers of the strain in the transformed coordinate frame, while the latter condition follows from the assumption that the strain tensor is small everywhere.

Given the displacement functions u^α, the components of the strain tensor can be defined as follows:

ε^α_β(r) ≡ ∂_β u^α(r),

where ∂_β ≡ ∂/∂r^β.
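This definition translates directly into a numerical recipe: given a displacement field sampled on a grid, the strain tensor follows from finite-difference derivatives. The sketch below is ours; the Gaussian displacement profile and all numerical values are purely illustrative.

import numpy as np

# Illustrative displacement field u(r) on a uniform grid (arbitrary units)
n, L = 64, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

u = np.zeros((3, n, n, n))
u[0] = 0.05 * np.exp(-(X**2 + Y**2 + Z**2) / 4.0)   # u^x localized, u^y = u^z = 0

# Strain tensor eps[alpha, beta] = d u^alpha / d r^beta
eps = np.empty((3, 3, n, n, n))
for alpha in range(3):
    grads = np.gradient(u[alpha], h, h, h)           # derivatives along the 3 axes
    for beta in range(3):
        eps[alpha, beta] = grads[beta]

print("max |eps| =", np.abs(eps).max())              # small everywhere, as assumed

In device modelling the displacement (or directly the strain) would instead come from a finite-element minimization of the elastic energy, as mentioned in the Conclusions.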
Provided that the strain tensor is small and varies slowly over the scale of a unit cell, by applying the transformation in Eq. (<ref>) one can express the potential U_n generated by the nuclei in the strained system in the form:

U_n(r) ≈ U_{n,0}(r) + ε^α_β(r) U^β_α(r).

Here, U_{n,0} is the potential in the unstrained system and U^β_α is a strain-independent function that has the same periodicity as the unstrained lattice, while the product ε^α_β(r) U^β_α(r) is in general not periodic. Further details on the nuclei-potential relations are provided in Section I of the SM <cit.>. Here and in the remainder of this Letter, we use Einstein's summation convention on repeated Greek indices.

*The Schrödinger problem in curvilinear coordinates. The second step consists in deriving the general expression of the Schrödinger equation for an electron in curvilinear coordinates, with the inclusion of the spin-orbit term. The adoption of the curvilinear coordinates r implies the introduction of a nontrivial metric tensor, i.e. a g_{μν} ≠ -δ_{μν} <cit.>. As a result, the matrix element of a local operator Â between two arbitrary electron (spinorial) states is given by:

⟨Ψ|Â|Φ⟩ = ∫ dr √(-g(r)) Ψ†(r) · A(r) Φ(r),

where g(r) = det[g_{μν}(r)]. The definition of inner products can be obtained from the above equation simply by replacing the generic operator A with the identity. As a technical but crucial point, we note that, in a curvilinear coordinate system, the definition in Eq. (<ref>) should be used in evaluating the hermiticity of operators and the scalar products between states, rather than its Cartesian counterpart, corresponding to √(-g(r)) = 1 <cit.>.

In order to obtain the correct Hamiltonian in curvilinear coordinates and to include spin-orbit coupling, we generalize the covariant formulation of the Schrödinger equation given in Ref. <cit.>, which applies to a non-relativistic Hamiltonian. Starting from the covariant Dirac equation for the 4-component electron field in an electromagnetic potential <cit.>, which holds for arbitrary metric tensors, we take the non-relativistic limit and allow for a nontrivial metric in the spatial sector only. The result is a Schrödinger equation for 2-spinors that can be augmented with any order of relativistic corrections, while inheriting the covariance of the initial Dirac equation. In the absence of a magnetic field and up to first order in the relativistic corrections, the Hamiltonian can be written as H = H_kin + H_rel + U, where the kinetic term reads

H_kin = -(ħ^2/2m) [ (∇^2_C r^ν) ∂_ν - g^{μν} ∂_μ ∂_ν ],

U is a generic scalar potential, and the dominant component of the relativistic term is given by the spin-orbit Hamiltonian

H_so = -(iħ^2/4m^2c^2) (∂r^μ/∂r_C^α) (∂r^ν/∂r_C^β) σ^{αβ} (∂_μ U) ∂_ν.

Here, we adopt the notation ∇^2_C ≡ ∂^2/(∂r_C^α ∂r_C^α). Besides, σ^{αβ} = -ϵ^{αβγ} σ_γ, where ϵ^{αβγ} is the invariant Levi-Civita symbol and σ_γ are the Pauli matrices.

Relying on the generalized expression of the matrix elements and of the inner product [Eq. (<ref>)], one can write the matrix elements of the Hamiltonian between 2-component spinors in a manifestly Hermitian way, assuming that the wave functions either vanish at infinity or satisfy the Born-von Karman boundary conditions. Further details on the relativistic terms of the Hamiltonian and on the boundary conditions are provided in Sections II and III of the SM, respectively <cit.>.
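The structure of H_kin can be verified explicitly in one dimension with a computer-algebra sketch (our check, not part of the derivation): for x_C = x + u(x), the chain rule applied to the flat-space Laplacian must reproduce the two terms (∇^2_C x)∂_x - g^{xx}∂^2_x, with g^{xx} = -1/(1+u')^2.

import sympy as sp

x = sp.symbols("x")
u = sp.Function("u")(x)
f = sp.Function("f")(x)
J = 1 + u.diff(x)                     # Jacobian dx_C/dx in one dimension

# Chain rule: d/dx_C = (1/J) d/dx, applied twice to a test function f(x)
lap_C_f = sp.expand((1 / J) * sp.diff((1 / J) * sp.diff(f, x), x))

# The two coefficients appearing in the kinetic term
g_xx = -1 / J**2                      # metric component g^{xx} (1D, exact)
lap_C_x = (1 / J) * sp.diff(1 / J, x) # nabla_C^2 x
rhs = sp.expand(lap_C_x * sp.diff(f, x) - g_xx * sp.diff(f, x, 2))

print(sp.simplify(lap_C_f - rhs))     # prints 0: the two expressions agree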
*Curvilinear coordinates from the strain tensor. The equations reported in the previous paragraph provide a general framework, which can be applied to the present problem, where the non-Cartesian character of the coordinates results from the presence of inhomogeneous strain. In fact, the strain tensor determines the Jacobian

J^α_β ≡ ∂_β r_C^α = δ^α_β + ε^α_β,

as can be deduced from Eqs. (<ref>,<ref>). As to the metric tensor, to first order in the strain tensor, it can be expressed as

g^{μν} ≈ -δ^{μν} - ε^ν_μ - ε^μ_ν,

under the assumption that ‖ε(r)‖ ≪ 1, and thus J^{-1} ≈ 1 - ε. From this it also follows that √(-g) ≈ 1 + tr(ε).

The above relations provide the dependence of H_kin and H_so on the strain tensor, mediated by the inverse Jacobian, ∇^2_C and g^{μν}. To the first order in the strain tensor, the kinetic and spin-orbit components of the Hamiltonian can thus be written as follows:

H_kin = -(ħ^2/2m) ∂^2/(∂r^μ ∂r^μ) - (ħ^2/2m) ∂_μ (ε^ν_μ + ε^μ_ν) ∂_ν,

H_so = -(iħ^2/8m^2c^2) [ (∂_μ U_{n,0}) σ^{μν} ∂_ν - ∂_ν σ^{μν} (∂_μ U_{n,0}) ] - (iħ^2/8m^2c^2) ( Σ^ν ∂_ν - ∂_ν Σ^ν ).

The arrows above the differential operators specify whether these must be applied to the wave function on the left or right side of the Hamiltonian when evaluating its matrix elements, while

Σ^ν ≡ (∂_μ ε^α_β) U^β_α σ^{μν} + ε^α_β (∂_μ U^β_α) σ^{μν} - ε^ν_β (∂_α U_{n,0}) σ^{αβ} - ε^α_μ (∂_α U_{n,0}) σ^{μν}.

As to the potential induced by the nuclei, its expression is given by the sum of the unstrained contribution and of a perturbation that depends linearly on the strain tensor [Eq. (<ref>)]. The derivations of the above equations can be found in Sections II and III of the SM <cit.>.

*Generalized Luttinger-Kohn theory. In the LK solution scheme, the electron Hamiltonian matrix is derived in an orthonormal basis, and then reduced to a block structure by means of a suitable canonical transformation, which effectively separates the relevant manifold from the others, while perturbatively accounting for the inter-manifold coupling. In the present paragraph, this procedure is generalized in order to include the case of a curvilinear set of coordinates, with consistently defined orthonormality relations.

In order to identify a complete basis set, we initially consider the part of the Hamiltonian that is of order zero in the strain. Being this a periodic function of r, one can apply Bloch's theorem in order to derive its eigenfunctions ψ_{n,k} = e^{ik·r} u_{n,k}(r) and eigenvalues E_n(k). In view of an expansion around, e.g., the Γ point, it is convenient to introduce also the LK functions <cit.> χ_{n,k} ≡ e^{ik·r} u_{n,0}(r). In the curvilinear coordinates, neither the Bloch nor the LK functions form an orthonormal basis <cit.>, according to the inner product defined in Eq. (<ref>). However, the orthonormality relations can be recovered by suitably modifying the LK functions, according to the relations <cit.>:

χ̃_{n,k} ≡ χ_{n,k} / [-g(r)]^{1/4} ≈ [1 - (1/2) tr ε(r)] e^{ik·r} u_{n,0}(r).

Analogous modifications can be applied in order to recover the orthonormality relations for the Bloch functions.
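The role of the (-g)^{1/4} factor can be illustrated with a one-dimensional numerical check (ours, with an invented strain profile): under the weighted inner product, the modified plane waves are orthonormal while the plain ones are not.

import numpy as np

n, L = 4096, 50.0
r = np.linspace(0, L, n, endpoint=False)
dr = L / n
tr_eps = 0.02 * np.sin(2 * np.pi * r / L)      # illustrative trace of the strain
sqrt_mg = 1.0 + tr_eps                         # sqrt(-g) ~ 1 + tr(eps)

def inner(f, g):                               # <f|g> = int dr sqrt(-g) f* g
    return np.sum(sqrt_mg * np.conj(f) * g) * dr

k1, k2 = 2 * np.pi * 3 / L, 2 * np.pi * 4 / L
chi1 = np.exp(1j * k1 * r) / np.sqrt(L)        # plain LK-like plane waves ...
chi2 = np.exp(1j * k2 * r) / np.sqrt(L)
t1 = chi1 / np.sqrt(sqrt_mg)                   # ... and modified ones,
t2 = chi2 / np.sqrt(sqrt_mg)                   # divided by (-g)^(1/4)

print(abs(inner(chi1, chi2)))   # ~0.01: plain waves are no longer orthogonal
print(abs(inner(t1, t2)))       # ~0:    modified waves are orthogonal
print(inner(t1, t1).real)       # ~1:    and normalised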
In the modified LK basis, the matrix elements of the first-order strain-dependent component of the Hamiltonian read:

⟨χ̃_{n,k}|Ĥ_ε|χ̃_{n',k'}⟩ = -(ħ^2/4m) |k - k'|^2 ε^μ_μ(k - k') δ_{n,n'} + ε^μ_ν(k - k') ( 𝒟_μ^ν + k^α ℒ_{α;μ}^ν + k'^α ℒ_{α;μ}^{*ν} + k^α k'^β 𝒬_{αβ;μ}^ν )_{n,n'},

where ε^μ_ν(q) is the Fourier transform of the strain tensor. The last parentheses on the right include the deformation-potential terms; in particular, the non-relativistic and dominant terms of the k-independent quantities 𝒟, ℒ and 𝒬 are given respectively by

D^ν_μ ≡ U_μ^ν - (1/m) p_μ p_ν,
L_{α;μ}^ν ≡ -(ħ/2m) (δ^ν_α p_μ + δ^μ_α p_ν),
Q_{αβ;μ}^ν ≡ -(ħ^2/2m) (δ^μ_α δ^ν_β + δ^ν_α δ^μ_β),

where p_μ = -iħ∂_μ. Finally, the terms depending on the band indices n and n' in Eq. (<ref>) are defined by the relation

(A)_{n,n'} ≡ (2π)^3/Ω_cry ∫ dr u†_{n,0}(r) · A u_{n',0}(r).

The next step consists in decoupling the low-energy manifold of interest (n ≤ N) from the higher-energy states (n > N), using Löwdin partitioning <cit.>. This amounts to applying a canonical transformation to the Hamiltonian, ℋ̂ = e^{-Ŝ} Ĥ e^{Ŝ} ≡ Ĥ + ΔĤ, and to its eigenstates, |ϕ⟩ = e^{-Ŝ}|ψ⟩. The transformation is such that ℋ̂ is approximately block-diagonal in band space, and specifically displays negligible coupling terms between the relevant manifold and the remote bands. If inter-manifold couplings related to the deformation-potential terms can be neglected, then the correction ΔĤ for the N-dimensional low-energy manifold has the standard effective-mass form <cit.>. The relativistic components of 𝒟, ℒ and 𝒬, and further details on the manifold decoupling, are given in Sections V and VI of the SM, respectively <cit.>.

*Envelope functions. The last step consists in the derivation of the confined-particle states within the relevant N-dimensional manifold. The electron Hamiltonian includes an external confining potential U_ext, such as that induced by metallic gates in electrostatically defined quantum dots, which adds to the nuclear contribution: U = U_n + U_ext. The external potential is assumed to be a slowly varying function of r on the scale of the lattice constant, so as to justify an envelope-function approach. In particular, the eigenfunctions of ℋ can be written as:

ϕ(r) ≡ ⟨r|ϕ⟩ = [-g(r)]^{-1/4} ∑_{n ≤ N} F_n(r) u_{n,0}(r),

where the N quantities denoted as F_n(r) are the unknown envelope functions. These are determined by diagonalizing, in band and position spaces, the Hamiltonian ℋ̂^EF, whose strain-dependent term is given by:

ℋ̂^EF_ε = (ħ^2/4m) [∇^2 ε^μ_μ(r)] 1 + ε^μ_ν(r) [ 𝒟_μ^ν + (ℒ_{α;μ}^ν + ℒ_{α;μ}^{*ν}) k̂_α + 𝒬_{αβ;μ}^ν k̂_α k̂_β ] - i [∂_α ε^μ_ν(r)] ( ℒ_{α;μ}^ν + 𝒬_{αβ;μ}^ν k̂_β ),

with k̂_α ≡ -i∂_α. Here 𝒟, ℒ and 𝒬 are matrices in band space, with matrix elements defined according to Eq. (<ref>); 1 is the identity matrix. It should be emphasized that Eq. (<ref>) is the spinor wave function in the curvilinear reference frame r, while the wave function in the Cartesian frame is given by ϕ_C(r_C) = ϕ[r(r_C)], where r(r_C) is the inverse of the transformation in Eq. (<ref>). Further details on the derivation of the envelope-function Hamiltonian are given in Section VII of the SM <cit.>.
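In practice, the envelope functions are obtained numerically. The following minimal sketch (ours) illustrates the procedure for a one-band, one-dimensional analogue of ℋ̂^EF: a single parabolic band with an effective mass, a slowly varying confining potential, and a deformation-potential term a·ε(x). All numerical values are invented for illustration; the N-band case replaces the scalars by the matrices 𝒟, ℒ, 𝒬.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Grid and illustrative parameters (hbar = 1, arbitrary units)
n, L = 400, 60.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
m_eff = 0.5                                    # effective mass
a_def = 2.0                                    # deformation potential
eps = 0.01 * np.exp(-x**2 / 50.0)              # inhomogeneous strain profile
U_ext = 0.005 * x**2                           # slowly varying gate potential

# Kinetic term -(1 / 2 m) d^2/dx^2 by central finite differences
kin = diags([np.full(n, 1.0 / (m_eff * h**2)),
             np.full(n - 1, -0.5 / (m_eff * h**2)),
             np.full(n - 1, -0.5 / (m_eff * h**2))], [0, -1, 1])

H = kin + diags(U_ext + a_def * eps)           # strain enters as a local shift
E, F = eigsh(H.tocsc(), k=3, which="SA")       # lowest envelope states F_n(x)
print("lowest energies:", E)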
If all other spin-orbit contributions can be neglected, then: ℒ coincides with L, which vanishes at the band maximum; 𝒟 reduces to the non-relativistic deformation potentials D; 𝒬 equals Q, which consists of purely intraband (n=n') contributions. The strain-dependent component of the envelope-function Hamiltonian matrix [Eq. (<ref>)] thus reads: ℋ̂^ EF_ε ≈{ħ^2/4 m[ ∇^2 ε^μ_μ(r) ]- ħ^2/mε^ sym_αβ(r) k̂_αk̂_β- ħ^2/m[ k̂_αε^ sym_αβ(r) ]k̂_β}1+ε^ μ_ν(r)D_μ^ν . Here, ε^ sym_αβ≡1/2(ε^α_β+ε^β_α) is the symmetric part of the strain tensor, and D_μ^ν is the matrix of non-relativistic deformation potentials. Even though the strain-dependent relativistic corrections obtained in the present derivation [Eq. (<ref>)] have not been included in Eq. (<ref>), the resulting Hamiltonian ℋ̂^ EF_ε for holes in Si and Ge contains terms that have not been considered in the literature, and that cannot be inferred from the homogeneous case by replacing a constant with a position-dependent strain tensor. These terms can be either intra- (n=n') or inter-band (n ≠ n'), and depend on the first or on the second spatial derivatives of the strain tensor. Besides, we obtain an intraband term (the last one in the first line), which is non-zero also in the case of homogeneous strain, but has been neglected in previous analyses. *Conclusions. By combining solid-state theory and relativistic quantum mechanics in a non-Cartesian geometry, we have derived the envelope-function Hamiltonian for a general semiconductor nanostructure subjected to a small and slowly-varying inhomogeneous strain. Our theory requires, as an input, the strain tensor, which can be computed for each given device via finite-element methods based on the minimization of the elastic energy density <cit.>. Numerical calculations of the electron/hole states based on our theory are expected to provide an accurate modelling of the effects of inhomogeneous strain on quantum-dot spin qubits. In particular, they will allow one to engineer spin-orbit interactions and g-tensor modulations aimed at improving the qubits' manipulability. The authors acknowledge financial support from the European Commission through the project IQubits (H2020-FETOPEN-2018-2019-2020-01, Project No. 829005) and from the PNRR MUR Project No. PE0000023-NQSTI, and useful discussions with S. Pittalis. SUPPLEMENTARY MATERIAL In this Supplementary Material, we provide all relevant details related to the derivation of the results presented in the main text. In particular: in Section I we discuss preliminary notions about the definition of inhomogeneous strain, the expansion of the nuclei potential in the components of the strain tensor (in the curvilinear coordinate system), and the correspondence between operators and states represented in different coordinate systems; in Section II we derive the covariant Schrödinger equation in curvilinear coordinates starting from the covariant Dirac equation, in order to account for first-order relativistic corrections; in Section III we show how to write the matrix elements of the Hamiltonian in curvilinear coordinates in a manifestly Hermitian form; in Sections IV and V we derive the matrix elements of the Hamiltonian in curvilinear coordinates on the basis of modified Luttinger-Kohn states; in Section VI we detail the procedure for decoupling the manifold of interest from the remote bands; and in Section VII we derive the equation satisfied by the envelope functions. In the following, equations specified by just a number [e.g., Eq.
(1)] are those of the main text, while those specified by a number preceded by S [e.g., Eq. (S1)] are those of the present Supplementary Material. § I. PRELIMINARY NOTIONS§.§ Inhomogeneous strain In the literature, there are two definitions of the strain tensor. The first one, which we adopt in the present work, is given by Eq. (2) and is consistent with that used in other envelope-function treatments <cit.>. The strain tensor reported in the second definition is the symmetrized version of that given in the first one: ε_α, β^ sym(r) ≡1/2[ ∂ u^α(r)/∂ r^β + ∂ u^β(r)/∂ r^α] = 1/2[ ε^α_β(r) + ε^β_α(r) ].This is the quantity that enters the expression of the infinitesimal variation in the distance between two points, in going from an unstrained to a strained system <cit.>. The two definitions do not necessarily coincide, since in general ε^α_β(r) ≠ε_α^β(r). The formal solution of Eq. (2),as noticed also in Ref. <cit.>, isu^α(r) = u^α(r_0) + ∫_r_0^rε^α_β(r') d r'^β ,where the line integral is performed over any path connecting r_0 to r. In the specific case of an homogeneous strain, considered by Bir and Pikus <cit.>, this reduces tou^α(r) = ε^α_β r^β .§.§ Expansion of the nuclei potential in the strain tensor In the absence of strain, the potential generated by the nuclei is periodic, and is given byU_ n, 0(r_ C)= ∑_i U_1 n(r_ C- R_i, 0) . The same quantity can be rewritten asU_ n, 0(r_ C) = ∑_iU_1n(r_ C- R_i, 0)Θ[ r_ C∈𝒞_0( R_i, 0) ], where U_1n is a pseudopotential, and the contribution due to the nucleus at R_i, 0 goes to zero outside the unit cell 𝒞_0( R_i, 0), centered on the same nucleus. This amounts to a mere re-summation of contributions due to all nuclei, and can be done in the strained system as well. In the rigid-ion approximation, with reference to the strained unit cells 𝒞[ R_i, 0 +u( R_i, 0 ) ], the nuclei potential is written asU_ n; C(r_ C)= ∑_iU_1n[r_ C- R_i, 0 -u( R_i, 0 ) ]Θ{r_ C∈𝒞[R_i, 0 +u( R_i, 0 ) ] } . After the coordinate transformation, setting U_ n; C(r_ C) = U_ n; C[f(r)] ≡ U_ n(r), one obtainsU_ n(r)= ∑_ i U_1n[ r +u(r)- R_i, 0 -u( R_i, 0 )]Θ{f(r)∈𝒞[ f(R_i, 0) ] } . Since the strain, besides being small, varies slowly within a unit cell, the condition imposed by the function Θ is satisfied only when r is close to R_i, 0 and, therefore, u(r)-u( R_i, 0 ) is small. Therefore, the following approximation holds:u^ν(r)-u^ν( R_i, 0 )≈∂ u^ν( r )/∂ r^μ( r^μ - R^μ_i, 0)≡ε^ν_μ( r) ( r^μ - R^μ_i, 0).From this it follows thatU_1n[ r +u(r)- R_i, 0 -u( R_i, 0 )]≈U_1n{[ δ^α_β+ ε^α_β( r) ] ( r^β - R^β_i, 0) }≈U_1n( r - R_i, 0)+∂U_1n( r - R_i, 0)/∂( r^α - R^α_i, 0) ε^α_β( r)( r^β - R^β_i, 0) .Summing over the atoms, one obtains the total potential as in Eq. (3),whereU^β_α(r ) ≡∑_i∂U_1n( r - R_i, 0)/∂( r^α - R^α_i, 0) ( r^β - R^β_i, 0)has the same periodicity as the unstrained lattice.§.§ Concepts related to point transformationsThe Schrödinger equation,Ĥ| ψ> = E | ψ>,can be written in position representation after introducing a spatial coordinate frame. The purpose of this Section is to compare the representations corresponding to two different spatial coordinate frames ℝ and ℝ', connected by a spatial transformation. In the first (unprimed) frame, positions are measured by the coordinates r, and a position eigenstate is defined byr̂| ℝ ; r > = r| ℝ ; r > ,where r̂ is the position operator in ℝ. 
The decomposition of the identity in coordinate space is <cit.>∫ d Ω| ℝ ; r> < ℝ ; r| = 1̂ ,where d Ω≡ dr√(-g(r)) is the elementary volume in coordinate space, g(r) being the determinant of the metric tensor. The orthonormality relation between the position eigenstates in the ℝ reference frame is< ℝ ; r_1 | ℝ ; r_2 > ≡δ(r_1, r_2) ≡1/√(- g(r_1)) δ(r_1 - r_2).Multiplying Eq. (<ref>) by < ℝ ; r|, one obtains the usual coordinate-space Schrödinger problem:H(r)ψ(r) = E ψ(r),whereψ(r) ≡< ℝ ; r| ψ> , < ℝ ; r| Ĥ| ψ> ≡ H(r)ψ(r) , < ℝ ; r_1 | Ĥ| ℝ ; r_2 > ≡δ_ℝ(r_1, r_2)H(r_2),and the eigenstate of the Hamiltonian in Hilbert space representation is| ψ> = ∫ d Ω | ℝ ; r> ψ(r). To solve the same Schrödinger problem as in Eq. (<ref>), one can equivalently choose the different coordinate frame ℝ', with coordinates r', connected to the previous representation via r = f(r'). It should be noted that this is a relation between the position eigenvalues measured in different reference frames on the same position eigenstate. This means that the following relations between the position eigenstates in the two reference frames hold:| ℝ ; f(r') > = | ℝ' ; r' > , | ℝ ; r> = | ℝ' ; f^-1(r)> .In other words, if a position measurement on a position eigenstate gives the result r' in the reference frame ℝ', then the result of the position measurement on the same state gives the result f(r') in the reference frame ℝ:r̂ | ℝ ; f(r') > = f(r') | ℝ ; f(r') >, r̂'| ℝ' ; r' > = r' | ℝ' ; r' > ,where r̂' is the position operator in ℝ'.In the primed representation, the identity decomposition is∫ d Ω' | ℝ' ; r' > < ℝ' ; r' | = 1̂ , < ℝ' ; r'_1 | ℝ' ; r'_2 > = δ'(r'_1, r'_2).The Schrödinger problem is represented in ℝ' asH'(r') ψ'(r') = E ψ'(r'),with | ψ> = ∫ d Ω' | ℝ' ; r' > ψ'(r'). One should now derive the relation between ψ(r) and ψ'(r'), and that between H(r) and H'(r').The most direct way to do so is to consider the scalar product between any two states | Φ_1 > and | Φ_2 >, which must be the same independently of the reference frame where it is evaluated:< Φ_1 | Φ_2 > = ∫ d Ω[Φ_1(r)]^*Φ_2(r) = ∫ d Ω' [Φ_1'(r')]^*Φ'_2(r') .We now change variables in the first equality according to r = f(r'), using the transformation property of the determinant of the metric tensor,√(- g'(r')) = | J(r') | √(-g[f(r')]) ,where J(r') is the determinant of the Jacobian matrix. Therefore, under this change of coordinates it holds that∫ d Ω {Φ_1(r)}^*Φ_2(r)=∫ d Ω'{Φ_1[f(r')]}^*Φ_2[f(r')]= ∫ d Ω' [Φ_1'(r')]^*Φ'_2(r').Since this must hold for any couple of states | Φ_1 > and | Φ_2 >, one concludes thatΦ'(r') = Φ[f(r')]ifr = f(r').Therefore,ψ'(r') = ψ[f(r')]ifr = f(r'),which provides the relation between the wave functions in the two reference frames.In order to derive an analogous relation between the Hamiltonians, let us consider again Eq. (<ref>). Multiplying both sides of the first equation by < ℝ; r|, and using the orthonormality of the position eigenstates in ℝ, one obtains< ℝ; r| ℝ' ; r' > = δ[r, f(r')] =δ[r - f(r')]/√(-g(r)) .Analogously, multiplying both sides of the second equation by < ℝ'; r' |, and using the orthonormality of the position eigenstates in ℝ', we obtain< ℝ'; r' | ℝ ; r> = δ'[ r' , f^-1(r)] = δ[ r' -f^-1(r)]/√(- g'(r')) .The two transformation laws between position eigenstates in the two reference frames are equivalent, due to the composition law of a Dirac delta with a function. Equipped with Eqs. 
(<ref>) and (<ref>), it is now possible to compare the quantitiesH(r_1 , r_2 ) ≡< ℝ ; r_1 | Ĥ| ℝ ; r_2 >and H'(r'_1 , r'_2 ) ≡< ℝ' ; r'_1 | Ĥ| ℝ' ; r'_2 >.Inserting twice the decomposition of the identity in the original reference frame in Eq. (<ref>) and using the scalar product relations derived above, one obtainsH'(r'_1 , r'_2 ) = ∫ d Ω_1 ∫ d Ω_2 < ℝ' ; r'_1 | ℝ ; r_1 >H(r_1, r_2)< ℝ ; r_2| ℝ' ; r'_2 >= H[f(r'_1), f(r'_2)],which provides the relation between the Hamiltonians in the Cartesian and in the curvilinear coordinates.§ II. DERIVATION OF THE COVARIANT SCHRÖDINGER EQUATION, WITH RELATIVISTIC CORRECTIONS In this Section, Greek indices refer to the components of vectors and tensors in a curvilinear reference frame, while Latin indices refer to such components in a Cartesian (rectilinear) reference frame, which coincides with the laboratory reference frame of the main text (note that in the main text Greek indices are used for both frames). The 4-component space-time coordinate is denoted as x in the curvilinear frame, and as x_ C in the Cartesian frame. The metric tensor in the x frame is denoted as g_μν, while the metric tensor in the x_ C frame is η_ab =diag(1, -1, -1, -1). The relation between the two metric tensors is <cit.>g_μν = e^a_μe^b_ν η_ab ,where the tetrad fields e^μ_a satisfy e^μ_a e^a_ν = δ^μ_ν ,e^μ_a e^b_μ = δ^b_a .In the case at hand, since the two coordinate frames are connected by a point transformation, one hase^a_μ = ∂ x^a_ C/∂ x^μ , e_a^μ = ∂ x^μ/∂ x^a_ C ,i.e., the tetrad vectors coincide with the Jacobian matrix of the transformation between the two coordinate systems. This ensures the conservation of the infinitesimal arc length squared,η_abdx^a_ Cdx^b_ C = g_μνdx^μdx^ν .§.§ Covariant Dirac equation We start from the covariant Dirac equation <cit.> for the 4-component electron field Ψ,γ^μ(iħ∇_μ - e A_μ) Ψ - m c Ψ = 0,where A_μ is the 4-potential of the electromagnetic field, and ∇_μ is the covariant derivative; the latter acts on the 4-spinor as follows:∇_μΨ≡∂_μΨ + Γ_μΨ ,where ∂_μ is the ordinary derivative, andΓ_μ≡1/2η_a c e^c_ν( ∂_μ e^ν_b + e^ρ_b Γ^ν_ρμ) G^ab ,G^ab = 1/4( γ^a γ^b - γ^b γ^a ).The coordinate-invariant Dirac matrices satisfyγ^a γ^b + γ^b γ^a = 2 η^ab ,while the spacetime-dependent Dirac matrices (entering the covariant Dirac equation) are defined as γ^μ = e^μ_a γ^a; thus, they satisfyγ^μγ^ν + γ^νγ^μ = 2 g^μν . In terms of the tetrads, the Christoffel symbols are written as:Γ^ν_ρμ ≡1/2 g^νβ(∂_μ g_βρ + ∂_ρ g_βμ- ∂_β g_ρμ)= 1/2e^ν_i (∂_μe^i_ρ +∂_ρe^i_μ) + 1/2η^klη_ij e^ν_k e^β_l [ e^i_ρ(∂_μ e^j_β - ∂_βe^j_μ)+e^i_μ(∂_ρ e^j_β-∂_β e^j_ρ) ] .In the case at hand, since Eq. (<ref>) holds, one has∂_μ e^i_ρ = ∂^2 x_ C^i/∂ x^μ∂ x^ρ = ∂_ρ e^i_μ ,and Eq. (<ref>) simplifies asΓ^ν_ρμ =e^ν_i ( ∂_ρe^i_μ) .The combinations needed for the covariant derivative are 1/2η_ac e^c_ν e^ρ_b Γ^ν_ρμ G^ab = 1/2η_ac e^ρ_b ( ∂_ρe^c_μ) G^ab ,andγ^μΓ_μ=1/2η_a c[e^c_ν( ∂_d e^ν_b )+ e^μ_d( ∂_be^c_μ)] γ^d G^ab=1/2η_a c[e^c_μ( ∂_d e^μ_b )- ( ∂_b e^μ_d)e^c_μ] γ^d G^ab= 0,where we have used the fact that ∂_μ( e^c_ν e^ν_b ) = 0, and thus e^c_ν∂_μ e^ν_b = - e^ν_b ∂_μ e^c_ν. Therefore, the covariant Dirac equation, when the tetrad coincides with the Jacobian matrix, reduces toiħ e^μ_n γ^n∂_μΨ - e e^μ_n A_μγ^nΨ - m c Ψ = 0. 
In the following, we choose the coordinate-invariant Dirac matrices in the Dirac form, γ^0 = (1_2× 20_2× 20_2× 2-1_2× 2); γ^a = (0_2× 2 σ^a - σ^a 0_2× 2),whereσ^a, with a ∈{ 1, 2, 3 }, are the Pauli matrices.§.§ Restriction to spatial-only transformation In the problem at hand, the coordinate transformation only involves the spatial coordinates (μ∈{ 1, 2, 3 }), affecting the corresponding sector of the metric tensor, while the time coordinate (μ = 0) is untouched. Therefore, we specialize our treatment to the cases where e^0_μ = δ^0_μ and e^a_0 = δ^a_0. All tetrads are independent of time. It is then convenient to write the Dirac equation in a way that explicitly separates time from the spatial coordinates. Equation (<ref>) can thus be transformed into ( 1 0 0 -1 )iħ∂_t Ψ=( 0 c σ^μ P_μ- c σ^μ P_μ0) Ψ + ( m c^2 + e V 0 0 m c^2 - e V )Ψ ,where x_0 = c t, ∂_0 = 1/c∂_t, A_0 = 1/c V, P_μ≡ -iħ∂_μ+ e A_μ, and the coordinate-dependent Pauli matrices are defined asσ^μ≡ e^μ_n σ^n. This is rephrased as an eigenproblem, by settingΨ≡ e^-i E_ D t / ħ( Ξ Φ),where E_ D is the (Dirac) energy eigenvalue, while Ξ and Φ are two-component spinors depending only on the spatial coordinates (eigenstates). This results in the following coupled equations for the 2-component spinor eigenstates:( E_ D - mc^2 - eV ) Ξ=c σ^μ P_μΦ , ( E_ D + mc^2 - eV ) Φ =c σ^μ P_μΞ .The spinor Φ is obtained as a function of Ξ from the second equation; substituting the resulting expression in the first equation, one obtains an eigenvalue equation for the spinor Ξ alone: [ c σ^μ P_μ1/2 mc^2 + E - eV c σ^ν P_ν + eV ] Ξ = E Ξ .where the Schrödinger eigenenergy is E ≡ E_ D - mc^2. This equation is exact.§.§ Schrödinger equation with relativistic corrections, in curvilinear coordinates In order to recover the Schrödinger equation and the lowest-order relativistic corrections, one must perform an approximation based on the assumption that 2 mc^2 ≫ |E - eV |, namely,1/2 mc^2 + E - eV = 1/2 mc^2( 1 + E - eV /2 mc^2)^-1≈1/2 mc^2 - E - eV /4 m^2 c^4 .Inserting this into Eq. (<ref>), one obtains[ ( σ^μ P_μ)^2 /2 m -σ^μ P_μE - eV/4 m^2 c^2σ^ν P_ν+ eV ] Ξ≈EΞ .To remove E from the left-hand side of the above equation (to lowest order in v/c), we notice that( E - eV ) σ^ν P_νΞ=σ^ν P_ν( E - eV )Ξ-iħσ^ν( ∂_νeV ) Ξ≈( σ^ν P_ν)^3 /2mΞ-iħσ^ν( ∂_νeV ) Ξ .Equation (<ref>) can thus be written as an eigenvalue equationHΞ≈ EΞ ,where the Hamiltonian is given byH =(σ^μ P_μ)^2 /2 m+ eV_nonrelativistic- ( σ^μ P_μ)^4/8 m^3 c^2+iħ( σ^μ P_μ)σ^ν( ∂_νeV ) /4 m^2 c^2 _relativistic corrections≡ H_ nonrel + H_ rel .§.§ Hamiltonian in curvilinear coordinates Finally, by making the electromagnetic potentials explicit, and using the properties of the Pauli matrices, one can rewrite the terms H_ nonrel and H_ rel appearing in Eq. (<ref>) as follows:H_ nonrel= 1/2 m{- ħ^2 ( - g^μν)∂_μ∂_ν- ħ^2 ( ∇^2_ C x^ν) ∂_ν+ iħ e[ e^μ_a ( ∂_μ e^a_ν)A^ν +( ∂_μ A^μ)+ 2A^μ∂_μ]-e^2 A_μ A^μ - e ħ σ_cB^c }+ eV , H_ rel= -1 /8 m^3 c^2 { - ħ^2 ( - g^μν)∂_μ∂_ν- ħ^2 ( ∇^2_ C x^ν) ∂_ν+ iħ e[ e^μ_a ( ∂_μ e^a_ν)A^ν +( ∂_μ A^μ)+ 2A^μ∂_μ]-e^2 A_μ A^μ - e ħ σ_cB^c }^2+ ħ^2 /4 m^2 c^2 ( ∇^2_ C eV ) -iħ/4 m^2 c^2 ( ∂_μ eV )g^μνP_ν + ħ/4 m^2 c^2 (e^μ_a e^ν_b σ^ab) ( ∂_μ eV )P_ν . In the above expressions, σ_a = η_abσ^b, where the implicit summation on the Latin indices only involves the three spatial coordinates. 
Besides, we have introduced σ^ab≡ - ϵ^abcσ_c and the components of the magnetic field with respect to the rectilinear reference frame, B^c ≡ϵ^abc(∂_aA_b); here, ϵ^abc is the totally antisymmetric Levi-Civita symbol, which, unlike the Levi-Civita tensor, takes the same values in all reference frames. Finally, we have used the symbol ∇^2_ C, which can be converted to curvilinear coordinates through:∇^2_ C= ∂_a ∂_a = - ∂_a ∂^a = - e^α_a ∂_α e_β^a ∂^β = - e^α_a ( ∂_α e_β^a ) ∂^β - ∂_α∂^α= - e^α_a ( ∂_α e_β^a ) g^βγ∂_γ - ( ∂_α g^αβ) ∂_β-g^αβ∂_α∂_β .Alternatively, the relevant derivatives can be computed in the rectilinear coordinate frame first, and then the resulting expressions can be converted in curvilinear coordinates.In the main text, we consider the case where there is no magnetic field (A^μ = 0), and only spin-orbit coupling is retained among the relativistic corrections. In this case, the Hamiltonian simplifies to the sum of the kinetic term given in Eq. (5),the scalar potential U ≡ eV, and the spin-orbit term given in Eq. (6).For notational convenience and self-containedness, in the main text we have written the tetrads explicitly in terms of the inverse Jacobian matrix, and we have used Greek indices also for the components of the Cartesian coordinates; since such coordinates themselves are indicated explicitly by means of the subscript C, there is no ambiguity. § III. MATRIX ELEMENTS OF THE HAMILTONIAN IN A MANIFESTLY HERMITIAN FORM Using the definition of the matrix elements in the presence of an arbitrary metric tensor [Eq. (4)],one can write the matrix elements of the Hamiltonian between 2-component spinors | Ξ_n > in a manifestly Hermitian way, assuming that the wave functions either vanish at infinity, or satisfy the Born-von Karman boundary conditions (BvKBCs). The three terms of the Hamiltonian [see Eqs. (5)and (6)]give the following contributions:< Ξ_n | Ĥ_ kin| Ξ_m > = - ħ^2/2 m∫ d r√(-g) g^μν∂Ξ^†_n/∂ r^μ·∂Ξ_m/∂ r^ν , < Ξ_n | Û| Ξ_m > = ∫ d r√(-g)UΞ^†_n ·Ξ_m , < Ξ_n | Ĥ_ so| Ξ_m > = -iħ^2 /8 m^2 c^2 ∫ d r√(- g) ∂ r^μ/∂ r^α_ C∂ r^ν/∂ r^β_ C( ∂_μ U ) [ Ξ^†_n ·σ^αβ·( ∂_νΞ_m ) -( ∂_νΞ^†_n )·σ^αβ·Ξ_m].These forms are obtained by applying partial integration, using the boundary conditions, and exploiting the properties of the metric tensor. As an example, we show the explicit derivation of the first contribution. One starts from< Ξ_n | Ĥ_ kin| Ξ_m > = - ħ^2/2 m∫ d r√(-g) Ξ^†_n ·[( ∇_ C^2 r^ν)- g^μν∂/∂ r^μ] ∂Ξ_m /∂ r^ν ,which follows from Eq. (5).Now, partial integration with respect to r^μ is applied to the second term of this integral; the boundary term vanishes due to the boundary conditions, and the remaining term is < Ξ_n | Ĥ_ kin| Ξ_m > = - ħ^2/2 m∫ d r[ √(-g)( ∇_ C^2 r^ν)Ξ^†_n ∂( g^μν√(-g) Ξ^†_n) /∂ r^μ] ·∂Ξ_m /∂ r^ν .The derivative with respect to r^μ at the second term inside the square brackets is carried on by applying the following identities:∂_μ(-g^μν) = - η^ab e^c_μ (∂_c e^μ_a) e^ν_b + ( ∇^2_ C r^ν), ∂_μ√(-g) = √(-g) e^α_a (∂_μ e^a_α),g^μν e^α_a (∂_μ e^a_α) + η^ab (∂_c e^μ_a) e^c_μ e^ν_b = 0,which follow directly from the definitions of the metric tensor and of the tetrads, and from the fact that ∂_a e^α_c = ∂_c e^α_a in the case at hand, because the tetrads are defined via a point transformation from a Cartesian reference frame. After this, one directly obtains Eq. (<ref>). Equation (<ref>) follows from a similar derivation.The symmetry of the matrix elements in Eqs. 
(S<ref>-S<ref>) can be made explicit within the operators themselves, using the following notation:Ĥ_ kin = - ħ^2/2 m∂_μ g^μν∂_ν , Ĥ_ so= -iħ^2 /8 m^2 c^2 [ ∂ r^μ/∂ r^α_ C∂ r^ν/∂ r^β_ C( ∂_μ U ) σ^αβ∂_ν - ∂_ν∂ r^μ/∂ r^α_ C∂ r^ν/∂ r^β_ C( ∂_μ U ) σ^αβ] ,where the arrows above derivative operators indicate the direction along which the derivative operators act, when evaluating a matrix element; it is intended that, within this convention, the derivatives do not act on the metric factor √(-g) inside the integrals. These two definitions are equivalent to Eqs. (5)and (6);the same convention is used in Eqs. (8)and (9),which are the expansions of Eqs. (<ref>) and (<ref>), respectively, to the first order in the strain tensor. § IV. SECULAR EQUATION UP TO FIRST ORDER IN THE STRAIN TENSORWe now derive the expressions of the matrix elements of the Hamiltonian up to first order in the strain components on the χ basis, i.e. we derive Eq. (12)by evaluating the matrix elements of Eqs. (8) and (9).By construction, the χ are orthonormal, i.e., they satisfy< χ_n, k| χ_n', k'>= ∫ d r√(- g(r) ) χ^†_n, k(r) ·χ_n', k'(r)= δ_n, n' δ(k - k').with the representation of the scalar product in a curvilinear reference frame, given by Eq. (4). It is convenient to introduce the quantity(Ψ| Â| Φ) ≡∫ d r Ψ^†( r ) · A(r)Φ(r),which, as mentioned in the main text, is not a scalar product in the curvilinear reference frame, which is given instead by Eq. (4).The quantity in Eq. (<ref>), nevertheless, will appear in the following derivation, due to the fact that the accuracy of the theory up to the first order in the strain tensor also requires the expansion of √(-g(r)) in Eq. (4).For the purposes of the present derivation, it is convenient to rewrite the Hamiltonian H = H_ kin + U_ n + H_ so+ U_ ext as H = H_0 + H_1 + U_ ext, whereH_0 ≡ H_ kin,0 + U_ n,0 + H_ so,0 = - ħ^2 /2 m∂^2/∂ r^μ∂ r^μ + U_ n,0-iħ^2 /8 m^2 c^2 [ ( ∂_μ U_ n,0)σ^μν∂_ν -∂_νσ^μν( ∂_μ U_ n,0) ]collects the terms which are independent of the strain tensor, andH_1 ≡ H_ kin,1 + U_ n,1 + H_ so,1 =- ħ^2/2 m∂_μ( ε^ν_μ+ ε^μ_ν)∂_ν+ε^α_βU^β_α_ H_1,nonrel -iħ^2 /8 m^2 c^2 ( Σ^ν∂_ν -∂_νΣ^ν) _ H_1,socollects the terms which are linear in the strain tensor; U_ ext is left untouched. Using the equation∂χ_n', k'/∂ r^ν≈∂χ_n', k'/∂ r^ν - 1/2∂ε^γ_γ/∂ r^νχ_n', k' - 1/2ε^γ_γ∂χ_n', k'/∂ r^ν ,and keeping the BvKBCs into account, one obtains:< χ_n, k|Ĥ_0| χ_n', k'> = ∫ d r√(- g) χ^†_n, kH_0 χ_n', k' =ħ^2 /2 m∫ d r√(- g)( ∂χ^†_n, k/∂ r^μ·∂χ_n', k'/∂ r^μ) + (χ_n, k|Û_ n , 0| χ_n', k')-iħ^2 /8 m^2 c^2 ∫ d r√(- g)( ∂_μ U_ n, 0) ( χ^†_n, k·σ^μν·∂χ_n', k'/∂ r^ν- ∂χ^†_n, k/∂ r^ν·σ^μν·χ_n', k')≈(χ_n, k|Ĥ_0 | χ_n', k')- ħ^2 /4 m∫ d r∂ε^γ_γ/∂ r^μ[ ∂/∂ r^μ( χ^†_n, k·χ_n', k') ] =(χ_n, k|Ĥ_0 | χ_n', k')+ ħ^2 /4 m∫ d r∂^2 ε^γ_γ/∂ r^μ∂ r^μ ( χ^†_n, k·χ_n', k') ,accurately to the first order in the strain tensor. Then, the equation< χ_n, k|Û_ ext| χ_n', k'> = ( χ_n, k|Û_ ext| χ_n', k')holds exactly, because Û_ ext is a function of position only, and< χ_n, k|Ĥ_1| χ_n', k'> ≈( χ_n, k|Ĥ_1| χ_n', k'),because Ĥ_1 is of order 1 in the components of the strain tensor. As a result, one has that:< χ_n, k|Ĥ_0| χ_n', k'> ≈(χ_n, k|Ĥ_0 | χ_n', k') + ( χ_n, k|Û_ ext| χ_n', k')+ ħ^2 /4 m∫ d r∂^2 ε^γ_γ/∂ r^μ∂ r^μ ( χ^†_n, k·χ_n', k') + ( χ_n, k|Ĥ_1| χ_n', k'). § V. EVALUATION OF THE MATRIX ELEMENTS NEEDED FOR THE SECULAR EQUATION We now evaluate the four terms in the left-hand side of Eq. (<ref>) one by one. 
In the following, these definitions will be used:u^σ_n, 0(r) ≡∑_G e^ iG·ru^σ_n, 0(G),with the normalization(2 π)^3 ∑_σ∑_Gu_n, 0^σ *(G)u_n', 0^σ(G) = δ_n, n' ,andU_ n, 0(r) ≡∑_G e^ iG·rU_ n, 0(G) . It is also convenient to introduce the Fourier transform of the strain tensor (on the infinite q domain),ε^α_β(r) ≡∫ d q ε^ α_β(q) e^ iq·r⇔ε^ α_β(q) ≡1/(2π)^3∫ d r ε^α_β(r) e^- iq·r ,where ε^ α_β(- q) = [ ε^ α_β(q) ]^*, because the strain tensor is real. Homogeneous strain is obtained as a particular case, by setting ε^ν_λ(r) = ε^ν_λ⇔ε^ ν_λ(q) = ε^ν_λ δ(q) .§.§ Term due to Ĥ_0 The first term in the right-hand side of Eq. (<ref>) is evaluated using standard techniques <cit.>, since it is formally the same as the one arising in k·p theories for unstrained systems in Cartesian coordinates:( χ_n, k|Ĥ_0| χ_n', k') = ∫ d r χ^†_n, k(r){- ħ^2 ∇^2_r/2 m+U_ n, 0(r )-i ħ^2/4 m^2 c^2 σ·[[ ∇_r U_ n, 0(r)] ×∇_r] }χ_n', k'(r)= δ(k - k') {δ_n, n' [ E_n(0) + ħ^2 k^2 /2 m] + ħ/mk·π_n, n'} .Here, E_n(0) is the energy eigenvalue of band n at the expansion point (here taken to be Γ≡0), and π_n, n' is defined according to the following equation:δ( k - k' ) π_n, n'≡∫ d re^-i( k - k' ) ·r u^†_n, 0(r)π_r u_n', 0(r) ,where π_r≡-i ħ∇_r +ħσ×[ ∇_r U_ n, 0(r)] /4 m c^2≡p_r + Δp^ rel_ris an operator in position space and a matrix in spin space. For practical use, Eq. (<ref>) can be rewritten asπ_n, n'=(2 π)^3/Ω_ cry∫ d ru^†_n, 0(r)π_r u_n', 0(r) ,where Ω_ cry is the crystal (i.e. the normalization) volume. §.§ Term due to Û_ extAlso the second term in the right-hand side of Eq. (<ref>) is evaluated using standard techniques <cit.>. It should be kept in mind, however, that the external potential must be expressed in curvilinear coordinates r. Therefore, while its expression in Cartesian coordinates does not depend on ε, its expression in r coordinates acquires a dependence on ε via the coordinate transformation r_ C→r. In general, the external potential does not admit an expansion in powers of ε. This is not a problem, as it can be incorporated non-perturbatively into the Luttinger-Kohn equations, as long as it retains a slow spatial dependence with respect to the scale of a unit cell. Therefore,( χ_n, k|Û_ ext| χ_n', k')= ∫ d r χ^†_n, k(r ) U_ ext(r)χ_n', k'(r) ≡(2 π)^3 ∑_σ∑_G∑_G'u_n, 0^σ *(G)u_n', 0^σ(G')U_ ext( k - k' + G - G' ),whereU_ ext(k) ≡1/(2 π)^3∫ d re^-ik·r U_ ext(r).Assuming that the external potential is smooth over a lattice unit cell, it is posited that U_ ext in Eq. (<ref>) is not zero only if its argument lies inside the first Brillouin zone. Since k and k' are both inside the first Brillouin zone, the quantity k - k' + G - G' satisfies the requirement only if G = G' or if G - G' is a nearest-neighbour of the origin in the reciprocal lattice. In the latter case, vectors k and k' can satisfy the constraint if they are close to opposite sides of the first Brillouin zone. Nevertheless, this case is usually neglected, and only G = G' is considered. Under this approximation, the standard result is obtained: ( χ_n, k|Û_ ext| χ_n', k') ≈δ_n, n' U_ ext( k - k').Analogous approximations will be adopted in the remainder of the derivation, while dealing with the inhomogeneous-strain terms. §.§ Orthogonality correction The third term of the right-hand side of Eq. (<ref>) is the strain-dependent correction that ensures the orthonormality of the basis set. The corresponding Hamiltonian term is a slowly-varying function of position, formally analogous to an additional external potential. 
Therefore, the treatment of this term is analogous to that of Û_ ext in the previous Subsection. Applying the same approximation, one obtains:( χ_n, k| ħ^2/4m( ∇^2trε̂) | χ_n', k')≈ - δ_n, n'ħ^2/4 m| k - k'|^2 ε^ μ_μ(k - k') .§.§ Terms due to Ĥ_1 To evaluate the contributions to Eq. (<ref>) due to Ĥ_1, it is convenient to split them into a nonrelativistic term and a spin-orbit term, as in Eq. (<ref>). The two terms will be considered separately.§.§.§ Nonrelativistic term The nonrelativistic term is given by the expression:( χ_n, k|Ĥ_1,nonrel| χ_n', k') =∑_σ∫ d r{ e^-i( k - k' ) ·r u^σ *_n, 0(r )ε^α_β(r) U^β_α(r )u^σ_n', 0(r)- ħ^2/2 m∂[e^-ik·ru^σ *_n, 0(r) ] /∂ r^μ[ ε^ν_μ(r)+ε^μ_ν(r)]∂[ u^σ_n', 0(r) e^ ik' ·r]/∂ r^ν}= (2 π)^3 ∑_σ∑_Gu_n, 0^σ *(G)∑_G'u_n', 0^σ(G') {∑_G”ε^ μ_ν(k - k'+ G - G' - G” )U^ν_μ(G” )- ħ^2/2 mε^ μ_ν(k - k' + G - G') [ ( G^μ + k^μ)( G'^ν + k'^ν) + ( G^ν + k^ν)( G'^μ + k'^μ)]} ,where the Fourier transforms listed at the beginning of this Section have been used.Since the strain tensor is assumed to have a slow spatial dependence, ε^ μ_ν(q) is non-zero only if q belongs to the first Brillouin zone. The arguments of the Fourier transform of the strain tensor in Eq. (<ref>) are (k - k' + G - G' - G”) and (k - k' + G - G'). In the first case, the requirement of slow spatial variation of the strain tensor imposes that G - G' - G” is either zero, or one of the nearest neighbours of the origin in the reciprocal space; in the second case, the same holds for G - G'.Along the lines of the approximation that is usually adopted for the slowly-varying confining potential (see the previous Subsection), we set to zero the combinations of reciprocal lattice vectors which are summed with k - k' inside the arguments of the slowly-varying functions. Then, the following identities are introduced:( 2 π)^3∑_Gu_n, 0^†(G) ·u_n', 0 (G) = (2 π)^3/Ω_ cry∫ d ru^†_n, 0(r) · u_n', 0(r) = δ_n, n' , ( 2 π)^3 ∑_Gu_n, 0^†(G)·u_n', 0(G) G^ν = (2 π)^3/Ω_ cry∫ d ru^†_n, 0(r) ·( -i∂/∂ r^ν)u_n', 0(r)≡1/ħ( p_ν)_n,n' , ( 2 π)^3 ∑_Gu_n, 0^†(G)·u_n', 0(G) G^ν G^μ = (2 π)^3/Ω_ cry∫ d ru^†_n, 0(r) ·( -i∂/∂ r^ν) ( -i∂/∂ r^μ) u_n', 0(r)≡1/ħ^2(p_μ p_ν)_n,n' , ( 2 π)^3∑_G∑_G'u_n, 0^†(G)·u_n', 0(G')U_μ^ν(G - G' )= (2 π)^3/Ω_ cry∫ d rU_μ^ν(r) u^†_n, 0(r) ·u_n', 0(r) ≡( U_μ^ν)_n, n' ,where Ω_ cry is the crystal (i.e. the normalization) volume. In terms of these quantities, the result reads as( χ_n, k|Ĥ_1,nonrel| χ_n', k') ≈ε^ μ_ν(k - k'){( U_μ^ν)_n, n' - ħ^2/2 m[2/ħ^2(p_μ p_ν)_n,n'+ 1/ħ( p_μ)_n,n'( k^ν+ k'^ν) +(k^μ + k'^μ) 1/ħ( p_ν)_n,n' + ( k^μ k'^ν + k'^μ k^ν) δ_n, n']} .§.§.§ Relativistic term The relativistic contribution is due to the operatorH_1,so = -iħ^2 /8 m^2 c^2 ( Σ^ν∂_ν -∂_νΣ^ν),where Σ^ν is defined in Eq. (10).It is convenient to elaborate this quantity as follows:Σ^ν(r) ={∂/∂ r^μ[ε^α_β(r) U^β_α(r) ]}σ^μν+( ∂ U_ n, 0(r) /∂ r^α)[ε^ν_μ(r)σ^μα- ε^α_μ(r)σ^μν] =i∑_G”∫ d qe^ i( q + G”) ·r{[ U^β_α(G”) ε^ α_β(q)( q^μ + G”^μ)-G”^αU_ n, 0(G”) ε^ α_μ(q)]σ^μν+G”^αU_ n, 0(G”) ε^ ν_μ(q) σ^μα} ≡ i∑_G”∫ d qe^ i( q + G”) ·r[ S_μ(q, G” ) σ^μν+ Z^ν_μα(q, G” ) σ^μα] .The matrix element is then:( χ_n, k|Ĥ_1,so| χ_n', k') = -iħ^2 /8 m^2 c^2 ∫ d r[χ^†_n, k(r)·Σ^ν(r) ·∂χ_n', k'(r) /∂ r^ν -∂χ^†_n, k(r)/∂ r^ν·Σ^ν(r) ·χ_n', k'(r) ]= iħ^2 /8 m^2 c^2 (2 π)^3 ∑_G, G', G”u^†_n, 0(G)·σ^μν·u_n', 0(G') {( k^ν + G^ν + k'^ν + G'^ν)S_μ( q, G”)+ ( k^α + G^α + k'^α + G'^α)Z^α_μν( q, G”)}|_q = k + G - k' - G' - G” . 
Consistently with the approximation that was already discussed for the previous terms, here G” = G - G' should be substituted in the whole expression. In particular,S_μ( q, G”) |_q = k + G - k' - G' - G” ≈δ_G” , G - G'[ U^β_α(G - G') ε^ α_β(k - k')( k^μ + G^μ - k'^μ - G'^μ)-( G^α -G'^α) U_ n, 0(G - G')ε^ α_μ(k - k' )] ,andZ^α_μν(q, G” ) |_q = k + G - k' - G' - G”≈δ_G” , G - G'( G^α - G'^α) U_ n, 0(G - G')ε^ ν_μ(k - k') .Finally, the following identities hold for any function U(r) having the same periodicity as the lattice:( 2 π)^3 ∑_G , G'u_n, 0^†(G)·σ^μν·u_n', 0(G') U(G - G' )=(2 π)^3/Ω_ cry∫ d ru^†_n, 0(r)·σ^μν· u_n', 0(r) U(r) ≡( σ^μν U )_n, n' , ( 2 π)^3∑_G , G' u_n, 0^†(G)·σ^μν·u_n', 0(G')U(G - G' )( G^α - G'^α) = (2 π)^3/Ω_ cry∫ d ru^†_n, 0(r) ·σ^μν· u_n', 0(r)( -i∂ U(r)/∂ r^α)≡1/ħ[ σ^μν( p_α U ) ]_n,n' , ( 2 π)^3∑_G, G'u_n, 0^†(G)·σ^μν·u_n', 0(G')U(G - G' ) G^α =(2 π)^3/Ω_ cry∫ d r (i∂ u^†_n, 0(r) /∂ r^α) ·σ^μν·u_n', 0(r) U(r)≡1/ħ[ p_α( σ^μν U)]_n,n' , ( 2 π)^3∑_G, G'u_n, 0^†(G)·σ^μν·u_n', 0(G')U(G - G' ) G'^α =(2 π)^3/Ω_ cry∫ d ru^†_n, 0(r) ·σ^μν·( -i∂ u_n', 0(r) /∂ r^α) U(r)≡1/ħ[ ( σ^μν U) p_α]_n,n' , ( 2 π)^3∑_G, G'u_n, 0^†(G)·σ^μν·u_n', 0(G')U(G - G' ) G^α G'^β = (2 π)^3/Ω_ cry∫ d r (i∂ u^†_n, 0(r) /∂ r^α)·σ^μν·( -i∂ u_n', 0(r) /∂ r^β) U(r) ≡1/ħ^2[p_α( σ^μν U) p_β]_n,n' , ( 2 π)^3∑_G, G'u_n, 0^†(G)·σ^μν·u_n', 0(G')U(G - G' )( G^ν + G'^ν) ( G^μ - G'^μ) ≡1/ħ^2[ p_ν( σ^μν p_μ U) +( σ^μν p_μ U) p_ν]_n,n' .The result is( χ_n, k|Ĥ_1,so| χ_n', k')≈ iħ^2 /8 m^2 c^2 ε^ α_β(k - k'){2 k^μ k'^ν(σ^μνU^β_α)_n, n'+2k^μ 1/ħ[ ( σ^μν U^β_α) p_ν]_n,n'+ 2 k'^ν 1/ħ[ p_μ( σ^μν U^β_α)]_n,n'+ 2 1/ħ^2[p_μ( σ^μν U^β_α) p_ν]_n,n' + ( k^ν+ k'^ν)1/ħ[ σ^βα( p_ν U_ n, 0) - σ^βν( p_α U_ n, 0) ]_n,n' +1/ħ^2[ p_ν( σ^βα p_ν U_ n, 0)- p_ν( σ^βν p_α U_ n, 0) +( σ^βα p_ν U_ n, 0) p_ν - ( σ^βν p_α U_ n, 0) p_ν]_n,n'} . §.§ Total matrix element Combining the terms derived above, one obtains Eq. (12)of the main text, where( 𝒟_μ^ν)_n, n'≡( U_μ^ν)_n, n' - 1/ m(p_μ p_ν)_n,n'_ nonrel + iħ^2 /8 m^2 c^2 [2 ∂_α( σ^αβ U^ν_μ) ∂_β+ ∂_β( σ^νμ∂_β U_ n, 0)- ∂_β( σ^νβ∂_μ U_ n, 0) -( σ^νμ∂_β U_ n, 0) ∂_β + ( σ^νβ∂_μ U_ n, 0) ∂_β]_n,n'_ rel are the deformation potentials with relativistic corrections, and( ℒ_α; μ^ν)_n, n'≡ - ħ/2 m[δ^ν_α( p_μ)_n,n' + δ^μ_α( p_ν)_n,n'] _ nonrel +ħ^2 /8 m^2 c^2 { 2[ ( σ^αβ U^ν_μ) ∂_β]_n,n'+[ σ^νμ( ∂_α U_ n, 0) - σ^να( ∂_μ U_ n, 0) ]_n,n'}_ rel , ( ℒ_α; μ^*ν)_n', n≡ - ħ/2 m[δ^ν_α( p_μ)_n,n' + δ^μ_α( p_ν)_n,n'] _ nonrel +ħ^2 /8 m^2 c^2 { 2 [ ∂_β( σ^αβ U^ν_μ)]_n,n'+[ σ^νμ( ∂_α U_ n, 0) - σ^να( ∂_μ U_ n, 0) ]_n,n'}_ rel ,( 𝒬_αβ; μ^ν)_n, n'≡ - ħ^2/2 m(δ^μ_αδ^ν_β +δ^ν_αδ^μ_β) δ_n, n'_ nonrel+ iħ^2 /4 m^2 c^2 (σ^αβU^ν_μ)_n, n'_ rel .The non-relativistic contributions to these quantities are denoted, respectively, as D, L and Q in Eqs. (13a-13c). § VI. MANIFOLD DECOUPLINGWe here apply Löwdin partitioning in order to decouple a low-energy manifold of bands, with n ∈{ 1, 2, …, N }, from the higher (remote) bands with n > N. Using the notation of Ref. <cit.>, the Hamiltonian is written asĤ≡Ĥ^(0) + Ĥ^(1) + Ĥ^(2) ,where Ĥ^(0) is diagonal in the band and crystal-momentum indices, Ĥ^(1) contains all intra-manifold terms, and Ĥ^(2) contains the inter-manifold terms. In the case at hand, using Eq. 
(12),the three parts are written asĤ^(0)≡∑_n∫_ 1 BZ d k[ E_n(0) + ħ^2 k^2 /2 m] | χ_n, k> < χ_n, k| , Ĥ^(1) ≡( ∑_n ≤ N∑_n' ≤ N+ ∑_n > N∑_n' > N) ∫_ 1 BZ d kħ/mk·π_n, n'| χ_n, k>< χ_n', k| + ∑_n∫_ 1BZ d k∫_ 1BZ d k' [ U_ ext( k - k')-ħ^2/4 m| k - k'|^2 ε^ μ_μ(k - k') ] | χ_n, k> < χ_n, k'|+ ( ∑_n ≤ N∑_n' ≤ N+ ∑_n > N∑_n' > N) ∫_ 1BZ d k∫_ 1BZ d k' ε^ μ_ν(k - k')[ X^ν_μ(k, k') ]_n, n'| χ_n, k> < χ_n', k'|, Ĥ^(2)≡( ∑_n ≤ N∑_n' > N + ∑_n > N∑_n' ≤ N) ∫_ 1BZ d k∫_ 1BZ d k' {δ(k - k') ħ/mk·π_n, n' + ε^ μ_ν(k - k')[ X^ν_μ(k, k') ]_n, n'}| χ_n, k> < χ_n', k'| ,where[ X^ν_μ(k, k') ]_n, n' = ( 𝒟_μ^ν)_n, n' + k^α( ℒ_α; μ^ν)_n, n' + k'^α( ℒ_α; μ^*ν)_n', n + k^α k'^β( 𝒬_αβ; μ^ν)_n, n' . A canonical transformationℋ̂ =e^- ŜĤ e^Ŝ , | ϕ> =e^- Ŝ| ψ>,is applied to block-diagonalize Ĥ^(2), while preserving the already block-diagonal form of Ĥ^(0) + Ĥ^(1). Following the procedure outlined in Ref. <cit.>, one writesℋ̂ = ℋ̂_ diag + ℋ̂_ nondiag , ℋ̂_ diag= ∑_j = 0^∞1/(2 j)![ Ĥ^(0) + Ĥ^(1) , Ŝ]^(2j)+ ∑_j = 0^∞1/(2 j + 1)![ Ĥ^(2) , Ŝ]^(2j + 1) = Ĥ^(0) + Ĥ^(1) + [ Ĥ^(2) , Ŝ]+ 1/2[ [ Ĥ^(0) + Ĥ^(1) , Ŝ], Ŝ]+ … , ℋ̂_ nondiag= ∑_j = 0^∞1/(2 j + 1)![ Ĥ^(0) + Ĥ^(1) , Ŝ]^(2j + 1)+ ∑_j = 0^∞1/(2 j)![ Ĥ^(2) , Ŝ]^(2j) =Ĥ^(2) +[ Ĥ^(0) + Ĥ^(1) , Ŝ] +1/2[ [ Ĥ^(2) , Ŝ], Ŝ]+ … .The operator Ŝ is chosen so that ℋ̂_ nondiag≈ 0. Expanding Ŝ = Ŝ^(1) + Ŝ^(2) + …, this condition is satisfied by imposing[ Ĥ^(0) , Ŝ^(1)] = - Ĥ^(2) , [ Ĥ^(0) , Ŝ^(2)] = - [ Ĥ^(1), Ŝ^(1)], … .The resulting first term of the expansion isŜ^(1)= - ( ∑_n ≤ N∑_n' > N + ∑_n > N∑_n' ≤ N)∫_ 1BZ d k∫_ 1BZ d k' δ(k - k') ħ/mk·π_n, n' + ε^ μ_ν(k - k')[ X^ν_μ(k, k') ]_n, n'/ E_n(0) + ħ^2 k^2 /2 m - E_n'(0) - ħ^2 k'^2 /2 m| χ_n, k> < χ_n', k'|.The second term, S^(2), displays an additional large denominator with respect to S^(1), so it is much smaller and it will be neglected here. The effective Hamiltonian after the canonical transformation is thenℋ̂≈Ĥ^(0) + Ĥ^(1) + [ Ĥ^(2) , Ŝ^(1)],where terms of order ∝ε^2 must be discarded from the expression of [ Ĥ^(2) , Ŝ^(1)], since the present theory is accurate only up to the first order in the strain tensor. The resulting Hamiltonian restricted to the n ≤ N manifold is:ℋ̂^(N) ≈∑_n ≤ N ∫_ 1 BZ d k[ E_n(0) + ħ^2 k^2 /2 m] | χ_n, k> < χ_n, k| + ∑_n ≤ N∑_n' ≤ N∫_ 1 BZ d kħ/mk·π_n, n'| χ_n, k>< χ_n', k| + ∑_n ≤ N∑_n' ≤ N∑_n” > N∫_ 1BZ d kħ/mk·π_n, n”ħ/mk·π_n”, n'(1/ E_n(0) - E_n”(0)+1 / E_n'(0)- E_n”(0)) | χ_n, k>< χ_n', k|+ ∑_n ≤ N∫_ 1BZ d k∫_ 1BZ d k' [ U_ ext( k - k')-ħ^2/4 m| k - k'|^2 ε^ μ_μ(k - k') ] | χ_n, k> < χ_n, k'|+ ∑_n ≤ N∑_n' ≤ N∫_ 1BZ d k∫_ 1BZ d k' ε^ μ_ν(k - k')[ X^ν_μ(k, k') ]_n, n'| χ_n, k> < χ_n', k'|+ ∑_n ≤ N∑_n' ≤ N∑_n” > N∫_ 1BZ d k∫_ 1BZ d k' ε^ μ_ν(k - k') | χ_n, k>< χ_n', k'|×[ ħ/mk·π_n, n”[ X^ν_μ(k, k') ]_n”, n'(1/ E_n(0) - E_n”(0)+1 / E_n'(0) + ħ^2 k'^2 /2 m- E_n”(0) - ħ^2 k^2 /2 m)+ [ X^ν_μ(k, k') ]_n, n”ħ/mk' ·π_n”, n'(1/ E_n(0) + ħ^2 k^2 /2 m - E_n”(0) - ħ^2 k'^2 /2 m +1 / E_n'(0)- E_n”(0) )] .The last three lines represent a small contribution with respect to the dominant, strain-independent one, and they will be neglected here. Under this approximation, the matrix elements of Eq. (<ref>) are written as < χ_n, k| ℋ̂| χ_n', k'> ≈< χ_n, k|Ĥ| χ_n', k'> + δ(k - k') ħ^2 Π^αβ_n, n'/m^2 k^αk^β , where n , n' ≤ N, andΠ^αβ_n, n' ≡∑_n” > N( π^α_n, n”π^β_n”, n'/ E_n(0) - E_n”(0) +π^α_n, n”π^β_n”, n'/ E_n'(0)- E_n”(0)) .§ VII. ENVELOPE FUNCTIONS Since the transformed Hamiltonian is block-diagonal, its eigenstates are combinations of the basis states belonging to a single block. 
For the low-energy block, an eigenstate is written as| ϕ> ≡∑_n ≤ N∫_1BZ d k 𝒞_n(k) | χ_n, k>,where the coefficients 𝒞_n(k) satisfy∑_n' ≤ N∫_ 1 BZ d k'ℋ^(N)_n, n'(k, k')𝒞_n'(k') = E 𝒞_n(k). The slowly-varying envelope functions are defined asF_n(r) ≡∫_1BZ d ke^ ik·r𝒞_n(k), and they allow to write the spinorial wave function < r| ϕ> in the form given by Eq. (15)of the main text. Equation (<ref>) is rewritten in terms of the envelope functions as∑_n' ≤ N∫_ 1 BZ d ke^ ik·r∫_ 1 BZ d k'ℋ^(N)_n, n'(k, k')𝒞_n'(k')= E F_n(r). To simplify the left-hand side of Eq. (<ref>), we distinguish three types of matrix elements of the Hamiltonian: * those having the form δ(k - k') Λ_n,n'(k): these give∑_n' ≤ NΛ_n,n'(-i∇) F_n'(r),analogously to standard envelope-function theories;* those having the form 𝒰_n, n'(k - k'): these include the external potential and formally analogous terms, for which the standard treatment is applicable; under the assumption that the envelope functions are slowly varying, they contribute terms∑_n' ≤ N𝒰_n, n'(r ) F_n'(r); * those having the form ε^ μ_ν(k - k')[ X^ν_μ(k, k') ]_n, n'. This is a new category of terms, which do not map onto those related to homogeneous strain, because the Fourier transform of the strain tensor is not a Dirac delta. The contributions to the left-hand side of Eq. (<ref>) which include the formally new terms are written as∑_n' ≤ N∫_ 1 BZ d ke^ ik·r∫_ 1 BZ d k'ε^ μ_ν(k - k')[ X^ν_μ(k, k') ]_n, n'𝒞_n'(k')= ∑_n' ≤ N∫ d r'ε^ μ_ν(r')1/(2 π)^3∫_ 1 BZ d ke^ ik·( r - r' ) ×∫_ 1 BZ d k'e^ ik' ·r'{( 𝒟_μ^ν)_n, n' + k^α( ℒ_α; μ^ν)_n, n'+ k'^α( ℒ_α; μ^*ν)_n', n + k^α k'^β( 𝒬_αβ; μ^ν)_n, n'} 𝒞_n'(k').The first contribution to the right-hand side of Eq. (<ref>) is∑_n' ≤ N( 𝒟_μ^ν)_n, n'∫ d r'ε^ μ_ν(r')1/(2 π)^3∫_ 1 BZ d ke^ ik·( r - r' ) ∫_ 1 BZ d k'e^ ik' ·r'𝒞_n'(k')≈ε^ μ_ν(r)∑_n' ≤ N( 𝒟_μ^ν)_n, n'F_n' (r),the second contribution is∑_n' ≤ N( ℒ_α; μ^ν)_n, n'∫ d r'ε^ μ_ν(r')1/(2 π)^3∫_ 1 BZ d ke^ ik·( r - r' ) k^α∫_ 1 BZ d k'e^ ik' ·r'𝒞_n'(k') ≈ -i∑_n' ≤ N( ℒ_α; μ^ν)_n, n'∫ d r'ε^ μ_ν(r') F_n'(r') ∂δ( r - r' )/∂ r^α= -i∑_n' ≤ N( ℒ_α; μ^ν)_n, n'∂[ ε^ μ_ν(r) F_n'(r) ]/∂ r^α ,the third contribution is∑_n' ≤ N( ℒ_α; μ^*ν)_n', n∫ d r'ε^ μ_ν(r')1/(2 π)^3∫_ 1 BZ d ke^ ik·( r - r' ) ∫_ 1 BZ d k'e^ ik' ·r' k'^α 𝒞_n'(k') ≈ -i∑_n' ≤ N( ℒ_α; μ^*ν)_n', nε^ μ_ν(r)∂ F_n'(r) /∂ r^α ,and the fourth contribution is∑_n' ≤ N( 𝒬_αβ; μ^ν)_n, n'∫ d r'ε^ μ_ν(r')1/(2 π)^3∫_ 1 BZ d ke^ ik·( r - r' ) k^α∫_ 1 BZ d k'e^ ik' ·r' k'^β𝒞_n'(k') ≈ - ∑_n' ≤ N( 𝒬_αβ; μ^ν)_n, n'∫ d r'ε^ μ_ν(r') ∂F_n'(r') /∂ r'^β∂δ( r - r' )/∂ r^α = - ∑_n' ≤ N( 𝒬_αβ; μ^ν)_n, n'∂/∂ r^α[ ε^ μ_ν(r) ∂F_n'(r) /∂ r^β]. In deriving the expressions above, we have used1/(2 π)^3∫_ 1 BZ d ke^ ik·( r - r' ) ≈δ( r - r' ),which is an approximate relation, only valid when this quantity is multiplied by a slowly-varying spatial function, such as envelope functions and components of the strain tensor.Collecting all terms, one obtains ∑_n' ≤ N( ℋ̂^ EF_0;n,n' + ℋ̂^ EF_ε;n,n') F_n'(r)= E F_n(r),whereℋ̂^ EF_0;n,n'≡[E_n(0) + ħ^2 k̂^2 /2 m + U_ ext(r)] δ_n, n' + ħπ^α_n, n'/mk̂_α +ħ^2 Π^αβ_n, n'/m^2k̂_αk̂_βis formally the same as the standard k·p Hamiltonian <cit.>, but with the k̂_α operators defined in curvilinear coordinates, and ℋ̂^ EF_ε;n,n' is the strain-dependent term defined in Eq. (16).
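To make the manifold-decoupling step of Section VI concrete, the following toy calculation (an illustrative sketch with arbitrary numerical values, not part of the derivation above; numpy is used as a stand-in numerical library) builds the first-order generator Ŝ^(1) for a model with a two-band manifold and a single remote band, and compares the spectrum of the resulting effective 2×2 block with exact diagonalization.

import numpy as np

# Toy illustration (arbitrary numbers) of the Löwdin partitioning of
# Section VI: decouple a two-band manifold (indices 0, 1) from a single
# remote band (index 2) and compare with exact diagonalization.
H0 = np.diag([0.0, 0.1, 5.0])   # H^(0): band energies, remote band far away
H2 = np.zeros((3, 3))           # H^(2): inter-manifold couplings only
H2[0, 2] = H2[2, 0] = 0.3
H2[1, 2] = H2[2, 1] = 0.2
H = H0 + H2

# First-order generator, S_{nm} = -(H^(2))_{nm} / (E_n - E_m), nonzero
# only between the manifold and the remote band (cf. the expression
# for S^(1) given in Section VI).
S = np.zeros((3, 3))
for n in (0, 1):
    S[n, 2] = -H2[n, 2] / (H0[n, n] - H0[2, 2])
    S[2, n] = -S[n, 2]

# Block-diagonalized Hamiltonian to second order,
# H^(0) + H^(1) + [H^(2), S^(1)]/2, restricted to the low-energy manifold.
H_eff = (H + 0.5 * (H2 @ S - S @ H2))[:2, :2]

exact = np.sort(np.linalg.eigvalsh(H))[:2]
print(np.sort(np.linalg.eigvalsh(H_eff)))  # matches `exact` up to
print(exact)                               # higher orders in |H2|/gap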
http://arxiv.org/abs/2312.15967v1
{ "authors": [ "Andrea Secchi", "Filippo Troiani" ], "categories": [ "cond-mat.mes-hall", "cond-mat.mtrl-sci", "quant-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20231226092739", "title": "Envelope-function theory of inhomogeneous strain in semiconductor nanostructures" }
http://arxiv.org/abs/2312.16616v1
{ "authors": [ "Ilias Diakonikolas", "Daniel M. Kane", "Vasilis Kontonis", "Christos Tzamos", "Nikos Zarifis" ], "categories": [ "cs.LG", "cs.DS", "math.ST", "stat.ML", "stat.TH" ], "primary_category": "cs.LG", "published": "20231227155047", "title": "Agnostically Learning Multi-index Models with Queries" }
IQM, Georg-Brauchle-Ring 23-25, 80992 Munich, Germany Technical University of Munich, CIT, Department of Computer Science, Boltzmannstr. 3, 85748 Garching, Germany IQM, Georg-Brauchle-Ring 23-25, 80992 Munich, Germany [email protected] IQM, Georg-Brauchle-Ring 23-25, 80992 Munich, Germany [email protected]
Initial Qubit Routing for QAOA Circuits
Martin Leib
Received ...; accepted...
We develop a qubit routing algorithm with polynomial classical run time for the Quantum Approximate Optimization Algorithm (QAOA). The algorithm follows a two-step process. First, it obtains a near-optimal solution, based on Vizing's theorem for the edge coloring problem, consisting of subsets of the interaction gates that can be executed in parallel on a fully parallelized all-to-all connected QPU. Second, it proceeds with greedy application of SWAP gates based on their net effect on the distance of remaining interaction gates on a specific hardware connectivity graph. Our algorithm strikes a balance between optimizing for both the circuit depth and the total SWAP gate count. We show that it improves upon existing state-of-the-art routing algorithms for QAOA circuits defined on k-regular as well as Erdős-Rényi problem graphs of sizes up to N ≤ 400.§ INTRODUCTION Quantum computing applications have sparked recent interest in academia and industry, offering solutions for problems found in chemistry <cit.>, finance <cit.> and optimization <cit.>. One of the most promising applications of quantum computing is the speedup and improvement of the quality of solutions for combinatorial optimization problems. Combinatorial optimization use cases are ubiquitous in the industrial as well as the academic research context, and any quantum advantage would create a huge impact. However, the promise of a quantum advantage comes with a demanding engineering challenge: the presence of noise currently limits the number of operations one can perform on a quantum computer, such that large-scale algorithms cannot be realized on Noisy Intermediate-Scale Quantum (NISQ) devices. Therefore, QAOA is considered a promising candidate for near-term quantum computer applications due to its shallow circuit depth. However, the execution of QAOA circuits on many state-of-the-art quantum processors is hampered either by the restricted local connectivity of the QPU or by the number of two-qubit gates that can be executed in parallel. These constraints lead to the necessity of introducing SWAP gates to make the execution of interaction gates between non-neighboring qubits possible. This process is referred to as qubit routing, and designing a routine that maps a quantum circuit to an architecture is often referred to as solving the qubit routing problem. Prior to the routing, the association of the qubits in the abstract circuit description with the actual, physical qubits on the QPU is referred to as qubit mapping; similar to the qubit routing problem, there is an exponentially growing number of possible mappings. The actually implemented mapping, however, has a big influence on the overall algorithm performance, especially for NISQ devices. In order to bring the current state-of-the-art quantum computers closer to the practicability of large circuits, it is therefore essential that the qubit routing as well as the mapping is implemented as optimally as possible in terms of both the number of added SWAP gates and the resulting circuit depth.
The qubit routing problem can be mapped to either the integer linear programming <cit.> or the token swapping problem <cit.>, both of which are NP-complete <cit.>. However, an exponential classical run time for qubit routing would jeopardize the potential for quantum advantage of any quantum algorithm that requires it. Therefore, finding polynomial run time algorithms that achieve near-optimal results for qubit routing is the main focus of the present work. Existing methods for qubit routing differ in their optimization objectives and solution approaches. Objectives include minimizing the total number of gates of a certain type, minimizing the total circuit depth, or reducing the overall error by taking noise models into account <cit.>. Learning-based methods have been explored in the form of deep learning <cit.> and reinforcement learning <cit.>. The former approach is subject to data preparation overhead for the learning process and produces only architecture-specific results, while the latter suffers from long execution times and optimizes only for the circuit depth. Swap-network based qubit routing algorithms <cit.> apply predefined layers of SWAP gates, achieving low circuit depths at the cost of high SWAP numbers. Swap-search based qubit routing algorithms <cit.> instead select a subset of possible SWAP gates and evaluate the SWAP candidates according to a heuristic cost function, which generally leads to lower SWAP counts at the price of lower parallelism. The algorithm developed in this work follows this scheme as well, but attempts to strike a balance between the two aforementioned objectives. In our work, we tackle the qubit mapping and routing problem for QAOA circuits. QAOA circuits exhibit a particularly large search space for routing, as all two-qubit interaction gates commute with each other. This degree of freedom makes the routing task especially challenging, but also presents an opportunity to save gates and run time with a routine that cleverly picks favourable gates depending on the physical coupler layout of the QPU. We develop a heuristic algorithm that addresses both the circuit depth and the SWAP count, and show that it outperforms currently used state-of-the-art routing algorithms, especially when both objectives are considered simultaneously. The outline of this paper is as follows: In <ref>, we briefly present the theoretical background of QAOA. <ref> introduces our routing algorithm, followed by a discussion of the numerical results in <ref>.
The QAOA Ansatz state is inspired by a trotterized version of quantum annealing with a simple, local driver Hamiltonian H_0=-∑_i=1^N σ_i^x, where the lengths of the Trotter steps are the variational parameters, |ψ(β,γ)⟩ =U(H_P, β_d)U(H_0, γ_d)… U(H_P, β_1)U(H_0, γ_1)|+⟩ , with |+⟩∝ (|0⟩+|1⟩)^⊗ n the normalized, equal superposition of all computational basis states. The QAOA Ansatz circuit is composed of d layers, each one consisting of a rotation generated by the problem Hamiltonian, U(H_P, β_k)=e^-iβ_k H_P, followed by a rotation generated by the driver Hamiltonian, U(H_0, γ_k)=e^-iγ_k H_0. Since single-qubit gates need no routing on any hardware platform, the main target for routing is the parameterized rotation generated by the problem Hamiltonian. If we can find a routing for one of these unitaries, we can always invert the routing for the next block in the Ansatz and iterate this way through the circuit. All 2-local terms in the problem Hamiltonian are products of two Pauli z operators, which means that this many-body gate can be decomposed into a set of two-qubit interaction gates, R_zz = exp(-i θσ_i^z σ_j^z), for any pair of qubits i and j. Since all of these two-qubit gates commute, we can freely choose the order of their execution. The full information necessary for the routing process of the quantum circuit can thus be captured in a problem graph 𝒢_P=(V, E) with vertices V={v_1,… v_N} representing the qubits, and edges E⊆{e_ij = (v_i, v_j) | v_i, v_j ∈ V} representing interaction gates with non-vanishing coupling strengths J_ij, where J_ij is the weight associated with edge e_ij. If the pair of physical qubits where the information for qubits i and j currently resides is not connected with a physical coupler, we have to shuttle quantum information with the help of SWAP gates using the techniques detailed in the following section.
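To fix this bookkeeping, a minimal sketch of the problem-graph construction is given below; the helper function and the use of the networkx library are illustrative choices, not part of a specific reference implementation.

import networkx as nx

def problem_graph(J: dict) -> nx.Graph:
    """Build the problem graph G_P from non-vanishing couplings J_ij.

    J maps qubit-index pairs (i, j) to coupling strengths; every edge
    stands for one commuting R_zz interaction gate of a QAOA layer.
    """
    g = nx.Graph()
    for (i, j), coupling in J.items():
        if coupling != 0.0:
            g.add_edge(i, j, weight=coupling)
    return g

# Example: a 4-qubit ring with uniform couplings.
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
G_P = problem_graph(J)
print(G_P.number_of_edges())   # 4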
Having more physical than virtual qubits simply allows one to pick a given subset of them based on some given metric, like the connectivity or quality of the qubits. Once the qubits are initialized, the mapper will provide the router with an interaction set of gates to route as well as an initial buffer containing a set of gates that would be executable in parallel on an all-to-all connected QPU. The router will then attempt to execute the gates whilst adding SWAP gates in order to satisfy the connectivity constraints of the QPU, given by the hardware graph. The interaction set of gates and the buffer will be updated iteratively until all gates have been processed. The output of the compiler will be a quantum circuit logically equivalent to the input circuit from the problem graph and simultaneously compliant with a given QPU architecture. In the next subsections, we will discuss in detail how the mapper and router work by first introducing some preliminary definitions and then describing the procedures they follow. The full workflow of the algorithm is depicted in Fig. <ref>. §.§ Preliminaries In the following, we give formal definitions of some graph concepts which we use in designing our compiler. An edge coloring of a graph 𝒢 is an assignment of edges to colors such that no adjacent edges are assigned the same color. The minimum edge coloring is an edge-color mapping with the smallest possible number of colors. A matching M of a graph 𝒢 is a subset of the graph's edges in which no two edges are adjacent, i.e. a set of pairwise non-adjacent edges. A maximum matching M_max is a matching with the maximum cardinality, i.e. a matching containing the maximum number of non-adjacent edges. A maximum matching is not necessarily unique. A maximal matching is a matching that is not a subset of another matching, i.e. a maximal matching cannot accept another edge and still be a matching. Every maximum matching is a maximal matching, but not vice versa. A maximal matching can be obtained by traversing the edges of the graph randomly and adding them to a set if none of their vertices are already contained in the set. Beyond the graph theoretical concepts, we will also need to define further objects used in the routing algorithm. In addition to the definition of the problem graph in section <ref>, every edge connecting virtual qubits stores the distance information between the physical qubits which the virtual qubits are assigned to prior to the routing. The distance information of an edge is updated if one of the virtual qubits is swapped. Finally, we define 𝒢_QPU as the connectivity graph of the QPU topology, where vertices are the physical qubits and an edge between two vertices exists if and only if there is a coupler between the corresponding qubits on the QPU (see also Fig. <ref> for an example layout of the problem and connectivity graphs). We additionally maintain two containers for all interaction gates that need to be executed, i.e. the edges of the problem graph, ℐ and ℐ_C. The buffer ℐ_C is a set of interaction gates where every involved qubit appears only once in an interaction gate. The target of the routing algorithm is to execute these gates and, if this is not possible because there is no suitable coupler on the QPU, to first enable them through the application of SWAP gates. The rest of the interaction gates that have not been executed yet and don't belong to ℐ_C are in the interaction set ℐ.
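The greedy construction of a maximal matching mentioned above takes only a few lines; the sketch below is illustrative (networkx provides the same construction as nx.maximal_matching), and such a matching has exactly the defining property of the buffer ℐ_C, namely that every qubit appears in at most one interaction gate.

import networkx as nx

def greedy_maximal_matching(g_p: nx.Graph) -> set:
    """Traverse the edges of G_P and keep an edge whenever neither of
    its endpoints has been used yet; the result is a maximal matching.
    The traversal order is arbitrary here (the text allows a random one).
    """
    used, matching = set(), set()
    for u, v in g_p.edges:
        if u not in used and v not in used:
            matching.add((u, v))
            used.update((u, v))
    return matching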
An execution of an interaction gate in the quantum circuit deletes it from ℐ_C and triggers an exchange of gates between ℐ_C and ℐ. The routing algorithm is finished when all interaction gates are erased from both ℐ and ℐ_C.§.§ Mapper For the initial mapping, we pursue two objectives: 1) After the initialization, every qubit neighbors at least one other qubit it has to interact with, and 2) The number of parallel interaction gates executable within the first two layers of the circuit is maximized. In order to achieve these objectives, we first determine the sets of concurrently executable interaction gates. This is equivalent to solving the edge coloring problem for 𝒢_P. Finding an edge coloring with the minimum number of required colors is an NP-hard problem. However, there exists a polynomial-time algorithm which guarantees a coloring with at most deg(𝒢_P)+1 colors, where deg(𝒢_P)=max(deg(v)| v∈ V(𝒢_P)); according to Vizing's theorem <cit.>, this is at most one color worse than the optimal edge coloring. Once a coloring of 𝒢_P is found, we extract two colors and their respective edge lists. Next, we `chain' the two colors together into a single list in an alternating manner, i.e. C = {(v_0,v_1), (v_1,v_2), …| (v_2k, v_2k+1) ∈ E_col_0, (v_2k+1, v_2k+2) ∈ E_col_1}. The chain will be either a cycle or a simple path. In the case of a cycle, we can break it at an arbitrary qubit pair to produce a simple path. In the case of multiple independent components of the induced subgraph resulting from these two colors, we can always connect them into a single chain at a low cost in terms of edges which must be placed back into ℐ. Next, the chain is embedded into the QPU hardware graph. For example, on a square grid QPU, the embedding starts with the bottom row of qubits and propagates to the top from right to left and vice versa, resulting in a continuous snake pattern. For QPUs where embedding of a chain is not possible because no Hamiltonian path exists (e.g. for tree graphs), one must find an embedding by allowing for a minimal number of additional connections. Interaction gates corresponding to these connections then cannot be implemented and have to be placed back into the set of remaining interaction gates ℐ. Fig. <ref> shows an example of the initial mapping. An edge coloring of the problem graph (left) is performed and subsequently two colors are chosen (blue and green), which together form a chain that can be embedded into the square hardware connectivity graph (right). Consequently, all interaction gates from these two colors can be implemented concurrently in the first two circuit layers without the necessity for any SWAP operations. Assuming the QPU topology has a Hamiltonian path, this allocation procedure allows for the execution of two sets of parallelizable interaction gates, one after the other, without any need for routing. By choosing the two colors with the largest edge sets, we can further increase parallelization. §.§ Router The routing algorithm takes as input: ℐ, ℐ_C, the initial mapping of virtual to physical qubits and the QPU connectivity graph 𝒢_QPU. We calculate the shortest paths between physical qubits of 𝒢_QPU prior to routing. This needs to be done only once (e.g. with Dijkstra's algorithm); a minimal sketch of these precomputations is given below. The router then maintains the buffer ℐ_C and will gradually proceed with adding the remaining qubit interaction gates from ℐ into this set according to update rules to be described later on.
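Both precomputations that precede the routing loop, the mapper's edge coloring and the all-pairs hardware distances, can be sketched as follows. The sketch is illustrative only: the greedy coloring used here for brevity does not enforce the deg(𝒢_P)+1 Vizing bound quoted above, and all function names are hypothetical.

import networkx as nx

def setup(g_p: nx.Graph, g_qpu: nx.Graph):
    """Precomputations before the routing loop (illustrative sketch).

    Edge-colors the problem graph by vertex-coloring its line graph
    (an edge coloring of G_P is a vertex coloring of its line graph)
    and tabulates all-pairs shortest-path distances on the QPU graph.
    Assumes G_P needs at least two colors.
    """
    coloring = nx.coloring.greedy_color(nx.line_graph(g_p),
                                        strategy="largest_first")
    classes = {}
    for edge, c in coloring.items():
        classes.setdefault(c, []).append(edge)
    # The two largest color classes seed the chain embedded in the
    # first two circuit layers (cf. the snake pattern of the mapper).
    col0, col1 = sorted(classes.values(), key=len, reverse=True)[:2]
    dist = dict(nx.all_pairs_shortest_path_length(g_qpu))
    return col0, col1, dist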
§.§ Router

The routing algorithm takes as input ℐ, ℐ_C, the initial mapping of virtual to physical qubits, and the QPU connectivity graph 𝒢_QPU. We calculate the shortest paths between the physical qubits of 𝒢_QPU prior to routing. This needs to be done only once (e.g. with Dijkstra's algorithm). The router then maintains the buffer ℐ_C and gradually adds the remaining qubit interaction gates from ℐ into this set according to update rules described later on. The routing algorithm consists of the following main steps:

* Execute all possible interaction gates which have a valid connectivity on the QPU and remove them from ℐ_C.
* Trigger the update and refilling strategy between ℐ and ℐ_C.
* For all the interaction gates which do not have a valid connectivity on the QPU, apply the swap strategy.
* Repeat 1-3 until both ℐ and ℐ_C are empty.

§.§.§ Swap Strategy

We define the total distance 𝒟 of ℐ_C to be the sum of the shortest path lengths between the virtual qubit pairs of the interaction gates contained in the buffer ℐ_C. When two qubits are swapped on a QPU, the new total distance 𝒟' satisfies |𝒟-𝒟'| ≤ 2. There are five possible outcomes:

1 & 2. Both of the swapped qubits appear in ℐ_C and they move closer to (further apart from) their partners, i.e. the swap score is 𝒟-𝒟'=2 (𝒟-𝒟'=-2).

3. Both of the swapped qubits appear in ℐ_C and one of them moves closer to its partner whilst the other one moves further away, i.e. 𝒟-𝒟'=0. Alternatively, one can have 𝒟-𝒟'=0 if none of the swapped qubits appear in ℐ_C, a case which is not considered as a swap candidate.

4 & 5. Only one of the qubits appears in ℐ_C and it moves closer to (further apart from) its partner, i.e. 𝒟-𝒟'=1 (𝒟-𝒟'=-1).

Ideally, the routing algorithm should only execute swaps that result in a decrease of the total distance, which we refer to as positive swaps. Finding a maximum matching among all possible positive swaps has the objective of maximizing the number of swaps that can be executed in parallel in order to reduce the overall circuit depth. Hence, the swap strategy consists of the following steps:

* Find swap candidates by traversing the edges of 𝒢_QPU.
* Construct a swap graph 𝒢_SWAP consisting only of positive swaps.
* Find a maximal matching of 𝒢_SWAP.
* Apply the swaps found by the maximal matching.

An example for a 4×4 square QPU is shown in Fig. <ref>. Here, qubit pairs are indicated through matching colors and all swap candidates with a positive swap score are highlighted by thick edges. On the right side of Fig. <ref> the result of the maximal matching is presented, i.e. arrows indicate all swaps which have been applied and the qubit colors show the resulting configuration.

One additionally needs to pay attention to a number of corner cases involving pairs of swap candidates which cancel each other's positive or negative effects. This can happen if they both change the distance of the same qubit pair. For simplicity, let us consider a square QPU connectivity graph. If the horizontal or vertical distance between a pair of qubits is exactly one, then separately applying swaps acting on both qubits in that given direction will have no net effect on their distance. To avoid this issue, we apply the swaps from the maximal matching one by one and perform an additional check of whether a given swap is still positive before implementing it. Another case concerns pairs of swap candidates which act on a qubit pair with a horizontal or vertical distance of zero (meaning the two qubits are located in the same row or column of the square QPU graph). Here, it can happen that, individually, both swap candidates in the given direction have a score smaller than +1, but if applied together their score becomes positive, because the negative effects on the qubit pair cancel out. To this end, it is possible to additionally check for such swap candidate pairs (with swap scores 0/0 or -1/0) after having applied all the positive swaps from the maximal matching. If all four qubits involved in such a pair of swaps have not been utilized in the given swap layer, one then implements the pair of swap candidates.

After the swapping, the compiler continues to the next step and applies the interaction gates that have now become executable on the given hardware. In the case that no candidates have been found that reduce the overall distance, but ℐ_C is not empty, the fallback strategy is to pick a random qubit pair to swap, where one of the qubits is guaranteed to move closer to its partner after the swap (resulting in a net change of 0 in 𝒟).
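A sketch of the swap scoring that drives this strategy is given below (assuming a precomputed all-pairs shortest-path table on 𝒢_QPU; the data layout and helper names are our assumptions). Edges of 𝒢_QPU with a positive score form the swap graph 𝒢_SWAP:

```python
def swap_score(edge, placement, partner, dist):
    """Score D - D' of swapping the two physical qubits joined by `edge`.
    placement: physical qubit -> virtual qubit; partner: virtual qubit ->
    its partner in the buffer I_C (or absent); dist: physical shortest paths."""
    p, q = edge
    score = 0
    for a, b in ((p, q), (q, p)):          # the qubit at a moves to b
        va = placement.get(a)
        pa = partner.get(va) if va is not None else None
        if pa is None:
            continue                        # qubit not involved in I_C
        loc_pa = next(ph for ph, v in placement.items() if v == pa)
        score += dist[a][loc_pa] - dist[b][loc_pa]   # old minus new distance
    return score
```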
§.§.§ Update Strategy

The update strategy consists of steps that aim to replace interaction gates in the buffer ℐ_C. As the routing proceeds, it may be advantageous to exchange some pairs in the current set ℐ_C that are still far away from each other. The replacement pair should have a smaller distance than some other pair in ℐ_C. In order to achieve this, we iterate through every qubit u that is involved in an interaction gate (u,v) from ℐ_C and check the available neighbors of its vertex in 𝒢_P, i.e. those neighbors which are currently not involved in another gate within the buffer. We choose the neighbor with the minimal distance W on the hardware graph 𝒢_QPU, i.e. the partner with the smallest distance to u. If ṽ = argmin_{v'∈[N], v'≠ v} W(v',u), then (u,v) is replaced by (u,ṽ) in the buffer ℐ_C.

§.§.§ Removal Strategy

Every removal triggers additions from ℐ to ℐ_C, where applicable. When an interaction gate is routed successfully, its qubits become available for a further pair selection. We search through the interaction pairs involving the two qubits whose other logical qubits are not currently involved in another gate contained in the buffer ℐ_C. From these, we pick the ones with the minimum current distance and add them to ℐ_C.

§.§.§ Push-back Strategy

In order to reduce the circuit depth, we check for every applied gate (including SWAP gates) whether it can already be executed in a previous layer. We accomplish this by maintaining a set of layers for every circuit for easy access. A layer is characterized by a graph where nodes correspond to physical qubits, and an edge exists if and only if two qubits are subject to an interaction gate within that layer. In addition, each gate will virtually `block' a newly added gate from being pushed back any further, which is indicated by a binary node attribute. The edges are labeled by either a swap or a non-swap mark. If two swap gates are inserted on the same pair of qubits sequentially, they cancel each other out, which is detected by a swap mark check. If the qubits involved in a certain gate are both `free' in the previous layer, which can be checked by accessing the previous layer and the node attributes of the qubits, we push back the gate. This process is repeated until at least one of the involved qubits is involved in another gate in the previous layer.
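The push-back step amounts to rescheduling every newly appended gate as early as possible. The following minimal sketch (which omits the swap-cancellation check and the blocking attribute; layers are simply stored as sets of busy qubits) illustrates the idea:

```python
def push_back(layers, gate):
    """Insert `gate` (a pair of physical qubits) into the earliest layer in
    which both qubits are free. `layers` is a list of sets of busy qubits."""
    a, b = gate
    target = len(layers)                    # default: a fresh layer at the end
    for i in range(len(layers) - 1, -1, -1):
        if a in layers[i] or b in layers[i]:
            break                           # blocked: cannot move past layer i
        target = i
    if target == len(layers):
        layers.append(set())
    layers[target].update((a, b))
    return target

layers = [{0, 1}, {2, 3}]
print(push_back(layers, (0, 4)))  # lands in layer 1: qubit 0 is busy in layer 0
```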
§.§ Example

In Fig. <ref> we show a full instance of the routing algorithm for a 9-qubit 4-regular connectivity graph embedded in a 3×3 square lattice. We observe that the first two circuit layers are, as described, implemented without any swap operations. The remaining interaction gates (blue and green) are implemented in the following five layers using 6 SWAP gates (red). Note that SWAP and interaction gates can also appear in the same circuit layer as a result of the push-back strategy. The full circuit can be executed in depth 7.

§ NUMERICAL RESULTS

§.§ Setup

In this section we benchmark our routing method against other algorithms available in the literature. In what follows, we consider all single-qubit gates to be a free resource and any two-qubit gate to be implementable within a single circuit layer. Any two gates can be implemented in parallel within the same layer only if they act on distinct qubits.

We have conducted our comparison by considering randomized k-regular and Erdős–Rényi graphs as problem graphs. A graph is k-regular if every vertex has degree k. Erdős–Rényi graphs are constructed by adding edges randomly with a probability p. We have considered the routing algorithm for 4-, 6-, 8- and 10-regular graphs as well as for Erdős–Rényi graphs with probabilities p = {0.1, 0.2, 0.3}, and run all numerical experiments on a two-dimensional square-grid QPU with linear system size L ranging from 5 to 20, i.e. the total number of qubits is N ∈ {L^2 | L ∈ {5,…,20}}. We would like to emphasize that the choice of a square lattice hardware graph is arbitrary: our routing method is neither limited to nor optimized for this particular hardware graph. On the other hand, a square grid QPU corresponds to existing hardware realizations, and the results are therefore likely most informative of the realistically attainable performance of the routing methods. In order to acquire sufficient statistics, we average over 20 random instances for each problem size (and graph type) whilst keeping the random seeds unchanged between different routing algorithms.

§.§ Benchmark Algorithms

We have compared the circuit depth and the number of added SWAP gates for the following routing algorithms: Tket (version: 1.10.0), Qiskit's implementation of SABRE <cit.>, and Qiskit's own Basic routing algorithm (version: 0.39.4). We included only benchmarks from routing algorithms we could run in a reasonable amount of time for all system sizes up to 400 qubits, i.e. within the order of hours on a moderately sized computing cluster. We did not include results from the algorithm provided by Cirq (version: 1.1.0) or the implementation of 2QAN <cit.>, as they had execution times far above our stated threshold already for smaller system sizes.

Additionally, we considered three SWAP network (SN) implementations, which are mostly agnostic to the problem graphs and simply shuffle qubits around within a given hardware graph in order to ensure that every pair of qubits is neighboring at some time step. The first one is a linear even-odd SWAP network <cit.>, which executes in depth 2N-2 (including interaction gates) whilst implementing N^2/2-3N/2+1 SWAP gates. This SWAP network can be applied as long as a Hamiltonian path through the hardware graph exists. Another SWAP network, tailored to k-regular problem graphs and a square grid hardware graph, has been introduced in Ref. <cit.>. The algorithm is based on the fact that one can shuffle qubits to arbitrary positions on a square grid in order to implement a full layer of interaction gates in 3L circuit layers, which for a k-regular problem graph totals a depth of 3(k-1)N^1/2-2k+4 and 3(k-1)(N^3/2/2-3N/2+N^1/2) SWAP operations. Finally, a SWAP network for the square grid was implemented in Ref. <cit.>. The authors have also defined SWAP network strategies for a few different hardware architectures: linear and heavy-hex. The square-grid version of the algorithm takes a depth of 3N/2+3N^1/2+1.5 and (N/2+N^1/2+1/2)(N/2-N^1/2) SWAP operations. Note that, in general, two interaction gate layers are needed after every SWAP layer in this SWAP network.
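For reference, the closed-form resource counts of the three SWAP networks quoted above can be tabulated directly. The following minimal sketch evaluates the stated formulas (without the extra interaction-gate layers of the last network; all names are ours):

```python
def sn_costs(N, k=None):
    """Depth and SWAP counts of the three SWAP networks for N qubits
    (the k-regular SN only if the problem regularity k is given)."""
    s = N ** 0.5
    costs = {
        "linear": (2 * N - 2, N**2 / 2 - 3 * N / 2 + 1),
        "grid":   (3 * N / 2 + 3 * s + 1.5, (N / 2 + s + 0.5) * (N / 2 - s)),
    }
    if k is not None:
        costs["k-regular"] = (3 * (k - 1) * s - 2 * k + 4,
                              3 * (k - 1) * (N**1.5 / 2 - 3 * N / 2 + s))
    return costs

for name, (depth, swaps) in sn_costs(400, k=4).items():
    print(f"{name:10s} depth={depth:8.1f} swaps={swaps:10.1f}")
```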
§.§ k-regular Graphs

In Fig. <ref> we compare the performance of the aforementioned routing algorithms for k-regular graphs with k={4,6,8,10} and for system sizes of up to 400 qubits placed on square hardware graphs. We observe that our routing method produces significantly shallower circuits compared to all non-SN methods, and the difference grows with both the system size and the regularity of the graphs (see top row of Fig. <ref>). On the other hand, we see that all SNs eventually produce favourable circuit depths compared to our method as the regularity and system size are increased. This is expected, since the linear and grid SNs are unaffected by the number of interaction gates that need to be implemented, and the k-regular SN only scales with k and the square root of the total number of qubits, √(N). Indeed, for small regularities (≤ 10) the k-regular SN is optimal at large enough system sizes (≳ 250), whilst our method performs best for system sizes below that. The trade-off of SNs becomes clear when the focus shifts to SWAP counts (bottom row), as they produce far higher values compared to non-SN algorithms. Our method has the lowest SWAP counts for all regularities, with only Tket performing comparably well, especially at higher regularities, and at the cost of much higher circuit depths.

§.§ Erdős–Rényi Graphs

Let us now consider Erdős–Rényi graphs with interaction gate probabilities p = {0.1, 0.2, 0.3} in Fig. <ref>. For these graphs, the number of interaction gates grows with the number of qubits, and especially for large system sizes the graphs become significantly denser than those of Fig. <ref> (e.g. for 400 qubits the number of interaction gates with p=0.1 is comparable to a 40-regular graph). Furthermore, for such graphs the k-regular SN is not appropriate, as it scales with the node of highest degree, which can be as large as the system size N. In general, we observe for these graphs that the SNs perform significantly better in terms of depth, and also in terms of SWAP counts for p={0.2,0.3}, whilst for p=0.1 our method is still optimal. Indeed, p=0.1 seems to be close to the break-even point up to which our method performs better than SNs. Of the two SNs, the grid SN performs better for all of the considered graphs; however, we expect the linear SN to take over at some point as p→1, since it is provably optimal in this limit.

§ DISCUSSION

We conclude that our routing algorithm is especially useful for sparse problem graphs and for system sizes up to hundreds of qubits. Given the current technological progress in quantum hardware, we can assume that exactly these kinds of QAOA instances bear the most promise to be successfully implemented in the near future, possibly without the need for fault tolerance. Furthermore, our algorithm is not restricted by the regularity of the problem graphs or the connectivity of the hardware graphs. This means that one would be able to significantly improve the performance of our algorithm compared to SWAP network approaches by implementing QPUs with additional hardware connectivity.

The run time of our algorithm is limited by the polynomially scaling evaluation of the maximal matching routine <cit.>. In principle, it is also possible to improve upon our algorithm by implementing ever more sophisticated qubit scores.
For example, one could think of treating false swap candidate pairs that have a mutually cancelling effect on the total swap distance 𝒟 on the same footing as single swap candidates (currently the maximal matching is performed only over the latter). Alternatively, one could consider giving a positive contribution to the score of swaps whose unpaired qubits would move in the direction of the closest pair which is not in ℐ_C. Yet another option is to prioritize pairs which are closer to each other by introducing a slight correction to the swap score, since pairs that are further apart are more likely to be replaced in the buffer ℐ_C. One could also further reduce the SWAP counts at the expense of increasing the circuit depth by only accepting +2 swap candidates as long as such candidates are available and only then resorting to +1 candidates. It is important to note at this point, however, that since our algorithm is heuristic in nature, it is possible to keep improving its swap and depth performance arbitrarily at the cost of increased complexity and execution times, until one reaches the optimum at the price of an exponentially scaling algorithm. We believe that, in this sense, further improving our routing algorithm will likely only yield diminishing returns.

We have introduced an efficient qubit routing strategy for QAOA which strikes a balance in simultaneously performing well in terms of SWAP counts as well as total circuit depths. We find the most striking improvement compared to existing methods for relatively sparse graphs on the order of a few hundred qubits. Beyond that, our algorithm stays competitive in the number of SWAP gates, but succumbs to SWAP network algorithms in the metric of total circuit depth. Unlike some alternatives, our algorithm is agnostic to the actual structure of the hardware graph and simply improves with its connectivity. On the other hand, some of the SWAP networks need a specific connectivity, like a chain or a square grid, to be implementable and do not necessarily improve when additional connectivity is available.

In this work, we considered every two-qubit operation to be directly implementable. In reality, however, SWAP gates are often not directly implementable on a QPU, in which case they have to be decomposed, e.g. into a circuit involving three successive CNOT gates. Whilst this does not alter the relative efficiency we found for the different routing algorithms, it adds additional emphasis on the importance of optimizing this part of QAOA. In Ref. <cit.> the authors considered bridge gates, which effectively implement a CNOT gate between next-nearest neighbors in depth four, as an alternative to swapping and implementing the CNOT gate afterwards, which also results in a depth-four circuit. We want to point out that this strategy does not work in the case of QAOA, since the necessary interaction gate here is R_zz. In some QPU architectures it is possible to shuffle qubits around without having to implement any actual SWAP gates, rendering the qubit connectivity effectively all-to-all. However, even then, the shuffling process can take significant time, during which idling qubits can potentially suffer from noise. One can then view qubit routing as a useful means of decreasing these idling times as well as the total run time of algorithms.

Whilst we have designed our method with the QAOA algorithm in mind, it can also be applied as a general circuit optimization strategy.
Here, it is most powerful when given a circuit with mostly commuting gates, as one then has the freedom of selecting the most efficiently implementable interaction gates for the buffer ℐ_C. For non-commuting gates, on the other hand, one is obliged to keep the existing order of gates. Circuit optimization using related concepts has been explored previously in Ref. <cit.>.

§ ACKNOWLEDGMENTS

The authors would like to thank Stephanie Cheylan for useful discussions.
http://arxiv.org/abs/2312.15982v1
{ "authors": [ "Ayse Kotil", "Fedor Simkovic", "Martin Leib" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226102610", "title": "Improved Qubit Routing for QAOA Circuits" }
Multi-scale Progressive Feature Embedding for Accurate NIR-to-RGB Spectral Domain Translation

Xingxing Yang, Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China, [email protected]
Jie Chen, Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China, [email protected]
Zaifeng Yang, Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore, [email protected]

May 14, 2024

==========================================================================================================

NIR-to-RGB spectral domain translation is a challenging task due to its mapping ambiguities, and existing methods show limited learning capacities. To address these challenges, we propose to colorize NIR images via a multi-scale progressive feature embedding network (MPFNet), with the guidance of grayscale image colorization. Specifically, we first introduce a domain translation module that translates NIR source images into the grayscale target domain. By incorporating a progressive training strategy, the statistical and semantic knowledge from both task domains is efficiently aligned with a series of pixel-/feature-level consistency constraints. Besides, a multi-scale progressive feature embedding network is designed to improve learning capabilities. Experiments show that our MPFNet outperforms state-of-the-art counterparts by 2.55 dB in the NIR-to-RGB spectral domain translation task in terms of PSNR.

Near-Infrared image colorization, domain adaptation, Generative Adversarial Network, attention mechanism

§ INTRODUCTION

Near-infrared (NIR) imaging systems can capture unique spectral reflectance details, which are widely used in night-time video surveillance <cit.>, object detection, material analysis <cit.>, and remote sensing systems <cit.>. NIR domain information (780 nm to 2500 nm), despite its unique application value, is neither natural nor efficient for either human or computer vision systems to explore. Consequently, NIR-to-RGB spectral domain translation has become a valuable research topic.

Recent developments in deep learning have brought great advances to image translation tasks such as grayscale image colorization <cit.>. However, the progress of NIR-to-RGB spectral domain translation <cit.> lags behind, for the following reasons:

(1) Mapping Ambiguity. NIR-to-RGB spectral domain translation <cit.> is intrinsically a much more challenging task compared with grayscale colorization due to the mapping ambiguity introduced by the domain gap between the non-overlapping spectral bands, which requires estimation of both luminance and chrominance values (while grayscale image colorization requires only the estimation of the latter).
This causes conventional image-to-image translation paradigms, which use the given ground-truth RGB images as supervision, to be prone to producing monotonous, if not erroneous, predictions, as the optimization process pushes the prediction toward the statistical average <cit.>. We find that most existing methods tend to overlook this aspect <cit.>.

(2) Limited Learning Capability. Existing methods (<cit.>, <cit.>, <cit.>, <cit.>) mostly stack convolutional layers to perform end-to-end prediction in a supervised manner. Some unsupervised <cit.> and semi-supervised (<cit.>, <cit.>) methods even adopt CycleGAN <cit.> to translate the NIR domain to the RGB domain. However, all these methods produce unsatisfactory results because aligning the spectral discrepancy is challenging.

Fortunately, learning-based grayscale image colorization is free of the two above-mentioned issues. As such, in this study, we propose to colorize NIR images with the guidance of grayscale image colorization, using a framework comprised of a domain translation module between NIR and grayscale images and a colorization module, as shown in Fig. <ref>. Specifically, our framework first leverages a domain translation module that translates NIR source images into the grayscale target domain. For the colorization module, the colorization network is pre-trained on grayscale target images and subsequently fine-tuned on translated grayscale images with a series of pixel-level and feature-level consistency constraints to learn and fuse the statistical and semantic knowledge from both task domains. To improve the learning capacity, a multi-scale progressive feature embedding network (MPFNet) is designed.

§ PROPOSED APPROACH

Method Overview. As shown in Fig. <ref>, the proposed framework comprises a domain translation module and a colorization module. The domain translation module translates NIR source images into the grayscale target domain, generating plausible grayscale images X_N2G from the NIR domain. The colorization module F_G first translates grayscale target images X_G into the RGB domain in the pre-training stage, and then translates the plausible grayscale images X_N2G into the RGB domain in the fine-tuning stage. By incorporating the multi-scale feature embedding network as the backbone of both the domain translation module and the colorization module, the statistical and semantic knowledge from both the NIR and grayscale domains is efficiently fused with a series of pixel-/feature-level consistency constraints.

§.§ Multi-scale Feature Embedding Network

Direct domain translation from NIR to RGB in a single path, as done in previous works <cit.>, produces unstable results both visually and semantically (illustrated in Fig. <ref>). In contrast, we propose a multi-scale encoder-decoder architecture for both NIR-to-grayscale domain translation and grayscale-to-RGB colorization. A schematic diagram of the system is shown in Fig. <ref>(a). The system breaks down the challenging domain mapping problem into sub-tasks, each focusing on a different resolution.
At each scale, an encoder-decoder feature embedding block (FEB) is designed to learn contextual features (Fig. <ref>(c)), followed by a supervised color-consistency module (SCCM) <cit.> which generates predictions under the supervision of the ground-truth RGB images. Cross-scale skip connections are designed to propagate and fuse contextual details from lower resolutions to higher ones. To improve the learning capacity and highlight spatial image details, a residual coordinate attention block (RCAB) is introduced (Fig. <ref>(b)) and embedded into the FEBs.

Specifically, as shown in Fig. <ref>(c), the FEBs adopt UNet <cit.> as the backbone to embed both the contextual and textural features crucial for spectrum translation. Each encoder block consists of a leaky ReLU activation module and a stride-2 4×4 convolution layer to downsample the feature map. Then, the residual coordinate attention block is designed to explore the spatial and channel feature correlations, followed by an instance normalization layer <cit.> for promoting color style diversity <cit.>. The decoder block resembles the encoder but with a stride-2 4×4 transposed convolution layer for feature upsampling. The bottleneck block contains only a 3×3 convolution layer without an RCAB, since the feature map is coarse and the attention mechanism becomes redundant. Different from <cit.> and <cit.>, which perform feature re-weighting in the spatial and channel dimensions respectively, RCAB (shown in Fig. <ref>(b)) simultaneously captures both channel correlation and accurate position-sensitive information (via x- and y-axis average pooling).

§.§ Objective Functions

Our objective contains two terms: Domain Translation Losses to match the distributions of the NIR and grayscale domains, and Colorization Losses to learn the mapping from the grayscale domain to the RGB domain.

Domain Translation Losses. Considering that we have both paired NIR-grayscale images and unpaired grayscale images in our dataset, we adopt a CycleGAN <cit.> paradigm to train the domain translation module. The domain translation loss is defined as:

ℒ_tran = ℒ_GAN^img(X_N,X_G,D_N^img,G_G2N) + ℒ_GAN^img(X_N,X_G,D_G^img,G_N2G) + λ_1ℒ_cyc + λ_2ℒ_idt,

where ℒ_GAN^img, ℒ_cyc, and ℒ_idt are all defined in <cit.>, and λ_1 and λ_2 are hyperparameters, which we empirically set to 1.

Colorization Losses. We employ an L_mix loss <cit.> at every scale s=1,2,…,S, combining an SSIM loss and an L1 loss, to formulate a supervised consistency constraint on both the pixel and feature levels. It consists of two parts: during the pre-training stage, we only use the original target images X_G to train the colorization module, while during the fine-tuning stage, we use the translated target images X_N2G as inputs:

ℒ_pp = ∑_s=1^3 L_mix(Y_G^s, F_G^s(X_G)),
ℒ_pf = ∑_s=1^3 L_mix(Y_N^s, F_G^s(X_N2G)),

where Y_G and Y_N denote the ground truths of the grayscale images and NIR images, respectively. Additionally, in the fine-tuning stage, we introduce another discriminator D_G^feat that further aligns the feature distributions of X_N2G and X_G in the RGB domain:

ℒ_GAN^feat(X_N,X_G,D_G^feat,G_N2G,F_G) = 𝔼_x_n∼ X_N[D_G^feat(F_G(G_N2G(x_n)))] + 𝔼_x_g∼ X_G[D_G^feat(F_G(x_g))-1].

Total Loss. The full objective function in the fine-tuning stage is expressed as follows:

ℒ = ℒ_pf + λ_1ℒ_tran + λ_2ℒ_GAN^feat,

where λ_1 and λ_2 are hyperparameters, which we empirically set to 1.
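To make the multi-scale supervision concrete, a minimal sketch of an L_mix-style objective is given below. The simplified pooling-based SSIM, the window size, the blending weight alpha, and the assumption of inputs scaled to [0,1] are our own choices rather than the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def ssim_simple(x, y, C1=0.01**2, C2=0.03**2, win=7):
    # Local statistics via average pooling (a simplified, window-based SSIM).
    p = win // 2
    mu_x, mu_y = F.avg_pool2d(x, win, 1, p), F.avg_pool2d(y, win, 1, p)
    var_x = F.avg_pool2d(x * x, win, 1, p) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, p) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, p) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return s.mean()

def l_mix(pred, target, alpha=0.84):
    # Blend of SSIM and L1 terms, one call per scale s.
    return alpha * (1 - ssim_simple(pred, target)) + (1 - alpha) * F.l1_loss(pred, target)

def multiscale_loss(preds, targets):
    # preds/targets: lists of tensors over the scales s = 1..3.
    return sum(l_mix(p, t) for p, t in zip(preds, targets))
```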
§ EXPERIMENTAL RESULTS

§.§ Implementation and Training Details

We used the VCIP 2020 Grand Challenge NIR-to-RGB translation dataset for both training and testing. Specifically, there are 372 NIR-RGB image pairs in the training dataset and another 28 pairs for testing. We employed data augmentation by scaling, mirroring, random-size cropping, and contrast adjustment. Quantitative comparisons were performed using the Peak Signal-to-Noise Ratio (PSNR), Angular Error (AE), Structural Similarity (SSIM) <cit.>, and LPIPS <cit.>. All of the training images have the same size (256×256) and are normalized to the range (-1, 1). Firstly, we trained the domain translation module for 400 epochs with a learning rate l_tran = 1×10^-4. Next, we trained our colorization network F_G on X_G for 250 epochs with a learning rate l_c = 1×10^-4. At last, we fine-tuned the whole network using the above pre-trained models. The batch size was set to 10.

§.§ Comparison with NIR Colorization Methods

In this section, we quantitatively and qualitatively compare our MPFNet with ATCycleGAN <cit.>, NIR-GNN <cit.>, MFF <cit.>, and SST <cit.>. As shown in Table <ref>, our method outperforms all existing methods. In particular, compared to ATCycleGAN (the 1st runner-up of the VCIP 2020 Grand Challenge on NIR-to-RGB spectral domain translation), our method obtains performance gains of 2.55 dB and 0.04 in terms of PSNR and SSIM, respectively. For visual comparison, we randomly select five images from the test set and illustrate them in Fig. <ref>. As can be seen, our method can generate contextually natural colorization results. The predicted results are closer to the style of the ground-truth RGB images while retaining more of the texture information of the NIR input images (e.g., the mountain in the third row).

To further validate the performance and generalization ability of our framework, we retrained both our network and ATCycleGAN <cit.> using the EPFL dataset <cit.>, which has 477 NIR-RGB image pairs in total. Meanwhile, we also compared with the reported results of DualGAN <cit.>, which is trained and evaluated on the same dataset. Note that the EPFL dataset has more complicated scene categories than the VCIP dataset, involving urban, water, street, and old-building scenes, among others. The qualitative and quantitative results on the test set are shown in Fig. <ref> and Table <ref>, respectively. Clearly, our method still outperforms these two methods by a large margin.

§ ACKNOWLEDGEMENT

This research is supported by A*STAR C222812026.

§ CONCLUSION

In this work, we have proposed a multi-scale progressive feature embedding framework for NIR-to-RGB spectral domain translation. Since the mapping relationship of NIR-to-RGB translation is very implicit for models to capture, while grayscale-to-RGB colorization is more explicit, we propose a domain translation module that translates NIR source images into the grayscale target domain, which significantly relieves the mapping ambiguity. To further improve the learning capacity, we have proposed a residual coordinate attention block to highlight objects of interest. Experiments show that our model achieves significant performance gains on the NIR-to-RGB spectral domain translation task.
http://arxiv.org/abs/2312.16040v1
{ "authors": [ "Xingxing Yang", "Jie Chen", "Zaifeng Yang" ], "categories": [ "cs.CV", "eess.IV" ], "primary_category": "cs.CV", "published": "20231226130745", "title": "Multi-scale Progressive Feature Embedding for Accurate NIR-to-RGB Spectral Domain Translation" }
Brunnian planar braids and simplicial groups

Mahender Singh

============================================

We propose and analyze discontinuous Galerkin (dG) approximations to 3D-1D coupled systems which model diffusion in a 3D domain containing a small inclusion reduced to its 1D centerline. Convergence to weak solutions of a steady state problem is established via deriving a posteriori error estimates and bounds on residuals defined with suitable lift operators. For the time dependent problem, a backward Euler dG formulation is also presented and analysed. Further, we propose a dG method for networks embedded in 3D domains, which is, up to jump terms, locally mass conservative on bifurcation points. Numerical examples in idealized geometries portray our theoretical findings, and simulations in realistic 1D networks show the robustness of our method.

Key words. 3D-1D coupled models; discontinuous Galerkin methods; 1D vessel networks

MSC codes. 65N30, 65M60.

§ INTRODUCTION

Modeling physiological processes that involve flow and transport within a complex network of vessel-like structures embedded in a 3D domain is crucial. Examples of such processes include drug transport in vascularized tissue <cit.> and solute clearance through the lymphatic vessels of the body <cit.> and through the glymphatic system of the brain <cit.>. This modeling setup has applications not only in physiology but also in areas such as geosciences <cit.>.

To account for a complex network of vessels that typically have a small diameter compared to the surrounding domain, topological model reduction techniques have been proposed <cit.>. Such models reduce the equations posed in 3D vessels to 1D equations posed on their centerlines. These 1D equations are then suitably coupled to extended 3D equations in the surroundings. Thereby, 3D-1D models reduce the computational cost while providing a reliable approximation to the full dimensional system. Bounds on the modeling error induced by such a derivation, in terms of the vessel diameter, are derived for the time dependent convection-diffusion 3D-1D problem in <cit.>, for the steady state diffusion 3D-1D problem in <cit.>, and for the steady state 2D-0D problem in <cit.>.

We remark that the 3D-1D model derived in <cit.> and further extended in <cit.> naturally uses the lateral average as a way to restrict 3D functions to 1D inclusions. This differs from the models presented in <cit.>, where 1D traces of 3D functions are used; therefore, the functional setting involves special weighted Hilbert spaces. The latter models can generally be viewed as elliptic problems with Dirac line sources. Several finite element schemes have been proposed and analyzed for this class of problems. In addition to the continuous Galerkin method introduced in <cit.> and further analyzed in <cit.>, we mention the singularity removal method <cit.>, the mixed approach <cit.>, the interior penalty dG method <cit.>, and the Lagrange multiplier approach <cit.>. It is worth noting that the papers <cit.> only analyze the 3D problem and assume a given 1D source term.

For the 3D-1D problem where the restriction operator is realized via lateral averages, the continuous Galerkin method is analyzed in <cit.>, providing error estimates in energy norms. To the best of our knowledge, a discontinuous Galerkin method for the coupled 3D-1D system and its analysis are novel. DG approximations have several favorable features, such as the local mass conservation property <cit.>.
In addition, with dG approximations, local mesh refinement and local high order approximation are easily handled since there are no continuity requirements. The analysis of dG for the coupled 3D-1D problem requires non-standard arguments, as the strong consistency of the method cannot be assumed. This stems from the observation that the 3D solution does not belong to H^3/2+η(Ω) for any positive η, the natural Sobolev space for the interior penalty dG bilinear form. We now summarize the main contributions of the paper and give an outline of the contents.

* We propose an interior penalty dG method for the coupled 3D-1D problem, and we prove convergence to weak solutions. The main result is given in Theorem <ref>.
* We derive error estimates for regular meshes in Corollary <ref> and for graded meshes in Corollary <ref>. The second estimate shows that if the mesh is resolved near the boundary of the inclusion, then almost optimal error rates are recovered.
* We analyze a backward Euler dG discretization for the time dependent problem by introducing a suitable interpolant based on the elliptic projection. The main result is in Theorem <ref>.
* For vessel networks embedded in a 3D domain, we propose a dG method with a hybridization technique on bifurcation points. Up to jump terms, this method preserves conservation of mass on such junctions, see Section <ref>. We show the well-posedness of this dG formulation.

The rest of this article is organized as follows. Sections <ref> and <ref> introduce the model problem and the dG approximation, respectively. The error analysis for the steady state problem is included in Section <ref>. We analyze a backward Euler dG method for the transient 3D-1D model in Section <ref>. The case of a vessel network inside a 3D domain is studied in Section <ref>. In Section <ref>, we provide numerical examples for manufactured solutions in a 3D-1D setting, for 1D vessel networks, and for realistic 1D networks in 3D tissue. Conclusions follow in Section <ref>.

§ MODEL PROBLEM

§.§ Notation

Given an open domain O ⊂ ℝ^d, d ∈ {1,2,3}, the usual L^2 inner product of two real functions f and g is denoted by (f,g)_O. Let L^2(O) be the Hilbert space with inner product (·,·)_O and the usual induced norm ‖·‖_L^2(O). We drop the subscript when O = Ω and write ‖·‖ = ‖·‖_L^2(Ω) and (f,g) = (f,g)_Ω. Recall the notation of the standard Sobolev spaces W^m,p(O) and H^m(O) = W^m,2(O) for m ∈ ℕ and 1 ≤ p ≤ ∞. For a given weight w ∈ L^∞(O) with w > 0 a.e. in O, the weighted L^2 inner product is given by (f,g)_L^2_w(O) = (f,wg)_O, with the respective weighted space:

‖f‖_L^2_w(O) = ‖w^1/2 f‖_L^2(O),  L^2_w(O) = {f: O → ℝ | ‖f‖_L^2_w(O) < ∞}.

Similarly, the weighted Hilbert space H^1_w(O) is given by H^1_w(O) = {f ∈ L^2_w(O) | ‖∇f‖_L^2_w(O) < ∞}, where the weighted inner product and norm are

(f,g)_H^1_w(O) = (f,g)_L^2_w(O) + (∇f, ∇g)_L^2_w(O),  ‖f‖_H^1_w(O)^2 = ‖f‖_L^2_w(O)^2 + ‖∇f‖_L^2_w(O)^2.

We omit the subscript/weight w when w = 1. Throughout the paper, we denote by C a generic constant independent of the mesh parameters. We use the standard notation A ≲ B for A ≤ CB, and A ≈ B for A ≤ CB and B ≤ CA.

§.§ The 3D-1D model

Let Ω ⊂ ℝ^3 be a bounded domain with a one dimensional inclusion Λ. We assume that Λ is parametrized by λ(s), s ∈ [0,L], and is strictly included in Ω, that λ is C^2 regular, and (for simplicity) that |λ'(s)| = 1, so that the arc length and s coincide. We further define B_Λ as a generalized cylinder with centerline Λ. The boundary of B_Λ will be denoted by Γ.
See Figure <ref> for an illustration of the considered geometry. A cross-section of B_Λ at s ∈ [0,L] is denoted by Θ(s), with area A(s) and perimeter P(s). We assume that there are positive constants a_0, a_1 such that a_0 ≤ A(s)+P(s) ≤ a_1 for all s. We also assume that A belongs to 𝒞^1([0,L]). For a function u ∈ L^1(∂Θ(s)), we define the lateral average u̅ as

u̅(s) = 1/P(s) ∫_∂Θ(s) u.

The 3D-1D model that we consider results from the reduction of a 3D-3D model posed in Ω∖B_Λ and in B_Λ with Robin type interface conditions on Γ. This condition models the membrane Γ as semi-permeable with permeability constant ξ > 0. Averaging the equations in B_Λ and formally extending the equations in Ω∖B_Λ, one obtains the following coupled system:

-Δu + ξ(u̅ - û)δ_Γ = f,  in Ω,
-d_s(A d_s û) + Pξ(û - u̅) = A f̂,  in Λ.

We refer to <cit.> for details on the derivation and on the model error analysis. The source terms f ∈ L^2(Ω) and f̂ ∈ L^2_A(Λ) are given. The above equations are to be understood in the weak sense, and the functional (u̅ - û)δ_Γ is defined over H^1(Ω) as

(u̅ - û)δ_Γ(v) = ∫_Λ P(u̅ - û)v̅,  ∀ v ∈ H^1(Ω).

The above functional is well-defined, since an application of Cauchy-Schwarz's inequality and the trace theorem yields

‖v̅‖_L^2_P(Λ) ≤ ‖v‖_L^2(Γ) ≲ ‖v‖_H^1(Ω),  ∀ v ∈ H^1(Ω).

The system (<ref>)-(<ref>) is complemented by the following boundary conditions:

u = 0 on ∂Ω,  and  A d_s û = 0 on s ∈ {0,L}.

To introduce the weak formulation of (<ref>)-(<ref>), we define the following bilinear forms:

a(u,v) = (∇u, ∇v),  ∀ u, v ∈ H^1(Ω),
a_Λ(û, v̂) = (d_s û, d_s v̂)_L^2_A(Λ),  ∀ û, v̂ ∈ H^1_A(Λ),
b_Λ(v̂, ŵ) = (ξ v̂, ŵ)_L^2_P(Λ),  ∀ v̂, ŵ ∈ L^2_P(Λ).

The weak formulation of the coupled 3D-1D problem then reads <cit.>: Find u = (u,û) ∈ H^1_0(Ω) × H^1_A(Λ) such that

a(u,v) + b_Λ(u̅ - û, v̅) = (f,v)_Ω,  ∀ v ∈ H^1_0(Ω),
a_Λ(û,v̂) + b_Λ(û - u̅, v̂) = (f̂,v̂)_L^2_A(Λ),  ∀ v̂ ∈ H^1_A(Λ).

The terms in b_Λ model the coupling between the 3D solution u and the 1D solution û. Equivalently, one can write the above as follows. Find u = (u,û) ∈ H^1_0(Ω) × H^1_A(Λ) such that

𝒜(u,v) = (f,v)_Ω + (f̂,v̂)_L^2_A(Λ),  ∀ v = (v,v̂) ∈ H^1_0(Ω) × H^1_A(Λ),

where we define, for u = (u,û), v = (v,v̂) ∈ H^1_0(Ω) × H^1_A(Λ),

𝒜(u,v) = a(u,v) + a_Λ(û,v̂) + b_Λ(u̅ - û, v̅ - v̂).

The problem given in (<ref>) is well-posed, see <cit.>.

§ DISCONTINUOUS GALERKIN FORMULATION

§.§ Meshes and DG spaces

We consider a family of regular partitions of Ω made of tetrahedra and denoted by 𝒯_Ω^h. The mesh size is h = max_{K∈𝒯_Ω^h} h_K, where h_K = diam(K). We associate with 𝒯_Ω^h the space H^1(𝒯_Ω^h) of broken H^1 functions in Ω, and a finite dimensional space 𝕍_h^Ω of broken piecewise polynomials of order k_1:

𝕍_h^Ω = {v_h ∈ L^2(Ω), v_h ∈ ℙ_k_1(K), ∀ K ∈ 𝒯_Ω^h}.

Similarly, we let 𝒯_Λ^h = {(s_i-1,s_i), i = 1,…,N}, with s_0 = 0 and s_N = L, be a family of uniform partitions of Λ with mesh size h_Λ = s_i - s_i-1. We let H^1(𝒯_Λ^h) be the space of broken H^1 functions in Λ, and we let 𝕍_h^Λ be the respective space of broken piecewise polynomials of order k_2:

𝕍_h^Λ = {v̂_h ∈ L^2((0,L)), v̂_h ∈ ℙ_k_2((s_i-1,s_i)), 1 ≤ i ≤ N}.

For each 1 ≤ i ≤ N, we let B_i be the portion of B_Λ obtained when s is restricted to Λ_i = (s_i-1,s_i); that is, we have ⋃_{1≤i≤N} B_i = B_Λ. For each 1 ≤ i ≤ N, we now define neighborhoods of [s_i-1,s_i] consisting of 3D elements in 𝒯_Ω^h, namely

ω_i = {K ∈ 𝒯_Ω^h, K ∩ ∂B_i ≠ ∅}.

We can then write

𝒯_B^h := {K ∈ 𝒯_Ω^h, K ∩ ∂B_Λ ≠ ∅} = ⋃_{1≤i≤N} ω_i.

We assume that if K ∩ B_Λ ≠ ∅, then the two-dimensional Lebesgue measure of ∂K ∩ ∂B_Λ is zero.
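As an aside on the realization of the averages: u̅ can be approximated by quadrature on each cross-section perimeter, without meshing Γ. The following is a minimal sketch assuming, for illustration only, circular cross-sections lying in planes orthogonal to the z-axis (all names are ours):

```python
import numpy as np

def lateral_average(u, center, R, n=64):
    """Approximate (1/P) * integral of u over a circle of radius R around
    `center` by the composite midpoint rule (the mean of the samples)."""
    t = 2 * np.pi * (np.arange(n) + 0.5) / n
    pts = center + R * np.stack([np.cos(t), np.sin(t), np.zeros(n)], axis=1)
    return np.mean([u(p) for p in pts])

u = lambda x: x[0] ** 2 + x[1] ** 2          # u = r^2: its average on the circle is R^2
print(lateral_average(u, np.array([0.0, 0.0, 0.5]), R=0.1))  # approx. 0.01
```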
The above measure-zero assumption ensures that the average operator u̅ given in (<ref>) is well defined over H^1(𝒯_Ω^h). Further, we assume the following relation between the level of refinement of the 3D domain and that of the 1D domain. For K ∈ 𝒯_B^h, let ℐ_K be the set of integers i_0 such that K ∈ ω_i_0. We assume that the cardinality of ℐ_K is bounded above by a constant independent of K and of h. Essentially, this assumption means that the 3D mesh intersecting ∂B_Λ cannot remain fixed while the 1D mesh is refined.

We also denote by Γ_h the set of all interior faces in 𝒯_Ω^h. The set of faces belonging to elements in 𝒯_B^h is decomposed by defining

Γ_i = {F ∈ Γ_h, F ⊂ ∂K, where K ∈ ω_i}.

For each interior face F, we associate a unit normal vector n_F, and we denote by K_F^1 and K_F^2 the two elements that share F, with n_F pointing from K_F^1 to K_F^2. We denote the average and the jump of a function v_h ∈ 𝕍_h^Ω by {v_h} and [v_h] respectively:

{v_h} = (1/2)(v_h|_K_F^1 + v_h|_K_F^2),  [v_h] = v_h|_K_F^1 - v_h|_K_F^2,  ∀ F ∈ Γ_h.

If F = ∂Ω ∩ ∂K_F^1, then the average and the jump are given by [v] = {v} = v|_K_F^1. The area of F ∈ Γ_h ∪ ∂Ω is denoted by |F|. Similar definitions are adopted for the jumps and averages of v̂_h ∈ 𝕍_h^Λ at the nodes s_i:

[v̂_h]_s_i = lim_{t→0, t>0} v̂_h(s_i - t) - lim_{t→0, t>0} v̂_h(s_i + t),  1 ≤ i ≤ N-1,
{v̂_h}_s_i = (1/2) lim_{t→0, t>0} v̂_h(s_i + t) + (1/2) lim_{t→0, t>0} v̂_h(s_i - t),  1 ≤ i ≤ N-1,
[v̂_h]_s_0 = -v̂_h(s_0),  [v̂_h]_s_N = v̂_h(s_N).

For v ∈ H^1(𝒯_Ω^h), we define the norm, for σ_Ω > 0,

‖v‖_dG,Ω^2 = ∑_{K∈𝒯_Ω^h} ‖∇v‖_L^2(K)^2 + ∑_{F∈Γ_h∪∂Ω} (σ_Ω/|F|^1/2) ‖[v]‖^2_L^2(F).

A Poincaré inequality holds in H^1(𝒯_Ω^h) (see e.g. <cit.>):

‖v‖_L^2(Ω) ≲ ‖v‖_dG,Ω,  ∀ v ∈ H^1(𝒯_Ω^h).

For v̂ ∈ H^1(𝒯_Λ^h), we define the semi-norm, for σ_Λ > 0,

|v̂|_dG,Λ^2 = ∑_{i=1}^N ‖d_s v̂‖_L^2((s_i-1,s_i))^2 + ∑_{i=1}^{N-1} (σ_Λ/h_Λ) [v̂]_s_i^2.

The above definitions allow us to introduce the norm ‖·‖_DG on H^1(𝒯_Ω^h) × H^1(𝒯_Λ^h). For v = (v,v̂),

‖v‖_DG^2 = ‖v‖_dG,Ω^2 + |v̂|_dG,Λ^2 + ‖v̅ - v̂‖^2_L^2_P(Λ).

The above indeed defines a norm: if ‖v‖_DG = 0, then v = 0, since ‖·‖_dG,Ω is a norm on H^1(𝒯_Ω^h). This implies that ‖v̂‖_L^2_P(Λ) = 0 and v̂ = 0.

§.§ The numerical method

We use interior penalty discontinuous Galerkin forms <cit.>. Define a_h(·,·): 𝕍_h^Ω × 𝕍_h^Ω → ℝ:

a_h(u,v) = ∑_{K∈𝒯_Ω^h} ∫_K ∇u·∇v - ∑_{F∈Γ_h∪∂Ω} ∫_F {∇u}·n_F [v] + ϵ_1 ∑_{F∈Γ_h∪∂Ω} ∫_F {∇v}·n_F [u] + ∑_{F∈Γ_h∪∂Ω} (σ_Ω/|F|^1/2) ∫_F [u][v].

For the 1D discrete solution, we introduce the form a_Λ,h(·,·): 𝕍_h^Λ × 𝕍_h^Λ → ℝ:

a_Λ,h(û,v̂) = ∑_{i=1}^N ∫_{s_i-1}^{s_i} A (dû/ds)(dv̂/ds) - ∑_{i=1}^{N-1} {A dû/ds}_s_i [v̂]_s_i + ϵ_2 ∑_{i=1}^{N-1} {A dv̂/ds}_s_i [û]_s_i + ∑_{i=1}^{N-1} (σ_Λ/h_Λ) [û]_s_i [v̂]_s_i.

In the above, ϵ_1, ϵ_2 ∈ {-1,0,1} lead to symmetric, incomplete, or non-symmetric discretizations, and σ_Λ, σ_Ω > 0 are penalty parameters. The dG formulation of problem (<ref>) then reads as follows. Find u_h = (u_h,û_h) ∈ 𝕍_h^Ω × 𝕍_h^Λ such that

𝒜_h(u_h,v_h) = (f,v_h)_Ω + (f̂,v̂_h)_L^2_A(Λ),  ∀ v_h = (v_h,v̂_h) ∈ 𝕍_h^Ω × 𝕍_h^Λ,

where we define the form 𝒜_h(·,·): (𝕍_h^Ω × 𝕍_h^Λ)^2 → ℝ:

𝒜_h(u_h,v_h) = a_h(u_h,v_h) + a_Λ,h(û_h,v̂_h) + b_Λ(u̅_h - û_h, v̅_h - v̂_h).

It is important to note that the interface Γ does not need to be resolved by the mesh to realize the coupling term b_Λ; identifying the elements intersecting Γ is sufficient.
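To fix ideas on the structure of a_Λ,h, the following minimal sketch (our own naming; it assumes A ≡ 1, piecewise linear elements, ϵ_2 = -1, and a uniform grid) assembles the corresponding 1D interior penalty matrix:

```python
import numpy as np

def assemble_a_lambda(N, L=1.0, sigma=10.0, eps=-1.0):
    """1D interior penalty dG matrix for -d_s(d_s u), P1 elements, A = 1,
    natural ends: elementwise volume terms plus interior face terms."""
    h = L / N
    K = np.zeros((2 * N, 2 * N))
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(N):                       # volume term: int_e u' v'
        K[2*e:2*e+2, 2*e:2*e+2] += ke
    for i in range(1, N):                    # interior node s_i
        dofs = [2*i - 2, 2*i - 1, 2*i, 2*i + 1]
        J = np.array([0.0, 1.0, -1.0, 0.0])            # jump [u] at s_i
        G = np.array([-1.0, 1.0, -1.0, 1.0]) / (2*h)   # average {u'} at s_i
        # -{u'}[v] + eps*{v'}[u] + (sigma/h)[u][v], with eps playing the role of eps_2
        face = -np.outer(J, G) + eps * np.outer(G, J) + (sigma / h) * np.outer(J, J)
        K[np.ix_(dofs, dofs)] += face
    return K

print(np.allclose(assemble_a_lambda(4), assemble_a_lambda(4).T))  # True for eps = -1
```

For ϵ_2 = -1 the assembled matrix is symmetric, in line with the symmetric variant of the scheme.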
To show the well-posedness of the discrete dG formulation, we first show the coercivity of 𝒜_h with respect to the norm defined in (<ref>). For suitably chosen penalty parameters σ_Λ and σ_Ω, there exists a constant C_coerc such that

𝒜_h(u_h,u_h) ≥ C_coerc ‖u_h‖_DG^2,  ∀ u_h ∈ 𝕍_h^Ω × 𝕍_h^Λ.

From the standard analysis of dG methods, if σ_Ω is large enough whenever ϵ_1 = -1, or for any σ_Ω when ϵ_1 = 1 (the same conditions apply for σ_Λ and ϵ_2), we have

a_h(u_h,u_h) ≥ C_1 ‖u_h‖_dG,Ω^2,  a_Λ,h(û_h,û_h) ≥ C_2 |û_h|_dG,Λ^2.

The result then immediately follows from the above and from the definition of 𝒜_h(·,·).

There exists a unique pair (u_h,û_h) ∈ 𝕍_h^Ω × 𝕍_h^Λ solving (<ref>). From the coercivity property, it easily follows that the solution is unique. Since this is a square linear system in finite dimensions, existence follows.

§ ERROR ANALYSIS

The main difficulty in the error analysis of the dG formulation is that the strong consistency of the method cannot be assumed. Indeed, under sufficient regularity assumptions on the domain, one can only show that the 3D solution u of (<ref>) belongs to H^3/2-η(Ω) for η > 0 <cit.>. However, the form a_h cannot be extended to this space, since the traces of gradients of functions in H^3/2-η(Ω) are not well defined. Therefore, we adopt here a combination of a priori and a posteriori error estimates within the framework proposed by Gudi <cit.> to prove convergence. The main result is provided in Theorem <ref>.

§.§ Preliminary lemmas

We first introduce the conforming space 𝕍_h,c^Ω ⊂ H^1_0(Ω) of continuous piecewise linear functions defined over 𝒯_Ω^h. Similarly, we let 𝕍_h,c^Λ ⊂ H^1(Λ) be the respective space of continuous piecewise linear functions defined over 𝒯_Λ^h.

There exists an enriching map E = (E,Ê): 𝕍_h^Ω × 𝕍_h^Λ → 𝕍_h,c^Ω × 𝕍_h,c^Λ such that

|Ev|_H^1(Ω) ≲ ‖v‖_dG,Ω,  |Êv̂|_H^1(Λ) ≲ |v̂|_dG,Λ,
(∑_{K∈𝒯_Ω^h} h_K^-2 ‖Ev - v‖^2_L^2(K))^1/2 ≲ ‖v‖_dG,Ω,  ‖Êv̂ - v̂‖_L^2(Λ) ≲ h_Λ |v̂|_dG,Λ.

An enriching map with the above properties can be constructed as a nodal Lagrange interpolant with nodal values taken as averages of v (resp. v̂), see <cit.> and <cit.>. Another approach is to apply a Scott-Zhang interpolant to a Crouzeix-Raviart correction, see <cit.>.

We now define L^2 projections. Let K ∈ 𝒯_Ω^h and Λ_i = (s_i-1,s_i) for i ∈ {1,…,N}. For any (w,ŵ) ∈ L^2(K) × L^2(Λ_i), define (π_h w, π̂_h ŵ) ∈ ℙ^k_1(K) × ℙ^k_2(Λ_i) such that

(π_h w - w, v_h)_K = 0,  ∀ v_h ∈ ℙ^k_1(K),
(π̂_h ŵ - ŵ, v̂_h)_L^2_P(Λ_i) = 0,  ∀ v̂_h ∈ ℙ^k_2(Λ_i).

Let s ∈ {0,…,k+1}, m ∈ {0,…,s}, K ∈ 𝒯_Ω^h, and Λ_i = (s_i-1,s_i) for i ∈ {1,…,N}. Assume that w ∈ H^s(K) and ŵ ∈ H^s(Λ_i). Then,

‖π_h w - w‖_H^m(K) ≲ h_K^{s-m} ‖w‖_H^s(K),  ‖π̂_h ŵ - ŵ‖_H^m(Λ_i) ≲ h_Λ^{s-m} ‖ŵ‖_H^s(Λ_i).

In addition, the L^2 projection is stable in the dG norm. Namely,

‖π_h w‖_dG,Ω ≲ ‖w‖_H^1(𝒯_Ω^h),  ∀ w ∈ H^1(𝒯_Ω^h).

Proofs of the estimates in (<ref>) can be found in <cit.>. The proof of (<ref>) follows from applications of trace estimates and (<ref>).

We now state a local trace inequality on ∂B_Λ; the proof of the following estimate is due to Wu and Xiao <cit.>. There exists a constant h_0 such that for all h ≤ h_0 and K ∈ 𝒯_B^h, the following estimates hold:

‖v‖_L^2(∂B_Λ∩K) ≲ h_K^-1/2 ‖v‖_L^2(K) + h_K^1/2 ‖∇v‖_L^2(K),  ∀ v ∈ H^1(K),
‖v_h‖_L^2(∂B_Λ∩K) ≲ h_K^-1/2 ‖v_h‖_L^2(K),  ∀ v_h ∈ ℙ_k(K), ∀ k ≥ 1.

Hereinafter, we assume that h ≤ h_0. We now show a global trace inequality. For u ∈ H^1(𝒯_Ω^h), there holds

‖u̅‖_L^2_P(Λ) ≲ ‖u‖_dG,Ω.

We start by showing the result for v_h ∈ 𝕍_h^Ω. Let Λ_i = (s_i-1,s_i). We use the triangle and Cauchy-Schwarz inequalities to obtain that, for any 1 ≤ i ≤ N,

‖v̅_h‖_L^2_P(Λ_i) ≤ ‖v̅_h - (Ev_h)‾‖_L^2_P(Λ_i) + ‖(Ev_h)‾‖_L^2_P(Λ_i) ≤ ‖v_h - Ev_h‖_L^2(∂B_i) + ‖(Ev_h)‾‖_L^2_P(Λ_i).

Recall the definition of ω_i in (<ref>) and note that

‖v_h - Ev_h‖_L^2(∂B_i)^2 = ∑_{K∈ω_i} ‖v_h - Ev_h‖^2_L^2(∂B_i∩K̅).

With (<ref>), we obtain that

‖v_h - Ev_h‖_L^2(∂B_i)^2 ≲ ∑_{K∈ω_i} h_K^-1 ‖v_h - Ev_h‖_L^2(K)^2.
Summing over i and using the global bound (<ref>) yield

∑_{i=1}^N ‖v_h - Ev_h‖_L^2(∂B_i)^2 ≲ ∑_{K∈𝒯_B^h} h_K^-1 ‖v_h - Ev_h‖_L^2(K)^2 ≲ h ‖v_h‖_dG,Ω^2.

Therefore, with (<ref>), we obtain

‖v̅_h‖_L^2_P(Λ)^2 = ∑_{i=1}^N ‖v̅_h‖_L^2_P(Λ_i)^2 ≲ h ‖v_h‖_dG,Ω^2 + ‖(Ev_h)‾‖_L^2_P(Λ)^2 ≲ h ‖v_h‖_dG,Ω^2 + ‖Ev_h‖_H^1(Ω)^2.

With Poincaré's inequality (<ref>) and the properties (<ref>)-(<ref>) of E, we obtain the bound

‖Ev_h‖_H^1(Ω)^2 ≤ ‖Ev_h - v_h‖_L^2(Ω)^2 + ‖v_h‖^2_L^2(Ω) + |Ev_h|_H^1(Ω)^2 ≲ ‖v_h‖^2_dG,Ω.

Substituting the above in (<ref>) yields (<ref>) for v_h ∈ 𝕍_h^Ω. Consider now u ∈ H^1(𝒯_Ω^h) and recall that π_h u is the local L^2 projection onto 𝕍_h^Ω. Then, by Cauchy-Schwarz's inequality and (<ref>), we have

‖u̅ - (π_h u)‾‖^2_L^2_P(Λ_i) ≤ ∑_{K∈ω_i} ‖u - π_h u‖^2_L^2(∂B_i∩K) ≲ ∑_{K∈ω_i} (h_K^-1 ‖u - π_h u‖^2_L^2(K) + h_K ‖∇(u - π_h u)‖^2_L^2(K)) ≲ ∑_{K∈ω_i} h_K ‖u‖^2_H^1(K).

In the above, we used the properties of the L^2 projection given in (<ref>). Then, using the triangle inequality and (<ref>) for 𝕍_h^Ω, we obtain

‖u̅‖_L^2_P(Λ) ≤ ‖u̅ - (π_h u)‾‖_L^2_P(Λ) + ‖(π_h u)‾‖_L^2_P(Λ) ≲ h^1/2 (∑_{K∈𝒯_Ω^h} ‖u‖^2_H^1(K))^1/2 + ‖π_h u‖_dG,Ω.

The result is concluded by Poincaré's inequality (<ref>) and the stability of the L^2 projection π_h in the ‖·‖_dG,Ω norm, see (<ref>).

A consequence of (<ref>) and the triangle inequality is the following bound:

∀ u = (u,û) ∈ H^1(𝒯_Ω^h) × H^1(𝒯_Λ^h),  ‖û‖_L^2_P(Λ) ≲ ‖u‖_DG.

We will make use of lift operators. For a given (u,û) ∈ H^1(𝒯_Ω^h) × H^1(𝒯_Λ^h), define (L_h u, L̂_h û) ∈ 𝕍_h^Ω × 𝕍_h^Λ such that

(L_h u, w_h)_Ω + (L̂_h û, ŵ_h)_L^2_P(Λ) = b_Λ(u̅ - û, w̅_h - ŵ_h),  ∀ (w_h,ŵ_h) ∈ 𝕍_h^Ω × 𝕍_h^Λ.

The existence of (L_h u, L̂_h û) easily follows from uniqueness. We show the following estimate. Given (u,û) ∈ H^1(𝒯_Ω^h) × H^1(𝒯_Λ^h), let (L_h u, L̂_h û) ∈ 𝕍_h^Ω × 𝕍_h^Λ be defined by (<ref>). There holds

∑_{K∈𝒯_Ω^h} h_K ‖L_h u‖^2_L^2(K) + ‖L̂_h û‖^2_L^2_P(Λ) ≲ ‖u‖^2_dG,Ω + ‖û‖^2_L^2_P(Λ).

Choosing (w_h,ŵ_h) = (0, L̂_h û) in (<ref>) and using Cauchy-Schwarz's inequality and (<ref>), we have

‖L̂_h û‖^2_L^2_P(Λ) ≤ ξ ‖u̅ - û‖_L^2_P(Λ) ‖L̂_h û‖_L^2_P(Λ) ≲ (‖u‖_dG,Ω + ‖û‖_L^2_P(Λ)) ‖L̂_h û‖_L^2_P(Λ).

This shows the bound on the second term in (<ref>). Next, fix K ∈ 𝒯_B^h and recall that ℐ_K is the set of integers i_0 such that K ∈ ω_i_0, where we assume that the cardinality of ℐ_K is bounded above by a small constant independent of K. In (<ref>), choose ŵ_h = 0 and w_h = (L_h u)χ_K, where χ_K is the characteristic function of K. We obtain

‖L_h u‖_L^2(K)^2 ≲ ∑_{i_0∈ℐ_K} (‖u̅‖_L^2_P((s_i_0-1,s_i_0)) + ‖û‖_L^2_P((s_i_0-1,s_i_0))) ‖w̅_h‖_L^2_P((s_i_0-1,s_i_0)).

We now use Cauchy-Schwarz's inequality, the observation that w_h is locally supported in K, and the trace inequality (<ref>). We estimate

‖w̅_h‖_L^2_P((s_i_0-1,s_i_0)) ≤ ‖w_h‖_L^2(∂B_i_0) = ‖L_h u‖_L^2(∂B_i_0∩K̅) ≲ h_K^-1/2 ‖L_h u‖_L^2(K).

Thus, we conclude that

‖L_h u‖^2_L^2(K) ≲ h_K^-1/2 ∑_{i_0∈ℐ_K} (‖u̅‖_L^2_P((s_i_0-1,s_i_0)) + ‖û‖_L^2_P((s_i_0-1,s_i_0))) ‖L_h u‖_L^2(K).

Summing the above bound over K ∈ 𝒯_B^h and using Cauchy-Schwarz's inequality yield

∑_{K∈𝒯_B^h} h_K ‖L_h u‖_L^2(K)^2 ≲ (‖u̅‖_L^2_P(Λ) + ‖û‖_L^2_P(Λ)) (∑_{K∈𝒯_B^h} h_K ‖L_h u‖_L^2(K)^2)^1/2.

With Lemma <ref> and noting that L_h u|_K = 0 for K ∉ 𝒯_B^h, we conclude the result.

§.§ Main result and proof outline

The main convergence result reads as follows. Let u = (u,û) ∈ H^1_0(Ω) × H^1(Λ) be the weak solution defined by (<ref>), and let u_h = (u_h,û_h) ∈ 𝕍_h^Ω × 𝕍_h^Λ be the discrete solution defined by (<ref>). Recall that h_B = max_{K∈𝒯_B^h} h_K. The following estimate holds:

‖u - u_h‖_DG ≲ inf_{v∈𝕍_h^Ω×𝕍_h^Λ} ‖u - v‖_DG + h ‖f - π_h f‖_L^2(Ω) + h_Λ ‖f̂ - π̂_h f̂‖_L^2_A(Λ) + h_B^1/2 ‖u̅ - û‖_L^2_P(Λ).

Here, we present the main steps of the proof. The details are given in the next section.
We have (see Lemma <ref> for the proof)

‖u - u_h‖_DG ≲ inf_{v∈𝕍_h^Ω×𝕍_h^Λ} (‖u - v‖_DG + sup_{ϕ∈𝕍_h^Ω×𝕍_h^Λ} [(f,ϕ - Eϕ)_Ω + (f̂,ϕ̂ - Êϕ̂)_L^2_A(Λ) - 𝒜_h(v,ϕ - Eϕ)] / ‖ϕ‖_DG).

We now bound the second term above. To this end, fix v, ϕ ∈ 𝕍_h^Ω × 𝕍_h^Λ and let w = ϕ - Eϕ. Define

Z = (f,w)_Ω + (f̂,ŵ)_L^2_A(Λ) - 𝒜_h(v,w).

With the lift operator (<ref>), we write

Z = (f - L_h v, w)_Ω + (A f̂ - P L̂_h v̂, ŵ)_Λ - a_h(v,w) - a_Λ,h(v̂,ŵ).

We integrate by parts the first term in a_h(v,w) and the first term in a_Λ,h(v̂,ŵ). We obtain

Z = ∑_{K∈𝒯_Ω^h} ∫_K (f - L_h v + Δv) w + ∑_{i=1}^N ∫_{s_i-1}^{s_i} (A f̂ - P L̂_h v̂ + d_s(A d_s v̂)) ŵ  (=: Z_1)
  - ∑_{F∈Γ_h} ∫_F [∇v]·n_F {w} - ∑_{i=0}^N [A d_s v̂]_s_i {ŵ}_s_i  (=: Z_2)  + Z_3 + Z_4,

where Z_3, Z_4 are the remaining terms in a_h(v,w) and a_Λ,h(v̂,ŵ) respectively, namely

Z_3 = -ϵ_1 ∑_{F∈Γ_h∪∂Ω} ∫_F {∇w}·n_F [v] - ∑_{F∈Γ_h∪∂Ω} (σ_Ω/|F|^1/2) ∫_F [v][w],
Z_4 = -ϵ_2 ∑_{i=1}^{N-1} {A d_s ŵ}_s_i [v̂]_s_i - ∑_{i=1}^{N-1} (σ_Λ/h_Λ) [v̂]_s_i [ŵ]_s_i.

We start by bounding Z_3 and Z_4. We note that [u] = [Eϕ] = 0 a.e. on F ∈ Γ_h ∪ ∂Ω and that [û]_s_i = [Êϕ̂]_s_i = 0 for i ∈ {1,…,N-1}. We use standard applications of the trace inequality for polynomials and Cauchy-Schwarz's inequality to obtain

|Z_3| + |Z_4| ≲ ‖w‖_dG,Ω ‖v - u‖_dG,Ω + |ŵ|_dG,Λ |v̂ - û|_dG,Λ ≲ (‖ϕ‖_dG,Ω + |Eϕ|_H^1(Ω) + |ϕ̂|_dG,Λ + |Êϕ̂|_H^1(Λ)) ‖u - v‖_DG ≲ ‖ϕ‖_DG ‖u - v‖_DG.

In the last inequality above, we used the stability of E given in (<ref>) of Lemma <ref>, and the definition of ‖·‖_DG.

For the term Z_1, we use Cauchy-Schwarz's inequality and the approximation properties (<ref>) of E. With the notation Λ_i = (s_i-1,s_i), we estimate

(Z_1)^2 ≤ (∑_{K∈𝒯_Ω^h} h_K^2 ‖f + Δv - L_h v‖^2_L^2(K) + ∑_{i=1}^N h_Λ^2 ‖A f̂ + d_s(A d_s v̂) - P L̂_h v̂‖^2_L^2(Λ_i)) × (∑_{K∈𝒯_Ω^h} h_K^-2 ‖w‖^2_L^2(K) + ∑_{i=1}^N h_Λ^-2 ‖ŵ‖^2_L^2(Λ_i))
≲ (∑_{K∈𝒯_Ω^h} h_K^2 ‖f + Δv - L_h v‖^2_L^2(K) + ∑_{i=1}^N h_Λ^2 ‖A f̂ + d_s(A d_s v̂) - P L̂_h v̂‖^2_L^2(Λ_i)) ‖ϕ‖^2_DG =: (R^1_Ω + R^1_Λ) ‖ϕ‖^2_DG.

For the term Z_2, we use standard applications of trace inequalities and (<ref>) to estimate

(Z_2)^2 ≲ (∑_{F∈Γ_h} |F|^1/2 ‖[∇v]·n_F‖_L^2(F)^2 + ∑_{i=0}^N h_Λ [A d_s v̂]_s_i^2) × (∑_{K∈𝒯_Ω^h} h_K^-2 ‖w‖^2_L^2(K) + ∑_{i=1}^N h_Λ^-2 ‖ŵ‖^2_L^2(Λ_i))
≲ (∑_{F∈Γ_h} |F|^1/2 ‖[∇v]·n_F‖_L^2(F)^2 + ∑_{i=0}^N h_Λ [A d_s v̂]_s_i^2) ‖ϕ‖_DG^2 =: (R^2_Ω + R^2_Λ) ‖ϕ‖_DG^2.

Combining the bounds above, we have

‖u - u_h‖_DG ≲ inf_{v∈𝕍_h^Ω×𝕍_h^Λ} (‖u - v‖_DG + (R^1_Ω + R^1_Λ)^1/2 + (R^2_Ω + R^2_Λ)^1/2).

The proof is finished by obtaining the required bounds on the residual (R^1_Ω + R^1_Λ), see Lemma <ref> and Corollary <ref>, and on the residual (R^2_Ω + R^2_Λ), see Lemma <ref> and Corollary <ref>.

Under the assumptions of Theorem <ref>, if u ∈ H^3/2-η(Ω) for any η > 0 and û ∈ H^2_A(Λ), then the following bound holds:

‖u - u_h‖_DG ≲ h^1/2-η (‖u‖_H^3/2-η(Ω) + ‖u̅ - û‖_L^2_P(Λ) + ‖f‖_L^2(Ω)) + h_Λ (‖û‖_H^2_A(Λ) + ‖f̂‖_L^2_A(Λ)).

Let S_h u = (S_h u, Ŝ_h û) ∈ 𝕍_h,c^Ω × 𝕍_h,c^Λ, where S_h and Ŝ_h are Scott-Zhang interpolants of u and û respectively <cit.>. With the triangle inequality, (<ref>), and approximation properties, we bound

‖u - S_h u‖_DG ≲ ‖u - S_h u‖_H^1(Ω) + |û - Ŝ_h û|_H^1_A(Λ) + ‖û - Ŝ_h û‖_L^2_P(Λ) ≲ h^1/2-η ‖u‖_H^3/2-η(Ω) + h_Λ ‖û‖_H^2_A(Λ).

Using the above bound in (<ref>) yields the desired estimate.

We now show that if the mesh is refined near the boundary of B_Λ (namely, if the mesh size is of the order h^2k_1, where we recall that k_1 is the polynomial degree for the space 𝕍_h^Ω), then almost optimal error estimates can be recovered. To this end, we use the definitions of graded meshes <cit.> in order to obtain the required estimates. Let r_K = dist(K, ∂B_Λ) and recall that h_K = diam(K). Suppose that the mesh satisfies the following grading property:

h_K ≈ h r_K^(1-1/(2k_1)) if r_K > h_K/2, and h_K ≈ h^2k_1 otherwise.

Let h_Λ ≈ h_K for K ∈ 𝒯_B^h. Assume that the assumptions of Theorem <ref> hold. Further, assume that u ∈ H^k_1+1(Ω∖B_Λ) ∩ H^k_1+1(B_Λ), û ∈ H^2(Λ), f ∈ H^k_1-1(Ω), and f̂ ∈ L^2(Λ). Then,

‖u - u_h‖_DG ≲ h^k_1-2η (‖u‖_H^k_1+1(B_Λ) + ‖u‖_H^k_1+1(Ω∖B_Λ) + ‖u‖_H^3/2-η(Ω) + ‖f‖_H^k_1-1(Ω)) + h^2k_1 (‖û‖_H^2(Λ) + ‖f̂‖_L^2(Λ)).
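Before giving the proof, we illustrate the grading (<ref>) with a minimal sketch (names and parameter values are ours) computing the target element size as a function of the distance r to ∂B_Λ:

```python
def graded_mesh_size(r, h, k1=1):
    """Target size h_K for an element at distance r from the inclusion
    boundary: h * r^(1 - 1/(2 k1)) away from it, h^(2 k1) close to it."""
    candidate = h * r ** (1.0 - 1.0 / (2 * k1))
    return candidate if r > candidate / 2 else h ** (2 * k1)

for r in (0.0, 1e-3, 1e-1, 1.0):
    print(f"r = {r:6.3f} -> h_K ~ {graded_mesh_size(r, h=0.1, k1=2):.2e}")
```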
Define an interpolant I_h u ∈ 𝕍_h^Ω such that I_h u|_K = S_h u|_K, the Scott-Zhang interpolant restricted to K, if r_K ≤ h_K/2, and I_h u|_K = π_h u, the local L^2 projection, otherwise. We use the local approximation properties of the Scott-Zhang interpolant. Namely, we have that <cit.>

|u - S_h u|_H^1(K) + h_K^-1 ‖u - S_h u‖_L^2(K) ≲ h_K^(min(k_1+1,s)-1) ‖u‖_H^s(Δ_K),  1 ≤ s ≤ k_1+1.

In the above, Δ_K is the union of elements sharing a face with K. Hence, we obtain that

∑_{K∈𝒯_Ω^h, r_K≤h_K/2} (|u - I_h u|^2_H^1(K) + h_K^-2 ‖u - I_h u‖^2_L^2(K)) ≲ ∑_{K∈𝒯_Ω^h, r_K≤h_K/2} h_K^1-2η ‖u‖^2_H^3/2-η(Δ_K) ≲ h^2k_1(1-2η) ‖u‖^2_H^3/2-η(Ω).

Further, using the approximation properties of the L^2 projection, we obtain

∑_{K∈𝒯_Ω^h, r_K>h_K/2} (|u - I_h u|^2_H^1(K) + h_K^-2 ‖u - I_h u‖^2_L^2(K)) ≲ ∑_{K∈𝒯_Ω^h, r_K>h_K/2} h^2k_1 ‖u‖^2_H^k_1+1(K) ≲ h^2k_1 (‖u‖^2_H^k_1+1(B_Λ) + ‖u‖^2_H^k_1+1(Ω∖B_Λ)).

In the above, we also used that r_K ≲ diam(Ω). Now, note that

‖u - I_h u‖^2_dG,Ω ≲ ∑_{K∈𝒯_Ω^h} (|u - I_h u|^2_H^1(K) + h_K^-2 ‖u - I_h u‖^2_L^2(K)).

Define I_h u = (I_h u, Ŝ_h û). We use the above bounds, the triangle inequality, (<ref>), and the approximation properties of Ŝ_h to obtain that

‖u - I_h u‖_DG ≲ ‖u - I_h u‖_dG,Ω + |û - Ŝ_h û|_dG,Λ + ‖û - Ŝ_h û‖_L^2_P(Λ) ≲ h^k_1 (‖u‖_H^k_1+1(B_Λ) + ‖u‖_H^k_1+1(Ω∖B_Λ) + h^-2η ‖u‖_H^3/2-η(Ω)) + h^2k_1 ‖û‖_H^2(Λ).

In the above, we used the assumption that h_Λ ≈ h_K for K ∈ 𝒯_B^h and thus h_Λ ≈ h^2k_1. The above bound estimates the first term of (<ref>). The second and third terms in (<ref>) are bounded by the approximation properties of the L^2 projections, see (<ref>). Finally, the last term in (<ref>) is controlled by observing that r_K ≲ h_K/2 for all K ∈ 𝒯_B^h. Thus, using (<ref>), h_B ≲ h^2k_1. Along with (<ref>), this concludes the proof.

§.§ Proof details

We now provide details for the steps given in the proof of Theorem <ref>. Let u and u_h be the solutions of (<ref>) and (<ref>) respectively. Then, (<ref>) holds. The proof follows from the abstract framework given in <cit.>. In the notation of this lemma, we set V = H^1_0(Ω) × H^1_A(Λ), ‖v‖_V^2 = ‖v‖^2_H^1(Ω) + ‖v̂‖^2_H^1_A(Λ), and ‖·‖_h = ‖·‖_DG. We verify assumptions (N1)-(N3) of <cit.>. Observe that assumption (N1) is the coercivity estimate of Lemma <ref>. We now verify assumption (N3), which states that

‖Ev‖_V ≲ ‖v‖_DG,  ∀ v ∈ 𝕍_h^Ω × 𝕍_h^Λ.

Let v = (v,v̂) ∈ 𝕍_h^Ω × 𝕍_h^Λ. From Lemma <ref>, we have

‖Ev‖_H^1(Ω) + ‖Êv̂‖_H^1_A(Λ) ≲ ‖v‖_L^2(Ω) + ‖v̂‖_L^2_A(Λ) + ‖v‖_dG,Ω + |v̂|_dG,Λ.

For the first term above, we use Poincaré's inequality (<ref>). For the second term, we use the fact that A, P > 0, the triangle inequality, and the trace inequality (Lemma <ref>):

‖v̂‖_L^2_A(Λ) ≲ ‖v̂ - v̅‖_L^2_P(Λ) + ‖v̅‖_L^2_P(Λ) ≲ ‖v̂ - v̅‖_L^2_P(Λ) + ‖v‖_dG,Ω.

Therefore, we obtain that

‖Ev‖_H^1(Ω) + ‖Êv̂‖_H^1_A(Λ) ≲ ‖v‖_dG,Ω + |v̂|_dG,Λ + ‖v̅ - v̂‖_L^2_P(Λ) ≲ ‖v‖_DG.

Hence, (<ref>) is verified. It remains to verify (N2). We show that for v ∈ H^1_0(Ω) × H^1(Λ), v_h ∈ 𝕍_h^Ω × 𝕍_h^Λ, and w ∈ 𝕍_h,c^Ω × 𝕍_h,c^Λ, there holds

𝒜(v,w) - 𝒜_h(v_h,w) ≲ ‖v - v_h‖_DG (‖w‖_H^1(Ω)^2 + ‖ŵ‖_H^1_A(Λ)^2)^1/2.

For this, observe that [v] = [w] = 0 a.e. on F ∈ Γ_h ∪ ∂Ω. Thus, we have that

a(v,w) - a_h(v_h,w) = ∑_{K∈𝒯_Ω^h} ∫_K ∇(v - v_h)·∇w - ϵ_1 ∑_{F∈Γ_h∪∂Ω} ∫_F {∇w}·n_F [v_h - v].

With the trace estimate for polynomials,

|F|^1/4 ‖{∇w}·n_F‖_L^2(F) ≲ ‖∇w‖_L^2(K_F^1∪K_F^2),  F = ∂K_F^1 ∩ ∂K_F^2,

and Cauchy-Schwarz's inequality, we obtain that

a(v,w) - a_h(v_h,w) ≲ ‖v_h - v‖_dG,Ω |w|_H^1(Ω).
A similar argument shows that a_Λ(v̂ , ŵ) - a_Λ, h(v̂_h, ŵ) ≲ |v̂_h - v̂|_| ŵ|_H^1_A(Λ).For the remainder terms, we simply use Cauchy-Schwarz's inequality and the trace estimate (<ref>). Indeed, we have that b_Λ(v- v̂ , w - ŵ ) - b_Λ (v_h - v̂_h, w -ŵ) ≤ξv - v_h - ( v̂ - v̂_h) _L^2_P(Λ)w - ŵ_L^2_P(Λ)≲v - v_h_ (w_H^1(Ω) + ŵ_L^2_P(Λ)) ≲v - v_h_ (w_H^1(Ω) + ŵ_H^1_A(Λ)).Estimate (<ref>) followsby combining the above bounds. The proof is finished by applying <cit.>. We now show the first residual bound. We recall that for any K∈𝒯_B^h, the set ℐ_K denotes the set of integers i_0such that K ∈ω_i_0.Fix1 ≤ i ≤ N and recall that Λ_i = (s_i-1,s_i).For all v_h ∈ and any K ∈ω_i, there holds f + Δ v -L_h v^2_L^2(K)≲h_K^-2∇ (u-v)_L^2(K)+ h_K^-1∑_j ∈ℐ_Ku - û^2_L^2_P(Λ_j)+L_h (u -v)^2_L^2(K)+ π_h f - f^2_L^2(K).For any v̂_h ∈, there holds Af̂ + d_s( Ad_s v̂) - P L̂_h v̂^2_L^2(Λ_i)≲h_Λ^-2d_s (û - v̂)^2_L_A^2(Λ_i)+ u - û^2_L^2_P(Λ_i) + L̂_h (û - v̂)^2_L^2_P(Λ_i) + π̂_h f̂ - f̂^2_L_A^2(Λ_i).Let b_K be the bubble function associated to K <cit.>.Define the residuals R =(π_h f + Δ v- L_h v) |_K and ψ = R b_K.Owing to the properties of the bubble functions, we estimate R_L^2(K)^2 ≲∫_K R ψ = ∫_K (f + Δ v - L_h v) ψ + ∫_K (π_h f - f) ψ= T_1+ T_2. Since ψ vanishes on the boundary of K, we integrate by parts and obtain T_1= ∫_K(fψ - ∇ v·∇ψ - L_h vψ) .Testing(<ref>) with (ψ, 0) and substituting in the above gives T_1= ∫_K ∇ (u - v) ·∇ψ +b_Λ(u - û, ψ) - ∫_K L_h vψ .The first term is bounded by Cauchy-Schwarz's inequality and inverse estimates since ψ belongs to a finite dimensional space.T_1 ≲ h_K^-1∇ (u-v)_L^2(K)ψ_L^2(K)+b_Λ(u - û, ψ) - ∫_K L_h vψ. For the second term above, we use the definition of the lift operator (<ref>) and write b_Λ(u - û, ψ) = b_Λ(u - û, ψ - π_h ψ) +(L_h u,π_hψ)_K = b_Λ(u - û, ψ - π_h ψ) +(L_h u,ψ)_K.Here we used the definition of the L^2 projections in (<ref>) and the fact that L_h u ∈.Since ψ is locally supported on one element K, with Cauchy–Schwarz's inequality,trace estimate (<ref>), and stability of the L^2 projection,we obtain the bound∑_j∈ℐ_Kψ- π_h ψ^2_L^2_P(Λ_j)≤ψ - π_h ψ^2_L^2(∂ B_Λ∩ K) ≲ h_K^-1ψ - π_h ψ^2_L^2(K)≲ h_K^-1ψ^2_L^2(K) .Thus, with Cauchy-Schwarz's and triangle inequalities, we obtain that b_Λ(u - û, ψ - π_h ψ) ≲ h_K^-1/2(∑_j∈ℐ_Ku - û^2_L^2_P(Λ_j) )^1/2ψ_L^2(K).Thus, we obtain T_1 ≲ ‖ψ‖_L^2(K)( h_K^-1‖∇ (u-v)‖_L^2(K) +‖ L_h(u-v)‖_L^2(K) + h_K^-1/2 (∑_j∈ℐ_K‖u̅-û‖^2_L_P^2(Λ_j))^1/2). The term T_2 is simply handledby Cauchy-Schwarz's inequality.Collecting the resulting boundsin (<ref>),noting that ψ_L^2(K)≲R_L^2(K), and using the triangle inequalityyields estimate (<ref>). To show (<ref>), let b̂_i denote the bubble functions associated to Λ_i,R̂ = (Aπ̂_h f̂+ d_s(Ad_sv̂) - P L̂_h v̂)|_Λ_i, and ψ̂= R̂b̂_i. We have R̂_L^2(Λ_i)^2 ≲∫_Λ_iR̂ψ̂ =∫_Λ_i (A f̂ +d_s(Ad_s v̂) - P L̂_h v̂) ψ̂ +∫_Λ_iA(π̂_h f̂ - f̂) ψ̂ = T_3 +T_4.Testing (<ref>) with (0,ψ̂) and performing the same computation as before, we obtainT_3 =∫_Λ_i Ad_s (û - v̂ )d_s ψ̂- b_Λ(u - û,ψ̂- π̂_h ψ̂) + ∫_Λ_i P L̂_h (û - v̂) ψ̂.With Cauchy–Schwarz's and inverse inequalities, the stability of the L^2 projection,andthe fact that ψ̂ is locally supported in Λ_i, we obtain T_3 ≲ψ̂_L^2_P(Λ_i)(h_Λ^-1d_s (û - v̂)_L_A^2(Λ_i) + L̂_h (û - v̂)_L^2_P(Λ_i)+ u - û_L^2_P(Λ_i)).Bounding T_4 with Cauchy–Schwarz's inequality and using that ψ̂_L^2_P(Λ_i)≲R̂_L^2(Λ_i),estimate (<ref>) is obtained.An immediate corollary to the above Lemmas is the following global bound.Recall that h_B = max_K ∈ h_K. 
The following bound on R^1_Ω + R^1_Λ (as defined in (<ref>)) holds.(R^1_Ω + R^1_Λ)≲u - v ^2_+ (h_B + h_Λ^2)u - û_L^2_P(Λ)^2 +h^2 f - π_h f ^2_L^2(Ω) + h_Λ^2 f̂ -π̂_h f̂_L^2_A(Λ).First note thatif K ∉, then L_h v = 0 on K, and ℐ_K = ∅.We can writeR_Ω^1 =∑_K∈𝒯_B^hh_K^2 ‖ f+Δ v-L_h v‖_L^2(K)^2 +∑_K∈∖𝒯_B^hh_K^2 ‖ f+Δ v‖_L^2(K)^2.For the first term in the right-hand side, we use Lemma <ref> and the assumption that |ℐ_K| ≲ C for all K ∈to obtain the bound:R_Ω^1≲ ‖ u-v‖_^2+ h_B ‖u̅ - û‖^2_L_P^2(Λ) + h^2 ‖ f - π_h f ‖_L^2(Ω)^2 +∑_K∈𝒯_B^h h_K^2 ‖ L_h (u-v)‖^2_L^2(K)+∑_K∈∖𝒯_B^hh_K^2 ‖ f+Δ v‖_L^2(K)^2.If K ∉, thenstandard a posteriori estimates <cit.> yieldh_K^2 f+Δ v^2_L^2(K)≲∇ (u - v)_L^2(K)^2 + h_K^2f - π_h f_L^2(K)^2, ∀ K ∈ \ .With the bound above and Lemma <ref>, we can conclude that bound (<ref>) holds on R_Ω^1. The same bound holds on R_Λ^1 which follows immediately from Lemma <ref> and Lemma <ref>.We proceed to bound on R_Ω^2 + R_Λ^2. For any face F, let S_F = K_F^1∪ K_F^2 where K_F^1 and K_F^2 are the elements sharing the face F.We also define∀ 1≤ i≤ N-1, Ŝ_i = Λ_i-1∪Λ_i, Ŝ_0 = Λ_0, Ŝ_N = Λ_N-1, Ŝ_N+1 = ∅. Fix 1≤ i ≤ N. Then, for any F ∈Γ_i, and any v ∈, there holds[∇ v] ·n_F^2_L^2(F)≲ |F|^-1/2 ∑_K⊂ S_F∇ (u-v) ^2_L^2(K) + |F|^1/2∑_K⊂ S_Ff+ Δ v -L_h v^2_L^2(K) + |F|^1/2L_h(u-v)_L^2(S_F)^2+u - û^2_L_P^2(Ŝ_i ∪Ŝ_i+1).For any v̂∈, there holds[A d_s v̂]_s_i^2≲ h_Λ^-1d_s (û- v̂) ^2_L_A^2(Ŝ_i) +h_Λ∑_Λ_ℓ⊂Ŝ_iA f̂ + d_s(Ad_s v̂) - P L_h v̂^2_L^2(Λ_ℓ) +h_ΛL̂_h (û - v̂) ^2_L^2(Ŝ_i) +h_Λu - û^2_L_P^2(Ŝ_i).Fix 1≤ i≤ N and fix F in Γ_i. Denote by b_F the face bubble associated to F; this means that b_F vanishes on the boundary of S_F and b_F takes the value one at the barycenter of F. Fix v in .We set r = [∇ v] ·n_F, extend r by constant values along n_F,and set ψ = r b_F.From<cit.>, we have ψ_L^2(S_F)≲ |F|^1/4r_L^2(F).With the properties of the bubble function and integration by parts, we have r^2_L^2(F)≲∫_F r ψ = ∫_F [∇ v] ·n_F ψ = ∑_K⊂ S_F∫_KΔ vψ + ∑_K⊂ S_F∫_K∇ v ·∇ψ.Choose the test function v = (ψ, 0) in (<ref>) ∑_K⊂ S_F∫_K∇ u ·∇ψ+b_Λ(u -û, ψ)= ∫_S_Ff ψ.We introduce the L^2 projection and rewritethe second term above asb_Λ(u -û, ψ)= b_Λ(u -û, ψ - π_h ψ ) + (L_h u , π_h ψ)_Ω = b_Λ(u -û, ψ - π_h ψ) + ∫_S_F L_h u ψ.After some manipulation, we obtain r^2_L^2(F)≲ ∑_K⊂ S_F∫_K (f + Δ v -L_h v) ψ+ ∫_S_F L_h (v-u)ψ + ∑_K⊂ S_F∫_K∇ (v-u) ·∇ψ - b_Λ(u -û,ψ - π_h ψ)= W_1 +… + W_4.With (<ref>), the terms W_1 and W_2 are bounded as:W_1 +W_2 ≲|F|^1/4‖ r ‖_L^2(F)( ( ∑_K⊂ S_Ff+ Δ v -L_h v_L^2(K)^2)^1/2 + L_h(u-v)_L^2(S_F)).With inverse estimates and (<ref>) and the observationthat h_K_F^ℓ^-1 |F|^1/4≲ |F|^-1/4 for ℓ = 1,2, we bound W_3 ≲|F|^-1/4r_L^2(F) (∑_K⊂ S_F∇ (u-v) _L^2(K)^2)^1/2. Let K_F^1 and K_F^2 denote the elements that share the face F and let𝒥_F denote the set of indices i_0 such that K_F^1 belongs to ω_i_0 or such that K_F^2 belongs to ω_i_0. In reality, the set 𝒥_F is either the singleton {i}(recall that F belongs to Γ_i) or the pair {i, i+1} or the pair {i-1, i} or the triplet{i-1,i,i+1}.W_4 = (ξ P (u-û),ψ-π_hψ)_Λ =∑_ℓ∈𝒥_F (ξP(u-û),ψ-π_hψ)_Λ_ℓ≤ξ (∑_ℓ∈𝒥_F‖u-û‖_L^2_P(Λ_ℓ)^2)^1/2(∑_ℓ∈𝒥_F‖ψ-π_hψ‖_L^2_P(Λ_ℓ)^2)^1/2≤ξ (∑_ℓ∈𝒥_F‖u-û‖_L^2_P(Λ_ℓ)^2)^1/2‖ψ - π_hψ‖_L^2(S_F∩∂ B_Λ) . With Cauchy-Schwarz inequality and trace estimate (<ref>), we obtain W_4 ≲ξ (∑_ℓ∈𝒥_F‖u-û‖_L^2(Λ_ℓ)^2)^1/2 (∑_K⊂ S_F h_K^-1‖ψ - π_hψ‖_L^2(K)^2)^1/2≲ h_F^-1/2‖u-û‖_L^2(Ŝ_i∪Ŝ_i+1)‖ψ‖_L^2(S_F),where h_F = min(h_K_F^1, h_K_F^2).Therefore, with (<ref>), we haveW_4≲ h_F^-1/2| F |^1/4‖ r ‖_L^2(F)‖u-û‖_L^2(Ŝ_i∪Ŝ_i+1)≲‖ r ‖_L^2(F)‖u-û‖_L^2(Ŝ_i∪Ŝ_i+1). 
Collecting the above bounds and using appropriate Young's inequalities yield (<ref>).To prove the bound (<ref>), wedenote by b̂_i the typical hat function associated to the node s_i; this means that b̂_i is piecewise linear, takes the value 1 at s_i and the value 0 at all the other nodes s_ℓ for ℓ≠ i. Denote by r̂_i = [Ad_s v̂]_s_i and let ψ̂_i = r̂_i b̂_i. It easily follows thatψ̂_i_L^2(Ŝ_i)≲ h_Λ^1/2 |r̂_i|. Using integration by parts, it is easy to check thatr̂_i^2 =[Ad_s v̂]_s_iψ̂_i(s_i) = ∑_Λ_ℓ⊂Ŝ_i(∫_Λ_ℓd_s (Ad_s v̂)ψ̂_i+ ∫_Λ_ℓ Ad_s v̂ d_s ψ̂_i). This time, we choose for test function v = (0, ψ̂_i) in (<ref>) to obtain∑_Λ_ℓ⊂Ŝ_i∫_Λ_ℓ Ad_s ûd_s ψ̂_i -b_Λ(u -û,ψ̂_i )=∫_Ŝ_i A f̂ψ̂_i.We rewrite it as∑_Λ_ℓ⊂Ŝ_i∫_Λ_ℓ Ad_s ûd_s ψ̂_i -b_Λ(u -û,ψ̂_i -π̂_h ψ̂_i) + ∫_Ŝ_i P L̂_h û ψ̂_i =∫_Ŝ_i A f̂ψ̂_i.After some manipulation, we obtain r̂_i^2 ≲ ∑_Λ_ℓ⊂Ŝ_i∫_Λ_ℓ ( A f̂ + d_s(Ad_sv̂) - P L̂_h v̂) ψ̂_i +∫_Ŝ_iPL̂_h (v̂- û) ψ̂_i+∑_Λ_ℓ⊂Ŝ_i∫_Λ_ℓAd_s (v̂ - û) d_s ψ̂_i + b_Λ(u -û, ψ̂_i - π̂_h ψ̂_i )= W_5 +… + W_8.We easily bound the terms W_5, W_6 and W_7 by (<ref>)W_5 +W_6≲ h_Λ^1/2|r̂_i| ((∑_Λ_ℓ⊂Ŝ_i A f̂ + d_s(Ad_sv̂) - P L̂_h v̂_L^2(Λ_ℓ)^2)^1/2 + L̂_h (û - v̂) _L_P^2(Ŝ_i)), W_7≲ h_Λ^-1/2 |r̂_i| (∑_Λ_ℓ⊂Ŝ_id_s (û-v̂) _L_A^2(Λ_ℓ)^2)^1/2.For the term W_8, we have by Cauchy-Schwarz and stability of the L^2–projection that W_8≲‖u-û‖_L^2_P(Ŝ_i)‖ψ̂_i - π̂_h ψ̂_i‖_L_P^2(Ŝ_i) ≲‖u-û‖_L_P^2(Ŝ_i)‖ψ̂_i‖_L^2_P(Ŝ_i)≲ h_Λ^1/2‖u-û‖_L^2_P(Ŝ_i)|r̂_i|.Collecting the above bounds yield the desired result.The bound on (R^2_Ω + R^2_Λ) easily follows. The following bound on R^2_Ω + R^2_Λ as defined in (<ref>) holds.(R^2_Ω + R^2_Λ)≲u - v ^2_+ (h_B + h_Λ^2)u - û_L^2_P(Λ)^2 +h^2 f - π_h f ^2_L^2(Ω) + h_Λ^2 f̂ -π̂_h f̂_L^2_A(Λ). Recalling the definition of (<ref>), we have R_Ω^2= ∑_i=1^N ∑_F∈Γ_i| F|^1/2‖ [∇ v]·n_F‖_L^2(F)^2 + ∑_F∈Γ_h∖⋃_i=1^N Γ_i| F|^1/2‖ [∇ v]·n_F‖_L^2(F)^2.The first part is bounded using Lemma <ref>. ∑_i=1^N ∑_F∈Γ_i | F|^1/2‖ [∇ v]·n_F‖_L^2(F)^2 ≲u-v_^2+ ∑_K ∈ h_K^2 (f + Δ v - L_h v^2_L^2(K) + L_h(u-v)^2_L^2(K)) + h_B^2 u - û^2_L^2_P(Λ) .If F does not belong to ⋃_i=1^N Γ_i, thenL_h v = 0 on S_F andstandard a posteriori estimates are used. We omit the details for brevity. Following <cit.>, we have |F|^1/2[∇ v]·n_F^2_L^2(F)≲∑_K⊂ S_F∇ (u - v)_L^2(K)^2 + h_S_F^2 f - π_h f _L^2(S_F)^2 ,∀ F ∈Γ_h \⋃_i=1^N Γ_iCombining the above estimates with Lemma <ref> and Corollary <ref> yields the bound (<ref>) on R_Ω^2. ForR_Λ^2, wehave from Lemma <ref> that R_Λ^2 = ∑_i=0^N h_Λ [ Ad_s v̂]_s_i^2 ≲∑_i=0^Nd_s (û- v̂) ^2_L_A^2(Ŝ_i) +∑_i=0^N h_Λ^2∑_Λ_ℓ⊂Ŝ_iA f̂ + d_s(Ad_s v̂) - P L_h v̂^2_L^2(Λ_ℓ)+h_Λ^2L̂_h (û - v̂) ^2_L_P^2(Λ) + h_Λ^2 u - û^2_L_P^2( Λ).Applying Corollary <ref> and Lemma <ref> yields the bound on R_Λ^2.§ TIME DEPENDENT 3D-1D MODEL We now consider the following time dependent model. For further details on the derivation, well–posedness, and regularity properties of the system, we refer to <cit.>. The weak formulation of the time–dependent problem reads as follows.Find u = (u,û)∈ V = L^2(0,T;H^1_0(Ω)) × L^2(0,T; H^1_A(Λ)) with (∂_t u, ∂_t û)∈ L^2(0,T;L^2(Ω)) × L^2(0,T; L^2_A(Λ)) such that (∂_t u , v) +(∂_t (Aû), v̂)_Λ + 𝒜( u , v) = (f,v) + (Af̂, v̂)_Λ , ∀ v ∈ V.u(0)= (u^0, û^0) ∈ L^2(Ω) × L^2_A(Λ).Here, we recall that 𝒜 is given in (<ref>) and assume that f ∈ L^2(0,T;L^2(Ω)) and f̂∈ L^2(0,T; L^2_A(Λ)) are given. We retain the assumptions on A and P from the previous sections, and we assume that they are independent of time.Consider a uniform partition of the time interval [0,T] into N_T sub-intervals with time step size τ. 
We use the notation g^n(·) = g(t^n, ·) = g(n τ, ·) for any function g. Let (u_h^0, û_h^0) ∈× be the L^2 projection of (u^0, û^0). u_h^0 = π_h u^0, û_h^0 = π̂_h û^0.A backward Euler dG approximation then reads as follows. Find u_h = (u_h^n ,û_h^n)_1≤ n ≤ N_T∈× such that 1/τ (u_h^n - u_h^n-1, v_h) + 1/τ ( A (û_h^n - û_h^n-1), v̂_h)_Λ + 𝒜_h( u_h^n,v_h)= (f^n,v_h) + (A f̂^n, v̂_h)_Λ , ∀ v_h ∈×.The form 𝒜_h is given in (<ref>). To analyse the above scheme, we define the following elliptic projection: Π_h(t): H^1 (0, T; H^1_0(Ω)) ×H^1(0,T;H^1_A(Λ)) → H^1(0,T;) × H^1(0,T; ) such that for a given g(t) = (g(t), ĝ(t))𝒜_h ( Π_hg (t),v_h) = (g(t), v_h)_Ω + (Aĝ(t), v̂_h)_Λ. ∀v_h ∈×. From the analysis of the previous section,for any t>0, Π_hg(t) is well defined. Since 𝒜_h is linear and coercive and Π_h is continuous, ∂_t (Π_hg(t)) = Π_h ∂_tg(t).Now, for u(t) = (u(t), û(t) ) and f(t) = (f(t) , f̂(t)),we define the interpolant η_h(t) = (η_h (t), η̂_h(t)) ∈× such that η_h(t)= Π_h (( f(t) - ∂_t u(t) ,A f̂(t) - A∂_t û(t))).Therefore, we have that 𝒜_h (η_h(t),v_h)=(f(t)- ∂_t u (t) , v_h)+ (Af̂(t) - A∂_t û(t), v̂_h)_Λ, ∀ v_h ∈×.Since𝒜( u(t),v) = (f(t)- ∂_t u (t) , v) + (A f̂(t) - A ∂_t û(t), v̂)_Λ, ∀ v ∈ H^1_0(Ω) × H^1_A(Λ),we apply the error analysis of the previous section to obtain that for any η>0η_h(t) -u(t)_≲ h^1/2 -η (u(t)_H^3/2-η(Ω) + û(t) _H^2_A(Λ)) +h( f(t) - ∂_t u(t) _L^2(Ω) + f̂ (t) - ∂_t u (t) _L^2_A(Λ)) .Here, for simplicity, we let h_Λ≈ h. It is also easy to see that∂_t η_h(t)= ∂_tΠ_h (( f(t)-∂_t u(t),Af̂(t) - A ∂_t û(t)))= Π_h (( ∂_t f(t)- ∂_tt u(t),A∂_t f̂(t) - A ∂_ttû(t)).Therefore, 𝒜_h(∂_t η_h(t) ,v_h) = (∂_t f(t)- ∂_tt u (t) , v_h) + (A ∂_t f̂(t) -A∂_ttû(t), v̂_h)_Λ, ∀ v_h ∈×.Observing that𝒜(∂_tu(t),v) = (∂_t f(t)- ∂_tt u (t) , v) + (A ∂_t f̂(t) - A∂_ttû(t) , v̂)_Λ, ∀ v ∈ H^1_0(Ω) × H^1_A(Λ),we apply the previous analysis to obtain a bound on ∂_t η_h (t) - ∂_t u(t) _ that is a similar to (<ref>): ∂_tη_h(t) - ∂_tu(t)_≲ h^1/2 -η (∂_t u(t)_H^3/2-η(Ω) + ∂_t û(t) _H_A^2(Λ)) +h( ∂_t f(t) - ∂_tt u(t) _L^2(Ω) + ∂_t f̂ (t) - ∂_ttu (t) _L^2_A(Λ)) .This interpolant allows us to prove the following result. For any 1 ≤ m ≤ N_T, there holds u_h^m- u^m^2 + û_h^m - û^m _L^2_A (Λ)^2 + C_coerc/4τ∑_n=1^m u_h^n -u^n ^2_≲τ^2 + h^1-2η . The above estimate holds under the assumption that (u,û) ∈ H^1(0,T; H^3/2-η(Ω)) × H^1(0,T;H^2(Λ)), (∂_ttu ,∂_tt (Aû)) ∈ L^2(0,T;L^2(Ω)) × L^2(0,T;L^2(Λ)), and (f,f̂) ∈H^1(0,T;L^2(Ω)) × H^1(0,T;L^2(Λ)). Wederive the error equation for e_h^n = (e^n_h, ê^n_h) =u_h^n - η_h^n. For all v_h ∈×, 1/τ(e_h^n -e_h^n-1, v_h) + 1/τ (A( ê_h^n - ê_h^n-1), v̂_h)_Λ +𝒜_h( e^n_h ,v_h)= 1/τ (τ (∂_t u)^n -(η_h^n - η_h^n-1), v_h ) + 1/τ (τ A( ∂_tû)^n - A ( η̂_h^n - η̂_h^n-1 ), v̂_h)_Λ.The proof is based on energy arguments. 
Wetest (<ref>)with v_h =e_h^n and multiply by τ.With the coercivity property (<ref>), we obtain 1/2 (e_h^n^2 -e_h^n-1^2) + 1/2 (ê_h^n_L^2_A(Λ)^2 -ê_h^n-1_L^2_A(Λ)^2)+ C_coerc/2τ e_h ^n_^2 ≲ (τ (∂_t u)^n -(η_h^n - η_h^n-1), e_h^n )+ (A( τ(∂_t û)^n - (η̂_h^n - η̂_h^n-1)) , ê_h^n)_Λ =T_1 +T_2.It is standard to show (with Cauchy-Schwarz's inequality, Taylor's theorem, and Poincaré's inequality (<ref>)) thatT_1≲ (τ^3/2∂_tt u_L^2(t^n-1, t^n; L^2(Ω)) + τ^1/2∂_t (u -η_h)_L^2(t^n-1, t^n; L^2(Ω)))e_h^n_ .With Young's inequality, we then obtain T_1≤ Cτ^2∂_tt u_L^2(t^n-1, t^n; L^2(Ω))^2 + C ∂_t (u -η_h )_L^2(t^n-1, t^n; L^2(Ω))^2 +τC_coerc/8 e_h^n^2_.Similarly, we bound T_2 with T_2 ≤ Cτ^2A ∂_ttû_L^2(t^n-1, t^n; L^2(Λ))^2 + C A ∂_t(û -η̂_h)_L^2(t^n-1, t^n; L^2(Ω))^2 +τC_coerc/8 e_h^n^2_.We use the above bounds in (<ref>), and we sum the resulting bound over n. We obtain that e_h^m^2 + ê_h^m _L^2_A(Λ)^2 + C_coerc/4τ∑_n=1^m e_h^n^2_≲τ^2 ( ∂_tt u_L^2(0, T;L^2(Ω))^2+ A ∂_ttû_0,T; L^2(Ω))^2 ) + ∂_t (u - η_h)_L^2(0, T; L^2(Ω))^2+A ∂_t( û -η̂_h) _L^2(0,T;L^2(Ω))^2 + e_h^0^2 + ê^0_h^2_L^2_A(Λ). Then, the result follows by using the errorestimates (<ref>) and (<ref>), approximation properties of the L^2 projections (<ref>), and the triangle inequality.In the case of graded meshes, i.e. under the same mesh assumptions as Corollary <ref>, almost optimal spatial convergence rates in the dG norm hold. For example, for k_1 = 1, we have that for any 1 ≤ m ≤ N_T,u_h^m- u^m^2 + û_h^m - û^m _L^2_A (Λ)^2 + C_coerc/4τ∑_n=1^m u_h^n -u^n ^2_≲τ^2 + h^2(1-2η) . The above estimate holds under the additional assumption thatu ∈ H^1(0,T;H^2(Ω\B_Λ) ∩ H^2(B_Λ)). The proof follows from the same argument as before where similar estimates to (<ref>) are used for the dG norm of η_h -u and of ∂_t (η_h -u). For k_1 > 1, one can also derive almost optimal rates under additional regularity requirements on the solution. We omit the details for brevity.§ EXTENSION TO 1D NETWORKS EMBEDDED IN A 3D DOMAINWe extend the above numerical method and model to a 1D network in a 3D domain.We adopt the notation of <cit.> where a hybridized dG method is used for convection diffusion problems in a network. Here, we only introduce Lagrange multipliers on the bifurcation nodes, and we couple the network model to the 3D equations. We do not analyze this dG method for the 3D-1D network model beyond showing well–posedness and local mass conservation at bifurcation points. The error analysis will be the object of future work.A network is represented by a finite, directed, and connected oriented graph 𝒢(𝒱, ℰ) where 𝒱 is theset of vertices and ℰ is the set of edges.We let ℰ(𝗏) denote the set of edges sharing a vertex 𝗏.The boundary of the graph is then defined by 𝒱_∂ = {𝗏∈𝒱, card(ℰ(𝗏)) = 1}. For a given edge 𝖾 = (𝗏_in^𝖾, 𝗏_out^𝖾), we define the function n_𝖾: 𝒱→{-1,0,1} withn_𝖾(𝗏_in^𝖾) = 1 ,n _𝖾(𝗏_out^ 𝖾) = -1, andn_𝖾(𝗏) = 0, ∀𝗏∈𝒱 \ {𝗏_in^𝖾,𝗏_out^𝖾}.The collection of bifurcation points is denoted by ℬ = {𝗏∈𝒱, card(ℰ(𝗏)) ≥3}. For each 𝖾∈ℰ, we define a surrounding cylinder B_𝖾 of cross–section Θ_𝖾 with area A_𝖾 and perimeter P_𝖾. The L_P^2 space over the graph is defined by L_P^2(𝒢) = {u:u_𝖾 =u|_𝖾∈ L^2_P_𝖾(𝖾), ∀𝖾∈ℰ}. This 1D-network is embedded in a 3D domain Ω. The surrounding cylinders B_𝖾 are all strictly included in Ω. 
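To make the graph bookkeeping concrete, the following minimal Python sketch (our own illustration on a hypothetical three-edge graph, not code from this work) encodes an oriented network as a list of edges 𝖾 = (𝗏_in, 𝗏_out) and recovers the orientation function n_𝖾, the boundary set 𝒱_∂ and the bifurcation set ℬ directly from the definitions above.

```python
# Minimal bookkeeping for an oriented 1D network (illustration only).
from collections import defaultdict

edges = [(0, 1), (1, 2), (1, 3)]      # a Y-shaped graph: one bifurcation at vertex 1

E = defaultdict(list)                 # E(v): edges sharing the vertex v
for e in edges:
    for v in e:
        E[v].append(e)

def n(e, v):
    """n_e(v) = +1 at the inlet vertex, -1 at the outlet vertex, 0 otherwise."""
    return 1 if v == e[0] else (-1 if v == e[1] else 0)

V_boundary = {v for v, es in E.items() if len(es) == 1}   # degree-one vertices
B = {v for v, es in E.items() if len(es) >= 3}            # bifurcation points

print(sorted(V_boundary))             # [0, 2, 3]
print(sorted(B))                      # [1]
print(n(edges[0], 1), n(edges[1], 1)) # -1 1
```

For this Y-shaped example, vertex 1 is the unique bifurcation point, the three degree-one vertices form 𝒱_∂, and the signs n_𝖾(𝗏) record whether an edge enters or leaves a given vertex, which is exactly what the flux conditions at bifurcations below keep track of.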
In Ω, we solve for u satisfying (in the distributional sense) - Δ u + ξ (u - û ) δ_𝒢 = f, inΩ,u = 0 on ∂Ω, and for each 𝖾∈ℰ, we solve for a 1D solution û_𝖾 satisfying -d_s (A_𝖾 d_s û_𝖾 ) + P_𝖾(û_𝖾 - u_𝖾) = f̂_𝖾in𝖾.The coefficient ξ is a piecewise positive constant on each edge of the graph.The function u is defined byu|_𝖾= u_𝖾 = 1/P_𝖾∫_∂Θ_𝖾 u , ∀𝖾∈ℰ.The functional ξ (u - û) δ_𝒢 is definedby ξ (u - û)δ_𝒢 (v)= ∑_𝖾∈ℰ∫_𝖾ξ_𝖾P_𝖾 (u_𝖾 - û_𝖾) v_𝖾,∀ v ∈ H^1(Ω).We supplement the above system with the following boundary conditions which impose conservation of fluxes and continuity at bifurcation points. On the boundary, we impose homogeneous Neumann conditions.∑_𝖾∈ℰ(𝗏) A_𝖾 d_s û_𝖾 (𝗏) n_𝖾(𝗏) = 0, and û_𝖾(𝗏) = û_𝖾'(𝗏), ∀ 𝗏∈ℬ, ∀ 𝖾, 𝖾' ∈ℰ(𝗏), A_𝖾 d_s û_𝖾 (𝗏) = 0, ∀ 𝗏∈𝒱_∂,𝖾∈ℰ(𝗏). To summarise, the 3D-1D network model consists of (<ref>)-(<ref>) with boundary conditions (<ref>)-(<ref>). The above model can also be found in <cit.>. We now introduce a dG formulation for this model.§.§ DG for the 3D-1D network modelFor each 𝖾∈ℰ,we denote by h_𝖾 the length of the edge 𝖾 andwe introduce a mesh and a space 𝕍_h^𝖾 of degree k_𝖾 similar to (<ref>). Then, we define the broken polynomial space 𝕍_h^𝒢 = {v̂_h:v̂_h|_𝖾 = v̂_𝖾,h∈𝕍_h^𝖾}.We will use a hybridization technique to handle the values of the discrete solution at the bifurcation points. Thus, we define 𝕍_h^ℬ = {w̃_h = (w̃_𝗏,h)_𝗏∈ℬ, ∑_𝗏∈ℬw̃_𝗏,h^2 < ∞}. We now define the form b_𝗏: (𝕍_h^Ω×𝕍_h^𝒢×𝕍_h^ℬ)^2 →ℝ which enforces conditions at the bifurcation points, see Remark <ref>. For 𝗏∈ℬ, define b_𝗏( (u_h, û_h, ũ_h), (w_h, ŵ_h, w̃_h)) = ∑_𝖾∈ℰ(𝗏)A_𝖾 d_s û_𝖾,h (𝗏) n_ 𝖾(𝗏)( ŵ_𝖾,h(𝗏) - w̃_𝗏,h)+∑_𝖾∈ℰ(𝗏) A_𝖾 d_s ŵ_𝖾,h (𝗏) n_𝖾(𝗏)( û_𝖾,h(𝗏) - ũ_𝗏,h)+∑_𝖾∈ℰ(𝗏)σ_𝗏/h_𝖾 (û_𝖾,h(𝗏) - ũ_𝗏,h)( ŵ_𝖾,h(𝗏) - w̃_𝗏,h) .The full dG formulation reads as follows. Find (u_h, û_h, ũ_h) ∈𝕍_h^Ω×𝕍_h^𝒢×𝕍_h^ℬ such that for all (w_h, ŵ_h, w̃_h) ∈𝕍_h^Ω×𝕍_h^𝒢×𝕍_h^ℬ, there holdsa_h (u_h, w_h) + ∑_𝖾 ∈ℰ b_𝖾(u_𝖾,h- û_𝖾,h, w_𝖾,h) = (f,w_h),∑_𝖾 ∈ℰ a_𝖾,h(û_𝖾,h, ŵ_𝖾,h) + ∑_𝖾 ∈ℰb_𝖾(û_𝖾,h - u_𝖾,h, ŵ_𝖾,h)+ ∑_𝗏 ∈ℬ b_𝗏( (u_h, û_h, ũ_h), (w_h, ŵ_h, w̃_h)) = ∑_𝖾 ∈ℰ (f̂_𝖾, ŵ_𝖾,h)_ L_A_𝖾^2(𝖾) . In the scheme above, the form a_h is the same one defined by (<ref>) and the forms a_𝖾,h and b_𝖾 correspond to the forms a_Λ,h and b_Λ with Λ =𝖾. For instance, wewriteb_𝖾(v̂, ŵ) = (ξ_𝖾v̂, ŵ)_L_P_𝖾^2(𝖾), ∀v̂, ŵ∈ L_P_𝖾^2(𝖾). For a given 𝗏∈ℬ, let w̃_h ∈𝕍_h^ℬ be such that w̃_𝗏,h= 1 and zero otherwise. Choosing (w_h, ŵ_h, w̃_h) = (0 , 0, w̃_h) in (<ref>) yields: ∑_𝖾∈ℰ(𝗏) A_𝖾 d_s û_𝖾,h(𝗏) n_𝖾(𝗏) + ∑_𝖾∈ℰ(𝗏)σ_𝗏/h_𝖾 (û_𝖾,h(𝗏) - ũ_𝗏,h)= 0 , ∀𝗏∈ℬ.That is, up to jump terms, the discrete dG scheme locally conserves the fluxes, see (<ref>),at each bifurcation point. There exists a unique solution for the problem given in (<ref>) and (<ref>). For any (u_h, û_h , ũ_h) ∈𝕍_h^Ω×𝕍_h^𝒢×𝕍_h^ℬ,let 𝒳 = a_h (u_h, u_h) + ∑_𝖾∈ℰ a_𝖾,h(û_𝖾,h, û_𝖾,h) +∑_e ∈ℰ b_𝖾 (u_𝖾,h - û_𝖾,h , u_𝖾,h - û_𝖾,h)+ ∑_𝗏∈ℬ b_𝗏( (u_h, û_h, ũ_h), (u_h, û_h, ũ_h)).It suffices to show that 𝒳≳u_h_^2 + ∑_𝖾∈ℰ ( |û_h |^2_𝕍_h^𝖾 + u_𝖾,h - û_e,h^2_L^2_P(𝖾)) + ∑_𝗏∈ℬ∑_𝖾∈ℰ(𝗏)σ_𝖾/h_𝖾 (û_𝖾,h(𝗏) - ũ_𝗏,h)^2,since the right hand side above defines a norm. Here, |· |_𝕍_h^𝖾 is defined in the same way as (<ref>). From application oftrace estimates, it is standard to show that for σ_𝗏 large enough, there exists a constant C_3>0 such that ∑_𝗏∈ℬ b_𝗏( (u_h, û_h, ũ_h), (u_h, û_h, ũ_h))+ 1/2∑_𝖾∈ℰ C_𝖾 |û_h|^2_𝒯_𝖾^h ≥∑_𝗏∈ℬ∑_𝖾∈ℰ(𝗏)C_3/h_𝖾 (û_𝖾,h(𝗏) - ũ_𝗏,h)^2,where C_𝖾 is the coercivity constant of a_𝖾,h, similar to (<ref>). 
It then follows that ∑_𝖾∈ℰ a_𝖾,h(û_𝖾,h, û_𝖾,h) +∑_𝗏∈ℬ b_𝗏(u_h, û_h, ũ_h), (u_h, û_h, ũ_h))≥1/2∑_𝖾∈ℰ C_𝖾|û_h|^2_𝒯_𝖾^h +∑_𝗏∈ℬ∑_𝖾∈ℰ(𝗏)C_3/h_𝖾 (û_𝖾,h(𝗏) - ũ_𝗏,h)^2.From here, we use the coercivity results (<ref>) and the definition of b_𝖾 to conclude that (<ref>) holds. We omit the details for brevity.§ NUMERICAL RESULTS§.§ Manufactured solutions with one vessel in a 3D domainIn this first example, we consider manufactured solutions and compute error rates. Let Ω = (-0.5,0.5)^3 contain Λ = { (0,0,z), z ∈ (-0.5,0.5) } with a surrounding cylinder of constant radius R = 0.05. Denoting by r the distance to the line Λ, the exact solutions are u = ξ/ξ+1 (1 - Rln (r/R) ) û ,r > R, ξ/ξ+1û,r ≤ R . ,andû = sin(π z) + 2. The above 3D solution is obtained from the observation that <cit.>, see also <cit.>: ∫_Ω - ( ∂_xx u + ∂_yy u)v = ∫_Γξ/ξ + 1ûv= -∫_ΛξP (u - û ) v .We set ξ = 1, and we modify the source terms f, f̂ and the boundary conditions so that the equations are satisfied. The parameters are set toϵ_1 = ϵ_2 = -1, k_1=k_2= 1, and σ_Ω = σ_Λ = 30. For all our examples, we use the FEniCS finite element framework <cit.> and the (FEniCS)_ii module <cit.>.We compute the solution(u_h, û_h),the L^2 and the H^1 norms of theerrorse_h = u - u_h and ê_h = û - û_h on a family of uniform meshes created by FEniCS “BoxMesh” with 6N^3 number of elements. The results and the rates of convergence are reported in Tables <ref> and <ref> for the 3D and the 1D approximation respectively. §.§ Manufactured solution for a vessel network. Next, we verify the convergence of the dG scheme for the 1D network model. Precisely, in this example, we now consider only the Poisson problem posed on the network -Δû = f̂ on 𝒢 complementedwith (<ref>) and homogeneous Dirichlet conditions on 𝒱_∂, and we do not solve for a 3D solution. The dG scheme for this 1D diffusion problem problem is given in (<ref>) with u_h =v_h = ξ = 0 and the penalty parameters set as σ_𝖾= σ_𝗏 =10. We consider the network embedded in ℝ^2 shown in <Ref> which includes 3 bifurcations, i.e. |ℬ| = 3, located at 𝗏_1=(0, 1), 𝗏_2= (-1, 2), 𝗏_3=(1, 2) while the remaining nodes are placed at 𝗏_0=(0, 0),𝗏_4=(-1.5, 3), 𝗏_5=(-0.5, 3), 𝗏_6=(0.5, 3), 𝗏_7=(1.5, 3). Given 𝒢, we consider the followingsolution and data û =y + cos 2π y, (x, y)∈𝖾_02 + 1/2√(2)(y-1), (x, y) ∈𝖾_1≤ i ≤ 22 + 1/2√(2) + 1/8√(5)(y-2),(x, y)∈𝖾_3≤ i ≤ 6 ,f̂ =4π^2cos 2π y,(x, y)∈𝖾_0 0, (x, y)∈𝖾_i≠ 0 . Using (<ref>) and the dG scheme with linear polynomials, <Ref> confirms the first order convergence of the method. The norm in <Ref> is given by ‖ (ê_h, ẽ_h)‖_𝕍^𝒢_h×𝕍^ℬ_h^2= ∑_𝖾∈ℰê_h_𝕍_𝖾 ^h^2 + ∑_𝗏∈ℬ∑_𝖾∈ℰ(𝗏)σ_𝖾/h_𝖾 (û_𝖾,h(v) - ũ_𝗏,h)^2,where ê_h_𝕍_𝖾^h^2 is a slight modification to (<ref>) to alsoinclude boundary terms.§.§ Coupled 3D-1D simulation in realistic networks.In <Ref>, we finally illustrate the capabilities of our dG scheme to model tissue micro-circulation in a realistic setting. To this end, we utilize the data set <cit.> which includes vasculature of a 1 mm^3 of a mouse cortex, and welet 𝒢 be defined in terms of arteries and venules of this network, leaving out the capillaries. The vessel radius in the network ranges approximately from 5 μm to 35 μm.The 3D domain Ω is then defined as a bounding box of 𝒢. Upon discretization, dim𝕍^Ω_h=196608, dim𝕍^𝒢_h=11196,and dim𝕍^ℬ_h=57. 
The solution fields obtained from (<ref>), considered with f=0, f̂=1 and homogeneous Dirichlet and Neumann conditions for u and û respectively, are shown in <Ref>.
§ CONCLUSIONS
Interior penalty discontinuous Galerkin methods are introduced for coupled 3D-1D problems. Such models arise in several areas of application, such as modeling flow and transport in vascularized tissue. We analyze dG approximations for the steady-state problem and a backward Euler dG method for the time-dependent problem. Our analysis is valid under minimal assumptions on the regularity of the solution and on the mesh. Almost optimal rates are recovered for graded meshes, under sufficient regularity assumptions. Further, we propose a novel dG method with hybridization for a network of vessels embedded in a 3D domain. The method, up to jump terms, locally conserves mass at bifurcation points. Numerical results corroborate our error analysis.
http://arxiv.org/abs/2312.16565v1
{ "authors": [ "Rami Masri", "Miroslav Kuchta", "Beatrice Riviere" ], "categories": [ "math.NA", "cs.NA", "65N30, 65M60" ], "primary_category": "math.NA", "published": "20231227130928", "title": "Discontinuous Galerkin methods for 3D-1D systems" }
http://arxiv.org/abs/2312.16532v1
{ "authors": [ "Gauhar Abbas", "Neelam Singh" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20231227112611", "title": "Dark-technicolour at low scale" }
[email protected] Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur - 741 246, WB, India [email protected] Department of Physics, Indian Institute of Technology Bombay, Mumbai - 400076, India 04.62.+v, 04.60.Pp
Neutron stars are known to have strong magnetic fields reaching as high as 10^15 Gauss, besides having strongly curved interior spacetime. So for computing an equation of state for neutron-star matter, the effect of the magnetic field as well as the curved spacetime should be taken into account. In this article, we compute the equation of state for an ensemble of degenerate fermions in the curved spacetime of a neutron star in the presence of a magnetic field. We show that the effect of curved spacetime on the equation of state is relatively stronger than the effect of observed strengths of magnetic field. Besides, a thin layer containing only spin-up neutrons is shown to form at the boundary of a degenerate neutron star.
Effects of magnetic field on the equation of state in curved spacetime of a neutron star
Susobhan Mandal
January 14, 2024
=========================================================================================
The astrophysical data suggest that the surface magnetic field of a typical neutron star is around 10^11 - 10^13 Gauss, whereas the internal field strength can reach up to 10^15 Gauss or even higher <cit.>. The dominant matter constituents of neutron stars are believed to be charge-less neutrons. However, they interact with a magnetic field through the non-minimal Pauli-Dirac gauge coupling due to their intrinsic magnetic moment. Therefore, the presence of a strong magnetic field is expected to play an important role in determining the thermodynamic properties of matter present inside neutron stars. At the same time, neutron stars are also expected to contain electrically charged particles like protons and electrons. These charged particles directly interact with a magnetic field and quantum mechanically form the well-known Landau levels. These Landau levels are bound states of charged particles. Therefore, it is important to study their role in the computation of the fermionic degeneracy pressure which makes compact stars such as neutron stars stable against gravitational collapse. The thermodynamic properties of a gas of charged particles under an external magnetic field in the Minkowski spacetime have been studied earlier in <cit.>. However, recent articles <cit.> show that the curved background geometry of a neutron star also plays a crucial role in determining the properties of the equation of state (EOS) of the matter present inside the star. In particular, the metric-dependent gravitational time-dilation effect leads to an enhancement of the stiffness of the EOS of matter. Consequently, such an EOS, referred to as the curved EOS, leads to an enhancement of the mass limits of neutron stars <cit.>. We have mentioned that the observed neutron stars are known to have strong magnetic fields. Therefore, it is important to take the magnetic field into account while computing the EOS for a neutron star in its curved spacetime.
The key idea that we use for computing the EOS here is the lesson from Einstein's general relativity that even in a curved spacetime one can always find a set of local coordinates in which the spacetime metric appears to be locally flat. This is unlike the usage of a globally flat spacetime, which is commonly employed in the literature to compute the EOS, referred to as the flat EOS, for neutron stars.
Subsequently, we employ the methods of thermal quantum field theory to compute the EOS, as pioneered by Matsubara <cit.>. The result derived here shows that for an ensemble of charge-less neutrons the magnetic field and the gravitational time-dilation both lead the EOS to become stiffer, whereas for an ensemble of charged fermions the magnetic field makes the EOS softer due to the formation of Landau levels. However, the change in the stiffness of the EOS due to the gravitational time-dilation effect is relatively stronger than that due to the observed strengths of magnetic field.
§ INTERIOR SPACETIME
In the presence of an axial magnetic field, the interior spacetime of a neutron star can be modelled by an axially symmetric spacetime. On the other hand, the spacetime metric of a slowly rotating star that preserves axial symmetry can be represented, in the natural units c = ħ = 1, by the following invariant line element <cit.> ds^2 = - e^2Φdt^2 + e^2ν dr^2 + r^2[dθ^2 + sin^2θ (dφ-ω dt)^2]  , where ω = ω(r) is the angular velocity acquired by an observer freely falling from infinity, a phenomenon referred to as the dragging of inertial frames. On the other hand, the radial variation of the metric function Φ = Φ(r) leads to the phenomenon of gravitational time dilation. We note that in the absence of the frame-dragging angular velocity ω, the spacetime metric (<ref>) represents a spherically symmetric spacetime. The contribution of inertial frame-dragging to the EOS is controlled by a dimensionless ratio (ω/m), and if we consider m to be the mass of neutrons then even for a rapidly rotating millisecond pulsar the dimensionless ratio is vanishingly small, (ω/m) ∼ 10^-22 <cit.>. Nevertheless, similar to the magnetic field B, the frame-dragging angular velocity ω couples to the spin-component of the Dirac field. However, as we have argued, the effect of inertial frame-dragging on the EOS is extremely small. So for the computation of the EOS in the presence of a magnetic field, we neglect the inertial frame-dragging and take into account only the effect of gravitational time-dilation, by considering the following invariant line element ds^2 = - e^2Φdt^2 + e^2ν dr^2 + r^2[dθ^2 + sin^2θ dφ^2]  , which essentially represents a spherically symmetric spacetime.
§ ANISOTROPIC PRESSURE DUE TO MAGNETIC FIELD
In order to study the interior spacetime, here we consider the stellar matter to be described by a perfect fluid with the stress-energy tensor T^M_μν = (ρ + P) u_μ u_ν + P g_μν , where u^μ is the 4-velocity of the stellar fluid satisfying u_μu^μ = -1, ρ is the energy density and P is the pressure of the fluid. On the other hand, the stress-energy tensor associated with the electromagnetic field is given by T^E_μν = 1/μ_0[ F_μαF_ ν^α - 1/4g_μνF_αβ F^αβ]  , where μ_0 is the magnetic permeability of vacuum and F_μν = ∂_μ𝒜_ν - ∂_ν𝒜_μ is the electromagnetic field tensor whose indices are contracted with respect to the spacetime metric. Therefore, the total stress-energy tensor T_μν = T^M_μν + T^E_μν can be expressed in the following form T_ ν^μ = diag( - ρ, P_r, P_t, P_t)  . We note that due to the presence of a magnetic field the total radial pressure P_r and the total tangential pressure P_t differ from each other. On the other hand, the total energy density ρ now includes contributions from both the stellar fluid and the magnetic field. The tt and rr components of Einstein's equation G_ ν^μ = 8π G T_ ν^μ corresponding to the metric (<ref>) lead to the following equations 8π G r^2ρ = e^-2ν(2rν' - 1) + 1  , 8π G r^2P_r = e^-2ν(2rΦ' + 1) - 1  .
By considering e^2ν = 1/(1 - 2Gℳ/r), we obtain the equations dΦ/dr = G( + 4π r^3 P_r)/r(r - 2 G )  ,  dℳ/dr = 4π r^2ρ . The conservation of stress-energy tensor leads to the equation as follows dP_r/dr = 2/r(P_t - P_r) - (P_r+ ρ)dΦ/dr . Additionally, we also have a second-order differential equation which followsfrom G_ θ^θ = 8π G T_ θ^θ equation and isgiven by 8π GP_t = e^-2ν[d^2Φ/dr^2 - dΦ/drdν/dr + (dΦ/dr)^2 + 1/r(dΦ/dr- dν/dr)]  . However, we note that the equation (<ref>) is not an independentequation and it can be obtained from the conservation equation andrr-component of Einstein's equation.For a detailed study on the anisotropic spherical star in general relativity,see <cit.>. Nevertheless, we note that for anaxial magnetic field B, the pressure components P_t and P_r would differfrom each other by the terms of 𝒪(B^2). So for the cases where𝒪(B^2) terms are negligible, the total pressure can be considered tobe isotropic. We shall see later that the observed field strength of even10^15 Gauss is significantly smaller compared to the characteristic fieldstrength B_c ≈ 10^20 Gauss associated with the nucleons. It allows usto neglect the terms of 𝒪(B^2) which in turn permits the exteriormetric to be described by the Schwarzschild metric such that metric functionΦ is subject to the boundary condition e^2Φ(R) = 1 - 2GM/R. For atypical neutron star having mass M = 1 M_⊙ and radius R=10 km, themetric function Φ(R) ≃ -0.17. Further, it follows from the equation(<ref>) that the values of Φ inside the star are lower thanΦ(R) as (dΦ/dr) is positive definite.§ LOCAL THERMAL EQUILIBRIUMDue to the hydrostatic equilibrium, the thermodynamic properties such as thepressure, the energy density vary radially inside a star. On the other hand,these thermodynamic properties are required to be uniform within a giventhermodynamical system in equilibrium. Nevertheless, these two seeminglydisparate aspects can be reconciled by introducing the concept of localthermodynamical equilibrium inside the star.In order to ensure the conditions forlocal thermodynamic equilibriuminside a star, we can choose a sufficiently small region but containing largenumber of degrees of freedom. Inside this small region the metric variationscan be neglected. For definiteness, we chose a box-shaped small region whosecenter is located at say r=r_0. By following the coordinate transformationsgiven in <cit.>, namely x = e^ν(r_0) r sinθ̅cosϕ,y = e^ν(r_0) r sinθ̅sinϕ, and z = e^ν(r_0) r cosθ̅ along with θ̅ = e^-ν(r_0)θ for small θ, we can reduce the metric(<ref>) to the following formds^2 = -e^2Φ(r_0)dt^2 + dx^2 + dy^2 + dz^2, in a locally Cartesian coordinates. The metric within the box(<ref>) contains the information about the metric function Φ =Φ(r_0), in contrast to the usage of a globally flat spacetime for computingthe matter EOS in the literature <cit.>.The metric function Φ is treated here as a constant within the scaleof the box, which is sufficient to describe the microscopic physics of the constituent particles. However, the metric function Φ varies at thescale of the star, as governed by the equations (<ref>). § NEUTRONS IN AN EXTERNAL MAGNETIC FIELD Neutrons are electrically neutral particles, hence, they do not couple minimally to the gauge field associated with the external magneticfield. However, neutrons possess a magnetic dipole moment due to their internalquark degrees of freedom. Consequently, under an external magnetic field,neutrons couple to the gauge field non-minimally through the Pauli-Diracinteraction. 
The corresponding action is given by S = -∫√(-g)d^4x ψ̅[ie_ a^μγ^a𝒟_μ + m - /2σ^μν F_μν]ψ ,where spinor field ψ represents the neutrons with mass mand ψ̅ = ψ^†γ^0 being its Dirac adjoint. Thetetrad components e^μ_a are defined as g_μνe^μ_ae^ν_b = η_ab whereg_μν is the spacetime metric whereas η_ab = diag(-1,1,1,1) is the Minkowski metric. The spin-covariantderivative is defined as 𝒟_μψ≡∂_μψ +Γ_μψ where spin connection Γ_μ is given by Γ_μ=-18η_ace_ν^c(∂_μe^ν_b + Γ^ν_μσe^σ_b )  [γ^a, γ^b] , with Γ^ν_μβ being the Christoffel connections. The Diracmatrices γ^a satisfy the Clifford algebra {γ^a,γ^b} =- 2η^ab𝕀 together with the relations (γ^0)^2 =𝕀 and (γ^k)^2 = -𝕀 for k=1,2,3. In the Pauli-Diracinteraction term,denotes the magnitude of magnetic moment ofneutrons and σ^μν =i/2e^μ_a e^ν_b [γ^a,γ^b]. §.§ Partition functionIn order to compute the partition function around a given a point inside thestar, we consider a small region around it where the metric can be reduced tothe form (<ref>). Within this box-shaped region the tetradcomponents can be expressed as e_ 0^t = e^-Φ,e_ 1^x =e_ 2^y = e_ 3^z = 1. Consequently, the spin-connection within thebox vanishes i.e. Γ_μ = 0. Additionally, we choose themagnetic field to be in the z-direction with its field strength being B.For such a magnetic field the gauge field components can be chosen as𝒜_μ = (0,0,B x,0). Therefore, within the Box, the action(<ref>) reduces to the following form S = -∫ d^4x ψ̅[iγ^0∂_t + e^Φ (iγ^k∂_k + m) - e^Φ BΣ_3]ψ , where Σ_3 = i2[γ^1,γ^2] = σ_3 ⊗𝕀_2 and σ_3 is the Pauli matrix associated with the spinoperator along z-direction.In functional integral approach, the partition function can be expressed as𝒵 = ∫𝒟ψ̅𝒟ψ e^-S^βby using the coherent states of the Grassmann fields <cit.>. Here S^β denotes the Euclidean actionobtained obtained through the Wick rotation t→ -iτ. By followingthe approach as given in <cit.>, we canexpress the Euclidean action as S^β = ∫_0^βdτ∫ d^3x ψ̅[- γ^0(∂_τ + μ + e^Φ B γ^0 Σ_3)+e^Φ (iγ^k∂_k + m)]ψ . The equilibrium temperature T of the system leads to the followinganti-periodic boundary condition on the Dirac fieldψ(τ,) = -ψ(τ+β,) , where β = 1/(k_B T) with k_B being the Boltzmann constant. By using theMatsubara frequencies ω_l = (2l+1) π/β where l is an integer, wecan express the field ψ in the Fourier domain as ψ(τ,) = 1/√(V)∑_l, e^-i(ω_lτ +·̨)ψ̃(l,)̨ , where volume of the box is now V = ∫ d^3x √(-η). The equation(<ref>) then leads the action (<ref>) to become S^β = ∑_l, ψ̅̃̅ β[ p + m̅]ψ̃ , where p = γ^0(iω_l - μ -e^Φ B γ^0Σ_3) + γ^k (_̨k e^Φ) and m̅ =m e^Φ.Using the results of Gaussian integral over the Grassmann fields and the Diracrepresentation of γ^a matrices, one can evaluate the partition function𝒵 for the particle sector as ln𝒵 = ∑_s = ±∑_ln(1 + e^β(μ_s -ε))  , where ε^2 = ε()̨^2 = e^Φ(^̨2 + m^2) and the modified chemical potential associated with the different spins ofneutrons are μ_s = μ + s e^Φ B  . In general, the presence of a magnetic field makes the dispersion relationanisotropic in the momentum space. As a result, the Fermi-surface is no longerspherical in nature, rather it becomes an ellipsoid. However, for (B/m)≪ 1 limit (which is typically the case inside a neutron star) andneglecting the anisotropy due to the fact (k_z/m) ≪ 1, we get the followingtwo dispersions ω = e^Φ (ε± B)  , which are two shifted spheres. 
In the thermodynamic limit, the summation over$̨ in the equation (<ref>) can be expressed as an integralover the momentum space that results in the following expression of thepartition function ln𝒵 = ∑_s = ±e^-3Φβ V/48π^2[2μ_sμ_sm^3 - 3m̅^2μ̅_sm^2]  ,whereμ_sm = √(μ_s^2 - m̅^2)andμ̅_sm^2 =μ_sμ_sm - m̅^2arcsinh(μ_sm/m̅). In arriving atthe expression (<ref>), we have neglected the temperaturecorrections of𝒪((βμ)^-2), given a degenerate star ischaracterized by the condition(βμ)≫1. Additionally, we haveomitted formally divergent zero-point energy terms. §.§ Pressure and energy densityWe can compute number density of neutrons from the partition function(<ref>) asn = (βV)^-1 (∂ln𝒵)/(∂μ)which leads ton = n_+ + n_- ,  with   n_± = e^-3Φ/6π^2μ_± m^3 .The equation (<ref>) can be used to express the modifiedchemical potentials in terms of the number densities of spin-up and spin-downneutrons respectively asμ_± = m e^Φ √((bn_±)^2/3 + 1)whereb = (6π^2)/m^3). We should mention here thatμ_±can beequivalently treated as independent variables in places ofμandB. Theequation (<ref>) then leads to the following relation √((bn_+)^2/3 + 1) - √((bn_-)^2/3 + 1) =2 (B/B_c)  ,where the constantB_c = (m/) ≈10^20Gauss. The constantB_chere signifies the characteristic scale of magnetic field associated withneutrons. For a grand canonical ensemble, we can compute total pressure from thepartition function (<ref>) asP = (βV)^-1ln𝒵 = P_+ + P_-where the pressure components associatedwith the different spins of neutrons are P_± = e^Φm^4/48π^2[ √((b n_±)^2/3 + 1){2(b n_±) - 3(b n_±)^1/3}. + .  3 arcsinh{ (b n_±)^1/3}] .For the metric (<ref>), the 4-velocity vector in the boxcorresponding to the perfect fluid form of the stellar fluid(<ref>) can be expressed asu^μ =e^-Φ(1, 0, 0, 0)along withits co-vectoru_μ = e^Φ(-1, 0, 0,0). Consequently, the energy densityρcan be expressed in terms of thepartition function as(ρ-μn)V = -(∂ln𝒵/∂β)<cit.> leading toρ= ρ_+ + ρ_-where ρ_± = - P_± + e^Φm^4/6π^2 (b n_±) √((b n_±)^2/3 + 1) .In the limitB→0, we note thatμ_+=μ_-. Consequently, in thislimit total pressurePand total energy densityρreduce to theexpressions of pressure and energy density for an ensemble of non-interactingdegenerate neutrons as expected. For a non-zeroB, it can be checked that thecorrections to the total pressurePand total energy densityρare of𝒪(B^2).The behaviour of total pressure as a function of number density fordifferent values of the magnetic field and the metric function is plotted inthe FIG. <ref>. On the other hand, the FIG.<ref> shows the behaviour of pressure, energy density ratio asa function of energy density and it shows the stiffening of theEOS due to the effects of both magnetic field and gravitational time dilation.§.§ Magnetic moment of a neutron starWe note from the equation (<ref>) that for a non-zero valueof magnetic fieldB, there is a population difference between different spinsof neutrons. As a result, number densities of spin-up and spin-down neutrons,n_+andn_-respectively, cannot vanish simultaneously at the boundaryof the star, and namely,n_-vanishes earlier inside the neutron star. Inparticular, whenn_-becomes zero then we obtainn_+ = 8/b[B/B_c(1 + B/B_c)]^3/2 .Consequently, due to the presence of a magnetic field there exists a thin layerat the boundary of a degenerate neutron star which contains only spin-upneutrons. In turn, the neutron star as a whole would acquire a net magneticmoment which would then naturally lead to an accretion of charge particlessurrounding the neutron star. 
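As a rough illustration of how this spin-split EOS can be evaluated, the following Python sketch (our own, not from this article; units with m = 1 so that b = 6π², and B/B_c, Φ treated as inputs) solves the constraint relating n_+ and n_- to B/B_c by one-dimensional root finding and assembles the total pressure P = P_+ + P_-.

```python
# Sketch: evaluate the neutron EOS P(n, B/B_c, Phi) from the spin-split densities.
import numpy as np
from scipy.optimize import brentq

b = 6 * np.pi**2                      # b = 6 pi^2 / m^3 in units with m = 1

def chi(ns):                          # sqrt((b n_s)^{2/3} + 1)
    return np.sqrt((b * ns)**(2.0 / 3.0) + 1.0)

def P_spin(ns, Phi):                  # pressure carried by one spin species
    t = (b * ns)**(1.0 / 3.0)
    return np.exp(Phi) / (48 * np.pi**2) * (
        np.sqrt(t**2 + 1.0) * (2 * t**3 - 3 * t) + 3 * np.arcsinh(t))

def pressure(n, B_over_Bc, Phi):
    # split n = n_+ + n_- such that chi(n_+) - chi(n_-) = 2 B/B_c
    f = lambda x: chi(x) - chi(n - x) - 2 * B_over_Bc
    n_plus = brentq(f, n / 2, n * (1 - 1e-12))
    return P_spin(n_plus, Phi) + P_spin(n - n_plus, Phi)

print(pressure(1.0, 0.0, 0.0))        # unpolarised degenerate gas
print(pressure(1.0, 1e-5, 0.0))       # tiny O(B^2) stiffening
print(pressure(1.0, 0.0, -0.17))      # time-dilation rescaling, e^Phi < 1
```

At B = 0 the two spin densities coincide and the unpolarized degenerate-gas pressure is recovered, rescaled overall by e^Φ; for B > 0 the population imbalance between n_+ and n_- produces the 𝒪(B²) corrections discussed above.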
In the case of rotating neutron stars, analogousthin layer containing only one kind of spins has been reported earlier<cit.> where it arises due to the dragging of inertialframes. § CHARGED FERMIONS IN AN EXTERNAL MAGNETIC FIELD We have mentioned earlier that primary constituents of a neutron starare believed to be neutrons. However, a neutron star is also expected to have a smaller fraction of electrically charged fermions such as protons andelectrons. Unlike neutrons,charged fermions couple minimally withthe gauge field associated with an external magnetic field. We shall, however,ignore the contributions from the electromagnetic self-interaction betweenthese fermions, as those are expected to be small <cit.>. The generally invariant action for an electrically charged Dirac fermionψcoupled to an electromagnetic gauge field𝒜_μis given by S = -∫√(-g)d^4x ψ̅[ie_ a^μγ^a ( 𝒟_μ - ie 𝒜_μ) + m ]ψ ,whereedenotes the electrical charge of the fermion. §.§ Partition function In order to evaluate the partition function, as earlier, we consider theexternal magnetic field to be along thez-direction and we choose the gaugefield components to be𝒜_μ = (0,0,Bx,0). Therefore, within thebox with the metric (<ref>), we can reduce the Dirac action(<ref>) to the following form S = -∫ d^4x ψ̅[iγ^0∂_t +e^Φ(iγ^k∂_k + m) + e^Φγ^2 eB x]ψ .As earlier, the partition function can be expressed as𝒵 =∫𝒟 ψ̅ 𝒟ψ e^-S^βwhereS^βdenotes the Euclidean action corresponding to the action(<ref>) and is given by S^β = ∫_0^βdτ∫ d^3x ψ̅[- γ^0(∂_τ + μ) + e^Φ (iγ^k∂_k + m) +e^Φγ^2 e B x]ψ . At thermal equilibrium, the Dirac field is subject to the anti-periodicboundary conditionψ(τ,) = -ψ(τ+β,)leading to the Matsubara frequenciesω_l = (2l+1) π/βwherelis an integer.Therefore, we can express the fieldψin the Fourier domain as ψ(τ,) = 1/√(L_y L_z)∑_l,k_y,k_z e^-i (ω_lτ + k_y y + k_z z)ψ_l(x,k_y,k_z) ,whereL_yandL_zdenote the length of the box in theyandzdirections respectively. The equations (<ref>,<ref>) then lead to S^β = ∑_l,k_y,k_z∫ dx  ψ̅_̅l̅ β[ 𝒟̃ + m̅] ψ_l ,where𝒟̃ ≡γ^a 𝒟̃_awith𝒟̃ = γ^0(iω_l - μ) + γ^1(ie^Φ∂_x) + γ^2 e^Φ (e B x + k_y) + γ^3 e^Φ k_z  .The partition function then can be expressed as 𝒵 = ∏_l,k_y,k_zdet[β(𝒟̃+ m̅)]  .By using the propertydet[β(𝒟̃ +m̅)] = det[γ^5β(𝒟̃ +m̅)γ^5] = det[β(-𝒟̃ +m̅)]whereγ^5 ≡iγ^0γ^1 γ^2γ^3,(γ^5)^2 = 𝕀and{γ^5,γ^a} = 0, one can showthat det[β(𝒟̃ + m̅)] =det[β^2(-𝒟̃^2 +m̅^2)]^1/2 .Using the properties of theγ^amatrices, we can express-𝒟̃^2 + m̅^2 = (ω_l + iμ)^2 + e^2Φ[ H_xy - eB Σ_3 + k_z^2 + m^2]  ,whereΣ_3 = i2[γ^1,γ^2] = σ_3 ⊗𝕀_2andH_xy = -∂_x^2 + (e B)^2 (x +k_y/eB)^2 .In order to evaluate the partition function (<ref>) wecan compute the trace over the eigenstates of the operatorΣ_3witheigenvalues2swheres=±12and of the operatorH_xywitheigenvalues(2n+1)|eB|wherenbeing non-negative integers. Itleads toln𝒵 = ∑_l,k_y,k_z,s,nln[β^2 {(ω_l + iμ)^2 + ε^2 }]  ,where ε^2 =e^2Φ[ eB (2n + 1 - 2s) + k_z^2 + m^2]  .In the equation (<ref>), for brevity of notation, the term|eB|is expressed aseBand we shall use this notation henceforth. One can carry out the summation over Matsubara frequenciesω_l(see<cit.>) which leads to the followingexpression of the partition function ln𝒵 = ∑_k_y,k_z,s,n[ln(1 + e^-β(ε - μ))+ ln(1 + e^-β(ε + μ)) ]  .In order to arrive at the equation (<ref>), formallydivergent terms such as the zero-point energy of fermions have been omitted. Thefirst and the second terms in the equation (<ref>)denotes the contributions from the particle and the anti-particlesectors respectively. 
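Since the spectrum of H_xy drives everything that follows, it may be worth checking it numerically; the sketch below (our own check, not from this article) discretizes H_xy = -∂_x² + (eB)²(x + k_y/eB)² by finite differences and confirms the Landau eigenvalues (2n+1)|eB|, independent of k_y.

```python
# Verify the Landau spectrum of H_xy = -d^2/dx^2 + (eB)^2 (x + k_y/eB)^2.
import numpy as np

eB, ky = 1.0, 0.7                     # k_y only shifts the oscillator centre
N, L = 1500, 30.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

main = 2.0 / h**2 + (eB * (x + ky / eB))**2   # diagonal of -d^2/dx^2 + V
off = -np.ones(N - 1) / h**2                  # off-diagonals of -d^2/dx^2

H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.round(np.linalg.eigvalsh(H)[:4], 3))  # ~ [1. 3. 5. 7.] = (2n+1)|eB|
```

The shift by k_y/eB moves the centre of the confining parabola but leaves the spectrum untouched; this k_y-degeneracy is precisely what is counted in the thermodynamic limit below.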
Henceforth, we shall consider only the particle sector. In the equation (<ref>), we note thatεisindependent ofk_y. However, in the equation (<ref>),k_yshifts the origin ofx-coordinate. Therefore, for a system ofcharged fermions in the given box, we must require|k_y/eB| ≤L_x/2. Byusing the approximation∑_k_y,k_z = (L_yL_z)/(2π)^2 ∫d k_y dk_z, wecan express the partition function for the particle sector as ln𝒵 =eB V/4π^2∑_s,n∫ dk_z ln(1 +e^-β(ε - μ))  ,whereVbeing the volume of the box. We note that in the partitionfunction (<ref>), we can replace the summation overthe indexsandnby a single summation over an indexℓas follows ln𝒵 = ln𝒵_0 + 2 ∑_ℓ=1ln𝒵_ℓ ,whereln𝒵_ℓ =eB V/2π^2∫_0^∞ dkln(1 + e^-β(ε_ℓ - μ))  ,withε_ℓ^2 =e^2Φ [ 2(eB)ℓ+ k^2 + m^2 ].The indexℓhere corresponds to the different Landau levels. From theequation (<ref>), we note that the Landau levels,other thanℓ=0, are doubly degenerate.By using the degeneracy condition of compact stars i.e.(βμ) ≫1, we can explicitly evaluateln𝒵_ℓas ln𝒵_ℓ =β V(eB)e^-Φ/4π^2[μμ_mℓ- m̅_ℓ^2 arcsinh (μ_mℓ/m̅_ℓ) ]  ,wherem̅_ℓ^2 = e^2Φ [m^2 +2(eB)ℓ]andμ_mℓ = √(μ^2 - m̅_ℓ^2). In order to ensure positivevalues forμ_mℓ, we must restrict the summation over Landau levels upto anℓ_max, given by ℓ_max =μ_m^2/2(eB)e^2Φ with μ_m = √(μ^2 - m̅^2) .We can express the total partition function (<ref>) asln𝒵 = ln𝒵_S+ ln𝒵_D ,whereln𝒵_S ≡ln𝒵_ℓ=0represents thecontributions from the singlet Landau level and is given by ln𝒵_S =(eB)β V e^-Φ/4π^2[μμ_m - m̅^2arcsinh(μ_m/m̅)] .On the other handln𝒵_D ≡2 ∑_ℓ=1^ℓ_maxln𝒵_ℓrepresents the contributions from the doublydegenerate Landau levels. With the aid of Poisson formula, by neglecting theoscillating part, we can evaluate it asln𝒵_D = 2∫_1^ℓ_max dℓln𝒵_ℓ<cit.>and it leads to ln𝒵_D =β V e^-3Φ/24π^2[ 2μμ_m1^3 - 3m̅_1^2μ̅_m1^2 ]  ,wherem̅_1 = m̅_ℓ=1andμ̅_m1^2 = μμ_m1-m̅_1^2 arcsinh(μ_m1/m̅_1). It can be checked that in theabsence of the magnetic field i.e. asB→0,m̅_1→m̅,the total partition function (<ref>) reduces exactly tothe partition function of degenerate fermions as given in<cit.>.§.§ Pressure and energy density Using the partition function (<ref>), we cancompute the number density of the fermions as n = 1/β V∂ln𝒵/∂μ= e^-3Φ/3π^2μ_m1^3+ eB e^-Φ/2π^2μ_m ,where we have used the properties(∂μ_m/∂μ) = μ/μ_m,(∂μ_m1/∂μ) = μ/μ_m1and(∂μ̅_m1^2 /∂μ) = 2μ_m1.For convenience, we now defineb = (3π^2/m^3)andB_c = (m^2/e)which then allows us to expressμ_m, up to𝒪(B^2), as μ_m = m e^Φ[ (bn)^1/3 + (B/B_c)/2(bn)^1/3]  .We note that the constantsbandB_cfor charged fermions differ fromthe constants associated with neutrons. As earlier, we can compute thepressure asP = (βV)^-1 ln𝒵and express it asP = (P_0 +P_B)where magnetic field independent part of the pressure is P_0 =m^4 e^Φ/24π^2[ √((b n)^2/3 + 1){2(b n) - 3(b n)^1/3}.+ .  3 arcsinh{ (b n)^1/3}] ,and the magnetic field dependent part is P_B =m^4 e^Φ/12π^2B/B_c[ 3 arcsinh{ (b n)^1/3}- (b n) + 3(b n)^1/3/√((b n)^2/3 + 1)] .Similarly, the energy densityρcan be expressed in terms of thepartition function as(ρ-μn)V = -(∂ln𝒵/∂β)leading toρ= ρ_0 + ρ_Bwhere magnetic field dependent part of the energy density is ρ_0 =- P_0 + m^4 e^Φ/3π^2 (b n) √((b n)^2/3 + 1) ,and magnetic field dependent part is ρ_B =- P_B + m^4 e^Φ/6π^2B/B_c(bn)/√((bn)^2/3 + 1) .We again note that the EOS for an ensemble of electrically charged fermionsunder an external magnetic field and computed in the curved spacetime dependson the gravitational time dilation through the metric functionΦ, inaddition to the magnetic fieldB. 
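To get a feel for the characteristic scale B_c = m²/e that controls these Landau-level corrections, one can restore dimensions, B_c = m²c²/(eħ) in SI units, and compare a few species; the numbers below (our own rough estimates, not from this article, with the up-quark mass m_q ≈ 2.2 MeV quoted in the next subsection) reproduce the ∼10^20 Gauss scale for nucleons and ∼10^15 Gauss for a light quark.

```python
# Back-of-the-envelope characteristic fields B_c = m^2 c^2 / (q hbar), in Gauss.
hbar, c, e = 1.055e-34, 2.998e8, 1.602e-19    # SI units
MeV = 1.783e-30                                # kg per MeV/c^2

species = {                                    # (mass in kg, charge in C)
    "electron": (9.109e-31, e),
    "proton":   (1.673e-27, e),
    "up quark": (2.2 * MeV, 2 * e / 3),
}

for name, (m, q) in species.items():
    Bc_gauss = m**2 * c**2 / (q * hbar) * 1e4  # 1 T = 10^4 G
    print(f"{name:9s} B_c ~ {Bc_gauss:.1e} G")
# electron  B_c ~ 4.4e+13 G,  proton B_c ~ 1.5e+20 G,  up quark B_c ~ 1.2e+15 G
```

Observed neutron-star fields of up to 10^15 Gauss are therefore a tiny fraction of B_c for protons and neutrons, but of order B_c for light quarks, which motivates the probe proposed below.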
As expected, in the limit B → 0, the total pressure P and energy density ρ reduce to the standard expressions for degenerate fermions. The computed EOS in this section is valid for an ensemble of charged degenerate fermions in a compact star, and in principle it could be used to describe degenerate protons and electrons in a neutron star as well as degenerate electrons in a white dwarf star. The different properties of the EOS for an ensemble of protons in a neutron star are plotted in FIG. <ref> and FIG. <ref>. In particular, FIG. <ref> shows that, unlike the case of neutrons, the effect of an external magnetic field on degenerate protons makes the corresponding EOS softer, essentially due to the formation of Landau levels, which are bound states.
§.§ Possible probe for de-confined quarks
We note that for electrically charged fermions, an external magnetic field B leads to 𝒪(B) corrections to the EOS. Further, these modifications are enhanced by the effects of curved spacetime, and quantitatively these enhancements depend on the specific mass-radius curve of the star. Therefore, in principle one may use the presence of a magnetic field as a possible probe for the existence of de-confined quarks which may be present in the core of a neutron star (for example, see <cit.>). The quarks are known to be lighter than the nucleons. For example, the up quark has a mass, say m_q, of around 2.2 MeV and it has electrical charge e_q = 2e/3, which implies its characteristic magnetic field to be B_c = m_q^2/e_q ∼ 10^15 Gauss. Therefore, if the core of a neutron star has de-confined quark degrees of freedom and a magnetic field of around 10^15 Gauss, as indicated by observations, then the EOS near the core of a neutron star should pick up substantial corrections due to the magnetic field.
§ DISCUSSIONS
In summary, in this article we have shown that for an ensemble of electrically neutral degenerate neutrons both the magnetic field and the gravitational time-dilation lead the EOS to become stiffer. However, for electrically charged fermions the magnetic field makes the EOS softer due to the formation of Landau levels. Nevertheless, the change in the EOS due to the gravitational time dilation is relatively stronger than that due to the observed strengths of magnetic field. We have shown that in the presence of a non-zero magnetic field, a thin layer containing only spin-up neutrons would form at the boundary of a degenerate neutron star. Hence, a neutron star would acquire a non-zero magnetic moment, which in turn would lead to an accretion of charged particles surrounding the star. Further, we have argued that a strong magnetic field can act as a possible probe for the existence of de-confined quarks in the core of a neutron star, where the effects of curved spacetime would enhance the modifications of the EOS.
SM is supported by SERB-Core Research Grant (Project RD/0122-SERB000-044). GMH acknowledges support from the grant no. MTR/2021/000209 of the SERB, Government of India.
http://arxiv.org/abs/2312.16589v1
{ "authors": [ "Golam Mortuza Hossain", "Susobhan Mandal" ], "categories": [ "gr-qc", "astro-ph.HE", "hep-th" ], "primary_category": "gr-qc", "published": "20231227143046", "title": "Effects of magnetic field on the equation of state in curved spacetime of a neutron star" }
Yau Mathematical Science Center and Department of Mathematics, Tsinghua University, Beijing, China [email protected]
We introduce a quantum entropy for bimodule quantum channels on finite von Neumann algebras, generalizing the remarkable Pimsner-Popa entropy. The relative entropy for Fourier multipliers of bimodule quantum channels establishes an upper bound on the quantum entropy. Additionally, we present the Araki relative entropy for bimodule quantum channels, revealing its equivalence to the relative entropy for Fourier multipliers and demonstrating its left/right monotonicities and convexity. Notably, the quantum entropy attains its maximum if there is a downward Jones basic construction. By considering the Rényi entropy for Fourier multipliers, we find a continuous bridge between the logarithm of the Pimsner-Popa index and the Pimsner-Popa entropy. As a consequence, the Rényi entropy at 1/2 serves as a criterion for the existence of a downward Jones basic construction.
Relative Entropy for Quantum Channels
Zishuo Zhao
=====================================
§ INTRODUCTION
Relative entropy, introduced independently by Kullback and Leibler, serves as a measure quantifying the disparity between two probability distributions. Umegaki <cit.> expanded the concept of relative entropy to encompass density matrices within quantum systems. Building on this foundation, Connes and Størmer <cit.> delved into the study of relative entropy for subalgebras. In a pivotal contribution, Pimsner and Popa investigated relative entropy for finite von Neumann algebras in their work <cit.>, coining the term "Pimsner-Popa entropy." Their study established a profound connection: the Jones index is finite if and only if the Pimsner-Popa inequalities hold, equivalently, if and only if the Pimsner-Popa entropy is finite. Quantum relative entropy and quantum channels assume pivotal roles in the exploration of quantum information theory. A bimodule quantum channel, denoted as Φ:ℳ→ℳ, is characterized by its preservation of a *-subalgebra 𝒩. In our pursuit of advancing the understanding of bimodule quantum channels, we propose several relative entropies. Drawing inspiration from the foundational work of Connes-Størmer and Pimsner-Popa, we introduce the Pimsner-Popa entropy H(Φ|Ψ) tailored for bimodule quantum channels Φ and Ψ. Employing the framework of quantum Fourier analysis (<cit.>, <cit.>, <cit.>), we define the relative entropy for bimodule quantum channels as the quantum relative entropy D(Φ‖Ψ) of the Fourier multipliers of Φ and Ψ, which determine the information of Φ and Ψ completely.
Our subsequent exploration aims to demonstrate that H(Φ|Ψ) ≤ D(Φ‖Ψ). With insights from the spin model, conventional quantum channels operating on finite quantum systems manifest as bimodule quantum channels. In a surprising turn of events, we have substantiated that when the inclusion facilitates a downward Jones basic construction, it follows that H(Φ|Ψ) = D(Φ‖Ψ). In the realm of infinite quantum systems, Araki conducted a systematic exploration of relative entropy for normal states, extending the notion from finite quantum systems. Drawing inspiration from Araki's pioneering work, we introduce a relative entropy, denoted as S_τ(Φ, Ψ), tailored for (bimodule) quantum channels Φ and Ψ. Our investigation reveals that this entropy exhibits both left and right monotonicity. Consequently, we obtain convexity. In comparison with the Pimsner-Popa entropy designed for bimodule quantum channels, we observe that H(Φ|Ψ) ≤ S_τ(Φ,Ψ). Inspired by the study of the relation between the Pimsner-Popa index and the Rényi entropy for subalgebras in <cit.>, we consider the Rényi entropy between Fourier multipliers. We find that when a downward Jones basic construction exists, the Rényi entropy S_p(Φ|Ψ), p ∈ [1, ∞], for Fourier multipliers forms a continuous bridge between the logarithm of the Pimsner-Popa index λ(Φ,Ψ) and the Pimsner-Popa entropy H(Φ|Ψ) for bimodule quantum channels, enhancing the result in <cit.>: -logλ(Φ,Ψ) ≥ S_p(Φ|Ψ) ≥ H(Φ|Ψ). By comparing the Rényi entropy at 1/2 and the Pimsner-Popa entropy, we obtain a criterion for the existence of downward basic constructions when ℳ is a finite factor. It is remarkable that when 𝒩⊂ℳ is a finite-index subfactor of type II_1, Pimsner and Popa found a closed formula for H(ℳ|𝒩) that reflects the Jones index <cit.> and the extremality of the subfactor. In this paper, we study the Connes-Størmer relative entropy for completely positive bimodule maps. Using the theory of quantum Fourier analysis (<cit.>, <cit.>, <cit.>) we obtain an upper bound for H(Φ|Ψ). We then identify this upper bound as the relative entropy, in the sense of Araki <cit.>, between certain states that are related to completely positive maps through Connes' correspondence. From this point of view, we propose a relative entropy for completely positive maps in general. Based on this notion, we obtain:
Suppose 𝒩⊂ℳ is a finite inclusion of finite von Neumann algebras, and Φ,Ψ∈𝐂𝐏_𝒩(ℳ) with Φ≼Ψ. Then H(Φ|Ψ) ≤ S_τ_ℳ(Φ,Ψ).
Surprisingly, we prove that if the inclusion admits a downward Jones basic construction, then this upper bound is sharp. Precisely, we have the following theorem:
Let 𝒩⊂ℳ be a finite inclusion of finite von Neumann algebras with downward Jones basic construction 𝒩_-1⊂𝒩⊂^e_-1ℳ. Then for any Φ,Ψ∈𝐂𝐏_𝒩(ℳ) with Φ≼Ψ, H(Φ|Ψ) = S_τ_ℳ(Φ,Ψ).
Computation of S_τ_ℳ(Φ,Ψ) in the case Φ = id_ℳ and Ψ = E_𝒩 coincides with the formula obtained by Pimsner and Popa. Therefore, by the existence of the downward Jones basic construction for subfactors as in <cit.>, our result generalizes that of Pimsner and Popa and reveals new connections between several notions of relative entropy. We further prove the left and right monotonicities of Araki's relative entropy S_ϕ(Φ, Ψ) for completely positive maps.
[Monotonicities] Suppose that 𝒜, ℬ, 𝒞 are von Neumann algebras and ϕ is a normal faithful state on 𝒞. Then * For Ψ_1 ≼ Φ_1∈𝐂𝐏(ℬ,𝒞) and Ψ_2∈(𝒜,ℬ): S_ϕ(Φ_1Ψ_2,Ψ_1Ψ_2) ≤ S_ϕ(Φ_1,Ψ_1). * For Ψ_1∈(ℬ,𝒞) and Φ_2 ≼ Ψ_2∈𝐂𝐏(𝒜,ℬ), S_ϕ(Ψ_1Φ_2,Ψ_1Ψ_2) ≤ S_ϕ∘Ψ_1(Φ_2,Ψ_2).
The paper is organized as follows.
In Section 2, we review finite inclusions of finite von Neumann algebras, completely positive maps and completely positive bimodule maps. In Section 3, we introduce the Pimsner-Popa relative entropy for completely positive maps. In Section 4, we show the equality between the Pimsner-Popa relative entropy and the relative entropy for Fourier multipliers when the inclusion admits a downward basic construction. In Section 5, we study comparable completely positive maps and their derivatives. In Section 6, we introduce Araki's relative entropy for comparable completely positive maps and prove its monotonicity and convexity. In Section 7, we study the Rényi relative entropies for completely positive maps.

The author would like to express his gratitude to Zhengwei Liu and Jinsong Wu for their constant support and encouragement. The author was supported by BMSTC and ACZSP (Grant No. Z221100002722017) and by Beijing Natural Science Foundation Key Program (Grant No. Z220002).

§ PRELIMINARIES

In this section, we review the basic theory of inclusions of finite von Neumann algebras and their completely positive bimodule maps.

§.§ Jones basic construction

Let ℳ be a finite von Neumann algebra with a faithful normal normalized trace τ_ℳ, and let 𝒩⊂ℳ be an inclusion. We denote the restriction of τ_ℳ to 𝒩 as τ_𝒩, and let E_𝒩 = E^ℳ_𝒩 be the trace-preserving conditional expectation from ℳ onto 𝒩. We fix a Pimsner-Popa basis {η_j}_j for the right 𝒩-module L^2(ℳ)_𝒩; that is, ∑_j η_jE_𝒩(η^*_jx) = x for all x∈ℳ. Consider the operators L_τ_𝒩(η_j) from L^2(𝒩)_𝒩 to L^2(ℳ)_𝒩 defined as L_τ_𝒩(η_j)(yΩ_𝒩) = η_jyΩ_ℳ, y∈𝒩. Then {η_j}_j being a basis implies that ∑_j L_τ_𝒩(η_j)L^*_τ_𝒩(η_j) = 1 as operators on L^2(ℳ). We say 𝒩⊂ℳ is a finite inclusion if there exists a finite Pimsner-Popa basis. In such a case, the Jones index of the inclusion 𝒩⊂ℳ is defined to be δ^2 = [ℳ:𝒩] := dim_𝒩L^2(ℳ) and can be computed as δ^2 = ∑_jτ_ℳ(η^*_jη_j).

Let J_ℳ be the modular conjugation on L^2(ℳ) associated to τ_ℳ and e_𝒩 be the orthogonal projection with range the closure of 𝒩Ω_ℳ. The Jones basic construction for 𝒩⊂ℳ is defined as ℳ_1 = J_ℳ𝒩'J_ℳ = ⟨ℳ, e_𝒩⟩, with a canonical trace
τ_ℳ_1(z) = δ^{-2}∑_j⟨ z(η_jΩ_ℳ),η_jΩ_ℳ⟩, z∈ℳ_1,
and the canonical trace on 𝒩' = J_ℳℳ_1J_ℳ is defined as τ_𝒩'(z') = τ_ℳ_1(J_ℳz'J_ℳ), z'∈𝒩'. Both traces are independent of the choice of the basis. Explicitly, we have
τ_ℳ_1(xe_𝒩y) = δ^{-2}τ_ℳ(xy), x,y∈ℳ.

In general, we won't have τ_ℳ_1|_ℳ = τ_ℳ. Let h_{ℳ_1,ℳ} be the unique positive operator in the center Z(ℳ) of ℳ such that
τ_ℳ(h_{ℳ_1,ℳ}x) = τ_ℳ_1(x), x∈ℳ.
We note that by Equation (<ref>):
E^{ℳ_1}_ℳ(e_𝒩) = δ^{-2}h^{-1}_{ℳ_1,ℳ}.

Now we perform the Jones basic construction for the inclusion ℳ⊂ℳ_1, obtaining ℳ_2 = J_ℳ_1ℳ'J_ℳ_1 = ⟨ℳ_1, e_ℳ⟩, where e_ℳ is the orthogonal projection on L^2(ℳ_1) onto L^2(ℳ). We have that ∑_j L_τ_ℳ(δη_je_𝒩)L^*_τ_ℳ(δη_je_𝒩) = 1, where L_τ_ℳ(δη_je_𝒩), mapping L^2(ℳ)_ℳ to L^2(ℳ_1)_ℳ, is defined as L_τ_ℳ(δη_je_𝒩)(xΩ_ℳ) = δη_je_𝒩xΩ_ℳ_1 for all x∈ℳ. Therefore we can define the canonical trace τ_ℳ_2 as
τ_ℳ_2(z) = ∑_j ⟨ z(η_je_𝒩Ω_ℳ_1),η_je_𝒩Ω_ℳ_1⟩, z∈ℳ_2.
We check that τ_ℳ_2 agrees with τ_ℳ_1 on ℳ_1. In fact, for all x,y∈ℳ:
τ_ℳ_2(xe_𝒩y) = ∑_j ⟨ xe_𝒩yη_je_𝒩Ω_ℳ_1,η_je_𝒩Ω_ℳ_1⟩ = ∑_jτ_ℳ_1(xE_𝒩(yη_j)e_𝒩η^*_j) = δ^{-2}∑_jτ_ℳ(xE_𝒩(yη_j)η^*_j) = δ^{-2}τ_ℳ(xy) = τ_ℳ_1(xe_𝒩y).
In addition, we have that
τ_ℳ_2(xe_𝒩ye_ℳ) = δ^{-2}τ_ℳ_2(xh^{-1}_{ℳ_1,ℳ}ye_ℳ) = δ^{-2}∑_j ⟨ xyh^{-1}_{ℳ_1,ℳ}e_ℳ(η_je_𝒩Ω_ℳ_1),η_je_𝒩Ω_ℳ_1⟩ = δ^{-4}∑_j ⟨ xyh^{-1}_{ℳ_1,ℳ}η_jh^{-1}_{ℳ_1,ℳ}Ω_ℳ_1,η_je_𝒩Ω_ℳ_1⟩ = δ^{-4}τ_ℳ_1(xyh^{-2}_{ℳ_1,ℳ}) = δ^{-2}τ_ℳ_1(xe_𝒩yh^{-1}_{ℳ_1,ℳ}),
so we obtain that
E^{ℳ_2}_{ℳ_1}(e_ℳ) = δ^{-2}h^{-1}_{ℳ_1,ℳ}.
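For the reader's convenience, we include a small numerical illustration in Python (this is our addition, not part of the original argument; the inclusion, the basis, and all variable names are illustrative choices). For the toy inclusion of the diagonal algebra 𝒩 = D_n inside ℳ = M_n(ℂ) with normalized trace, the matrix units form a Pimsner-Popa basis, the index is δ² = n, and the Pimsner-Popa inequality E_𝒩(x) ≥ δ^{-2}x can be checked directly:

import numpy as np

# toy example (ours): N = diagonal matrices inside M = M_n(C)
n = 3
tau = lambda x: np.trace(x).real / n      # normalized trace on M_n(C)
E = lambda x: np.diag(np.diag(x))         # trace-preserving expectation onto the diagonal

# Pimsner-Popa basis of M_n(C) over D_n: the matrix units e_ij
basis = [np.outer(np.eye(n)[i], np.eye(n)[j]) for i in range(n) for j in range(n)]

rng = np.random.default_rng(0)
x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# basis relation: sum_j eta_j E(eta_j^* x) = x
assert np.allclose(sum(eta @ E(eta.conj().T @ x) for eta in basis), x)

# index: delta^2 = sum_j tau(eta_j^* eta_j); here n^2 terms of trace 1/n each, so delta^2 = n
delta2 = sum(tau(eta.conj().T @ eta) for eta in basis)

# Pimsner-Popa inequality: E(a) - a/delta^2 is positive semidefinite for a >= 0
a = x @ x.conj().T
print(delta2, np.linalg.eigvalsh(E(a) - a / delta2).min())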
§.§ Bimodules

The point of view of bimodule theory is indispensable for our analysis. Our reference for basic bimodule theory for finite von Neumann algebras is <cit.>. We view L^2(ℳ), together with the left action of ℳ and the right action of 𝒩, as an ℳ-𝒩 bimodule and denote it as _ℳL^2(ℳ)_𝒩. Then we identify ℳ_1 = End(L^2(ℳ)_𝒩), and 𝒩'∩ℳ_1 = End(_𝒩L^2(ℳ)_𝒩). Similarly, with the left action of 𝒩 and the right action of ℳ, End(_𝒩L^2(ℳ)_ℳ) is identified with 𝒩'∩ℳ. Notice that we have an anti-isomorphism between 𝒩'∩ℳ and ℳ'∩ℳ_1 given by the modular conjugation on L^2(ℳ).

Now we consider the bimodule _ℳL^2(ℳ_1)_ℳ. It is well known <cit.> that _ℳL^2(ℳ_1)_ℳ is unitarily equivalent to _ℳL^2(ℳ)⊗_𝒩L^2(ℳ)_ℳ, with the equivalence given by:
δ xe_𝒩yΩ_ℳ_1↦ xΩ_ℳ⊗_𝒩yΩ_ℳ, x,y∈ℳ.
Because of this equivalence, we will identify ℳ_2 with End(L^2(ℳ)⊗_𝒩L^2(ℳ)_ℳ). It follows that End(_𝒩L^2(ℳ)⊗_𝒩L^2(ℳ)_ℳ) is identified with 𝒩'∩ℳ_2.

By the isomorphism above, the standard left action of ℳ_1 on L^2(ℳ_1) translates into the action on L^2(ℳ)⊗_𝒩L^2(ℳ) given by
xe_𝒩y(x_0Ω_ℳ⊗_𝒩y_0Ω_ℳ) = xE_𝒩(yx_0)Ω_ℳ⊗_𝒩y_0Ω_ℳ, x,x_0,y,y_0∈ℳ.
That is, ℳ_1 acts on the first component. Thus the inclusion ℳ_1↪ℳ_2 corresponds to the inclusion
End(L^2(ℳ)_𝒩)∋ z↦ z⊗_𝒩1∈End(L^2(ℳ)⊗_𝒩L^2(ℳ)_ℳ).
The coincidence of τ_ℳ_2 and τ_ℳ_1 on ℳ_1 can then be expressed as τ_ℳ_2(z⊗_𝒩1) = τ_ℳ_1(z), z∈ℳ_1.

§.§ Completely positive maps

Suppose 𝒜 and ℬ are von Neumann algebras and Φ:𝒜→ℬ is a linear map. The map Φ is called positive if Φ(𝒜_+)⊆ℬ_+. The map Φ is completely positive if Φ⊗𝕀_n:𝒜⊗ M_n(ℂ)→ℬ⊗ M_n(ℂ) is positive for all positive integers n, where (Φ⊗𝕀_n)((x_ij)^n_{i,j=1})=(Φ(x_ij))^n_{i,j=1}, x_ij∈𝒜. The map Φ is called unital if Φ(1_𝒜)=1_ℬ, and faithful if Φ(x^*x)≠ 0 whenever x≠ 0. The map Φ is called normal if it is continuous with respect to the ultraweak topologies of 𝒜 and ℬ. By a quantum channel we mean a normal unital completely positive map, and we denote the set of quantum channels from 𝒜 to ℬ as 𝐔𝐂𝐏(𝒜,ℬ). Note that 𝐔𝐂𝐏(𝒜,ℬ) is a convex set.

Now we briefly recall the concept of the correspondence (bimodule) of a completely positive map, which was introduced by Connes in <cit.> and developed further in the context of II_1 factors in <cit.>. Let ϕ be a normal faithful state on ℬ, and (L^2(ℬ,ϕ),π_ϕ,Ω_ϕ) be the GNS construction. The sesquilinear form ⟨·, ·⟩_0 on 𝒜⊗ L^2(ℬ, ϕ) is defined as
⟨ a_1⊗ξ_1, a_2⊗ξ_2⟩_0=⟨π_ϕ(Φ(a_2^*a_1))ξ_1, ξ_2⟩,
whenever a_1, a_2∈𝒜 and ξ_1,ξ_2∈ L^2(ℬ, ϕ). The kernel 𝒦_Φ of the sesquilinear form ⟨·, ·⟩_0 is given by
𝒦_Φ={x∈𝒜⊗ L^2(ℬ,ϕ): ⟨ x,y⟩_0=0, ∀ y∈𝒜⊗ L^2(ℬ,ϕ)},
which is invariant under the left action of 𝒜 and the right action of ℬ. The sesquilinear form ⟨·,·⟩_0 then induces an inner product ⟨·,·⟩_Φ on the quotient 𝒜⊗ L^2(ℬ, ϕ)/𝒦_Φ, and we denote by ℋ^Φ the Hilbert space obtained by completing 𝒜⊗ L^2(ℬ, ϕ)/𝒦_Φ with respect to ⟨·,·⟩_Φ. We denote the quotient map from 𝒜⊗ L^2(ℬ,ϕ) to ℋ^Φ as [·]_Φ. The left action of 𝒜 on ℋ^Φ is denoted as π_Φ, i.e.
π_Φ(a)[a_0⊗ξ]_Φ=[aa_0⊗ξ]_Φ,
where a, a_0∈𝒜, ξ∈ L^2(ℬ, ϕ). Similarly, the right action of ℬ on ℋ^Φ is denoted as π'_Φ, i.e.
π'_Φ(b)[a_0⊗ξ]_Φ=[a_0⊗ξ b]_Φ,
where a_0∈𝒜 and b∈ℬ. The intertwiner of right ℬ-modules v_{Φ,ϕ}:L^2(ℬ,ϕ)→ℋ^Φ is determined by v_{Φ,ϕ}Ω_ϕ=[1_𝒜⊗Ω_ϕ]_Φ. The map v_{Φ,ϕ} is an isometry if and only if Φ is unital. Moreover,
Φ(·)=v^*_{Φ,ϕ}π_Φ(·)v_{Φ,ϕ}.
The triple (ℋ^Φ,π_Φ,v_{Φ,ϕ}) is called a dilation of Φ. It is clear that ℋ^Φ, equipped with the actions of 𝒜 and ℬ, is an 𝒜-ℬ-bimodule, denoted by _𝒜ℋ^Φ_ℬ. It can be shown that the unitary equivalence class of _𝒜ℋ^Φ_ℬ is independent of ϕ. We denote [1⊗Ω_ϕ]_Φ as Ω_{Φ,ϕ}. Then Ω_{Φ,ϕ} is separating for End(_𝒜ℋ^Φ_ℬ). We remark that Ω_{Φ,ϕ} does depend on the choice of ϕ.
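In finite dimensions, complete positivity is conveniently tested through the Choi matrix C_Φ = ∑_{i,j} e_{ij}⊗Φ(e_{ij}): a linear map Φ on M_n(ℂ) is completely positive precisely when C_Φ is positive semidefinite. The following sketch is our illustration (the two maps are standard textbook examples, not taken from the paper); it contrasts a quantum channel with the transpose map, which is positive but not completely positive:

import numpy as np

n = 2

def choi(phi):
    # Choi matrix sum_{ij} e_ij (x) phi(e_ij) of a linear map phi on M_n(C)
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            e = np.zeros((n, n), dtype=complex); e[i, j] = 1
            C += np.kron(e, phi(e))
    return C

g = 0.3   # an amplitude-damping channel, written with Kraus operators (illustrative)
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]]); K1 = np.array([[0, np.sqrt(g)], [0, 0]])
channel = lambda x: K0 @ x @ K0.conj().T + K1 @ x @ K1.conj().T
transpose = lambda x: x.T   # positive, yet not completely positive

for name, phi in [("channel", channel), ("transpose", transpose)]:
    print(name, "min Choi eigenvalue:", np.linalg.eigvalsh(choi(phi)).min())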
§.§ Completely positive bimodule mapsSuppose 𝒜⊂ℬ is an inclusion of von Neumann algebras and Φ is a normal completely positive map from ℬ to ℬ. We say Φ is a 𝒜-𝒜-bimodule map if Φ(a_1ba_2)=a_1 Φ(b) a_2,a_1,a_2∈𝒜, and b∈ℬ.We denoted by 𝐂𝐏_𝒜(ℬ) (𝐂𝐁_𝒜(ℬ)) the set of all completely positive (bounded) 𝒜-bimodule maps from ℬ to ℬ.Let ⊂ be a finite inclusion of finite von Neumann algebras. Then E_ is a completely positive -bimodule map.Moreover, the bimodule ℋ^E_ can be naturally identified with _L^2()⊗_L^2()_.Here we recall the following proposition from <cit.>: The map ι: ℋ^E_→ L^2()⊗_L^2(), ι (xΩ_E_y)=xΩ_⊗_yΩ_, x,y∈,extends to an isometric - bimodule isomorphism.Moreover ι v_(xΩ)=Ω⊗_xΩ for all x∈. For x_1,y_1,x_2,y_2∈, ⟨ x_1Ω_E_𝒩y_1, x_2Ω_E_𝒩y_2⟩ =τ_ℳ(y^*_2E_𝒩(x^*_2x_1)y_2)=⟨ x_1Ω⊗_𝒩y_2Ω, x_2Ω⊗_𝒩y_2Ω⟩=⟨ι(x_1Ω_E_𝒩y_1),ι(x_2Ω_E_𝒩y_2),⟩.It follows that ι is well-defined and extends linearly to an surjective isometry.By definition we have ι(xΩ_E_y)=xι(Ω_E_)y.Since Ω_E_ is cyclic for - action, ι is a bimodule isomorphism.Finally for each x∈, ι v_(xΩ)=ι(Ω_E_x)=Ω⊗_xΩ as claimed. When concerning the trace-preserving conditional expectation E_ for a finite inclusion ⊂ of finite von Neumann algebras, we will identify _ℋ^E__ and _L^2()⊗_L^2()_ via the map ι and shorten our notation of v_E_ to v_.By Section 2.3 we will also identify the above two with _L^2(_1)_.Now we discuss the meaning of Proposition <ref> when ⊂ is a II_1 subfactor with finite index.Let {𝒫_n,±}_n≥ 0 be its planar algbera.Given Φ∈𝐂𝐁_(), then it defines a - bimodule map V_Φ,τ_ on L^2() as the closure of the densely defined map xΩ_↦Φ(x)Ω_.We represent V_Φ,τ_ by the two-box [scale=0.5](0.3,-0.5) – (1,-0.5) – (1,1.5) – (0.3,1.5);(0.3,-0.5) – (0.3,1.5);(1,-0.5) – (1,1.5); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) Φ; ∈𝒫_2,+. According to <cit.>, see also Theorem <ref>, its inverse Fourier transform[scale=0.7](-0.3,-0.5) – (0.3,-0.5) – (0.3,1.5) – (-0.3,1.5);(1,-0.5) – (1.6,-0.5) – (1.6,1.5) – (1,1.5);(0.3,-0.5) – (0.3,1.5);(1,-0.5) – (1,1.5); [fill=white] (0,0) rectangle (1.3,1); [scale = 0.65] at (0.65,0.5) ℱ^-1(Φ);:=[scale=0.7] (-0.6,-0.5) – (1.9,-0.5) – (1.9,1.5) – (-0.6,1.5);(0.3,-0.5)–(0.3, 1) .. controls +(0,0.4) and +(0,0.4) .. (-0.3,1) – (-0.3,-0.5);(0.3,-0.5)–(0.3, 1) .. controls +(0,0.4) and +(0,0.4) .. (-0.3,1) – (-0.3,-0.5)–(0.3,-0.5);(1,1.5)–(1,0) .. controls +(0,-0.4) and +(0,-0.4) .. (1.6,0) – (1.6,1.5);(1,1.5)–(1,0) .. controls +(0,-0.4) and +(0,-0.4) .. (1.6,0) – (1.6,1.5)– (1,1.5); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) Φ; is the unique operator in 𝒫_2,- = (_⊗__) such thatΦ(x)=[scale=0.6](-1.3,1.5) – (-1.3, -0.5) – (1.3, -0.5) – (1.3, 1.5) – (-1.3,1.5);(1.3, -0.5) – (1.3, 1.5);(0.6,1) .. controls +(0,0.4) and +(0,0.4) .. (-0.6,1);(0.6,0) .. controls +(0,-0.4) and +(0,-0.4) .. (-0.6,0); [line width=2pt] (-1.3,-0.5) – (-1.3,1.5);(0.6,1) .. controls +(0,0.4) and +(0,0.4) .. (-0.6,1) – (-0.6,0) – (-0.6,0) .. controls +(0,-0.4) and +(0,-0.4) .. (0.6,0) – (0.6,1);(1.3,-0.5) – (1.9,-0.5) – (1.9,1.5) – (1.3,1.5); [fill=white] (0.3,0) rectangle (1.6,1); [scale=0.55] at (0.95,0.5) ℱ^-1(Φ); [fill=white, rounded corners] (-1.6,0) rectangle (-0.3,1);at (-0.95,0.5) x; , ∀ x∈ℳ.Note that we have used boxes with rounded corners to indicate elements in , and strictly speaking the diagram can NOT be understood in 𝒫^⊂. 
The operator Φ̂:=ℱ^{-1}(Φ) is called the Fourier multiplier of Φ. Equivalently we have, see for instance <cit.>,
Φ̂(xΩ_ℳ⊗_𝒩yΩ_ℳ)=δ^{-1}∑^n_{j=1}xη_jΩ_ℳ⊗_𝒩Φ(η^*_j)yΩ_ℳ, x,y∈ℳ.
When Φ is positive, the two-box Φ̂ is called 𝔉-positive in <cit.> (c.f. Definition 2.6). It is known that Φ̂ is positive if and only if Φ is completely positive, and the set {ℱ^{-1}(Φ) | Φ∈𝐂𝐏_𝒩(ℳ)} coincides with the positive cone of 𝒫_{2,-}. By Proposition <ref>, End(_ℳℋ^{E_𝒩}_ℳ) is naturally isomorphic to 𝒫_{2,-}.

§ PIMSNER-POPA ENTROPY FOR COMPLETELY POSITIVE MAPS

In this section, we revisit the Pimsner-Popa entropy for subfactors and extend it to a relative entropy applicable to completely positive bimodule maps. Suppose ℛ is a finite von Neumann algebra with a normal faithful trace τ, and let ℳ, 𝒩 be von Neumann subalgebras of ℛ. The Connes-Størmer relative entropy <cit.> is defined as follows:
H(ℳ|𝒩)=sup_{𝐱∈𝐒}∑_iτ(η(E_𝒩(x_i)))-τ(η(E_ℳ(x_i))),
where η(t)=-tlog t for t≥ 0, E_ℳ, E_𝒩 are the trace-preserving conditional expectations from ℛ onto ℳ, 𝒩, and the set 𝐒 consists of all finite partitions of unity in ℛ, namely finite subsets 𝐱={x_i}_i ⊂ℛ_+ such that ∑_ix_i=1. When ℳ=ℛ, and 𝒩⊂ℳ is a II_1 subfactor of finite index, Pimsner and Popa <cit.> established the formula
H(ℳ|𝒩)=2logδ-∑_kτ_ℳ(f_k)log(τ_ℳ(f_k)/τ_𝒩'(f_k)),
where {f_k}_k is a set of atoms in 𝒩'∩ℳ such that ∑_k f_k=1.

Now we propose a generalization of H(ℳ|𝒩) based on the following observation. Using the bimodule property of E_𝒩, we can rewrite the summands in Definition <ref> as
τ(η(E_𝒩(x_i)))-τ(η(E_ℳ(x_i)))=-τ_ℳ(E_𝒩(x_i)log E_𝒩(x_i))+ τ_ℳ(x_ilog x_i)=τ_ℳ(x_ilog x_i)-τ_ℳ(x_ilog E_𝒩(x_i)).
In a von Neumann algebra ℛ with faithful normal tracial state τ, for two positive operators ρ,σ the quantity τ(ρlogρ - ρlogσ)=D_τ(ρ‖σ) is called the relative entropy between ρ and σ. Notice that in order for D_τ(ρ‖σ)<∞, we require p_ρ≤ p_σ, where p_ρ and p_σ are the range projections of ρ and σ. We have thus proved the following lemma:

Suppose 𝒩⊂ℳ is a finite inclusion of finite von Neumann algebras. Then
H(ℳ|𝒩)=sup_{𝐱∈𝐒}∑_i D_τ_ℳ(x_i‖E_𝒩(x_i)),
where 𝐒 is the set of all finite partitions of unity in ℳ.

Having rewritten H(ℳ|𝒩) in the above form, it is now natural to replace the identity id_ℳ and the conditional expectation E_𝒩 by arbitrary completely positive bimodule maps. In the rest of this section, we assume 𝒩⊂ℳ is an inclusion of finite von Neumann algebras and we fix a normal faithful trace τ_ℳ on ℳ.

Suppose 𝒩⊆ℳ and Φ, Ψ∈𝐂𝐏_𝒩(ℳ). Define the Connes-Størmer entropy between Φ and Ψ as
H(Φ|Ψ)=sup_{𝐱∈𝐒}∑_i D_τ_ℳ(Φ(x_i)‖Ψ(x_i)),
where 𝐒 is the set of all finite partitions of unity in ℳ.

Suppose ℂ⊂ P,Q⊂ℳ are von Neumann subalgebras; then H(E_P|E_Q) agrees with the original definition of Connes and Størmer only if Q⊂ P. Notice that for the right-hand side of Equation (<ref>) to be finite, we need p_{E_𝒩(x)}≥ p_x for every x∈ℳ_+, the set of all positive elements in ℳ. In the case of a finite index subfactor, this assumption is fulfilled by the Pimsner-Popa inequality <cit.>, which asserts that E_𝒩(x)≥δ^{-2}x, ∀ x∈ℳ_+.

To make sure that H(Φ|Ψ) is well-defined, we adopt the following definition of majorization between completely positive maps. This notion has already appeared in <cit.> and <cit.>.

Suppose Ψ,Φ:𝒜→ℬ are normal completely positive. We say Φ is majorized by Ψ and write Φ≼Ψ if there is a positive scalar c such that c·Ψ-Φ is completely positive. We write Φ∼Ψ if both Φ≼Ψ and Ψ≼Φ hold.
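Before developing the theory further, we note that the quantities just defined are directly computable in matrix algebras: every finite partition of unity yields a lower bound on H(Φ|Ψ). The sketch below is ours (the partition and the inclusion are arbitrary illustrative choices); it evaluates the partition sum ∑_i D_τ(Φ(x_i)‖Ψ(x_i)) for Φ = id and Ψ = E_𝒩 the expectation of M_2(ℂ) onto its diagonal:

import numpy as np
from scipy.linalg import logm

n = 2
tau = lambda x: np.trace(x).real / n
E = lambda x: np.diag(np.diag(x))   # expectation onto the diagonal subalgebra

def D(rho, sigma):
    # tracial relative entropy D_tau(rho || sigma) = tau(rho log rho - rho log sigma)
    return tau(rho @ (logm(rho) - logm(sigma)))

h = np.array([[0.0, 1.0], [1.0, 0.0]])
x = 0.5 * (np.eye(n) + 0.9 * h)     # a full-rank positive element with 0 < x < 1
partition = [x, np.eye(n) - x]      # a finite partition of unity

print(sum(D(p, E(p)) for p in partition), "<=", np.log(n))  # lower bound on H(id|E)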
In <cit.> it has been proved that for a finite inclusion of finite von Neumann algebras ⊂, Φ∈𝐂𝐏_() implies that Φ≼ E_.Let {η_i}^m_i=1 be a basis of L^2()_ and define the m× m positive -valued matrix as [G]_ij = η^*_i η_j.It is also proved that for Φ,Ψ∈𝐂𝐏_(), Φ≼Ψ is equivalent to 𝐬𝐮𝐩𝐩Φ(G)≤𝐬𝐮𝐩𝐩Ψ(G) as positive operators. Our goal in this section is to derive an upper bound for H(Φ|Ψ) for completely positive -bimodule maps in terms of their Fourier multipliers.We assume that ⊂ is of finite index.In this and the next section, we denote by Ω the cyclic separating tracial vector in L^2(). Denote the vector state on (_L^2()⊗_L^2()_) implemented by Ω⊗_Ω as ω_.Since Ω⊗_Ω generates _L^2()⊗_L^2()_ under - action, the state ω_ is faithful.As we shall see in Lemma <ref>, ω_ may not be a trace.In fact, if ⊂ is a II_1 subfactor of finite index, then ω_ is a trace if and only if ⊂ is extremal <cit.>. As pointed out by Longo in <cit.> (see the Remark after Theorem 5.5), the extremality of a subfactor corresponds to the minimality of the trace preserving conditional expectation.For any Φ∈𝐂𝐏_() the following hold:* for all x∈ we have Φ(x)=δ v^*_ (x⊗_1) Φv_; * δω_(Φ)=τ_(Φ(1)).Recall that by Proposition <ref> we identify ℋ^E_ with L^2()⊗_L^2() as - bimodules, and thus v_:L^2()_→ L^2()⊗_L^2()_ is defined by v_(yΩ)=Ω⊗_yΩ,∀ y∈.Therefore for any x,y_1,y_2∈, by Equation (<ref>)⟨ v^*_ x Φv_y_1Ω,y_2Ω⟩ = ⟨Φ(xΩ⊗ y_1Ω),Ω⊗ y_2Ω⟩= δ^-1τ_ (y^*_2Φ(x)y_1)= δ^-1⟨Φ(x)y_1Ω,y_2Ω⟩.This proves (1).Since ω_(Φ)=⟨Φ(Ω⊗_Ω),Ω⊗_Ω⟩=δ^-1τ_(Φ(1)), (2) holds. For any Φ,Ψ∈𝐂𝐏_(), we have Φ≼Ψ if and only if 𝐬𝐮𝐩𝐩Φ≤𝐬𝐮𝐩𝐩Ψ. For c>0, the Fourier multiplier of cΨ-Φ is easily seen to be cΨ-Φ, hencecΨ-Φ being completely positive implies cΨ-Φ≥ 0.Conversely suppose cΨ-Φ≥ 0, and let R be its positive square root, then for all x∈, cΨ(x)-Φ(x) = δ v^*_R(x⊗_1)R v_,which proves cΨ-Φ is completely positive.We now find the density operator of the state ω_ with respect to τ__2.Notice that τ__2(z⊗_ 1)=τ__1(z) for all z∈(_L^2()_).Define an isometry u_∈(_L^2()_, _L^2()⊗_L^2()_) as u_(ξ)=ξ⊗_Ω,ξ∈ L^2(),so that u^*_(ξ⊗_yΩ) = ξ· E_(y).Therefore for any A∈(_L^2()⊗_L^2()_), u^*_Au_∈(_L^2()_).It now follows from Equation (<ref>) and the definition of ω_ thatω_(A)= τ_(J_u^*_A^*u_J_).In addition, by Equation (<ref>) and Equation (<ref>) we haveτ__2(A)= τ__1(u^*_Au_). There exists a unique operator Δ>0 in (_L^2()_) such that τ__2((Δ⊗_ 1)A) =ω_(A),∀ A∈(_L^2()⊗_L^2()_).Let Δ be the unique positive operator in '∩_1(which we identify as (_L^2()_)) such that τ__1(Δ z)=τ_(J_z^*J_) for all z∈'∩_1. Observe that operators in '∩_1 commute with the right action ofon L^2().Thus for x,y∈:u^*_(Δ⊗_ 1)(xΩ⊗_yΩ)= u^*_( Δ(xΩ)⊗_yΩ) =Δ(xΩ)· E_(y)= Δ(xE_(y)Ω) = Δ u^*_(xΩ⊗_yΩ),and we get u^*_Δ⊗_ 1=Δ u^*_.This implies that for all A∈(_L^2()⊗_L^2()_),τ__2((Δ⊗_ 1)A)= τ__1(Δ u^*_Au_) = τ_(J_u^*_A^*u_J_) = ω_(A). To prove uniqueness, suppose Δ' ∈(_L^2()_) satisfies Equation (<ref>).Then for every z∈(_L^2()_):τ_1(Δ' z)= τ__2(Δ' z⊗_ 1) = ω_(z⊗_ 1)= τ_(J_z^*J_) = τ_1(Δ z),hence Δ'=Δ.If there is no confusion, we will denote Δ⊗_1 as Δ.If ⊂ is a II_1 subfactor of finite index, then pictorially we have:ω_(Φ_0)= δ^-1τ_(Φ_0(1)) = δ^-1τ_( [scale=0.55](1.3, -0.5) – (1.3, 1.5);(0.6,1) .. controls +(0,0.4) and +(0,0.4) .. (-0.6,1);(-0.6,1) – (-0.6,0);(0.6,0) .. controls +(0,-0.4) and +(0,-0.4) .. (-0.6,0);(0.6,1) .. controls +(0,0.4) and +(0,0.4) .. (-0.6,1) – (-0.6,0) – (-0.6,0) .. controls +(0,-0.4) and +(0,-0.4) .. 
(0.6,0) – (0.6,1);(1.3,-0.5) – (1.9,-0.5) – (1.9,1.5) – (1.3,1.5); [fill=white] (0.3,0) rectangle (1.6,1); [scale=1] at (0.95,0.5) Φ_0; ) =δ^-2[scale=0.55](0.3,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (-0.3,0) – (-0.3,1) .. controls + (0,0.4) and + (0,0.4).. (0.3,1);(1,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (1.6,0) – (1.6,1) .. controls + (0,0.4) and + (0,0.4).. (1,1);(0.3,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (-0.3,0) – (-0.3,1) .. controls + (0,0.4) and + (0,0.4).. (0.3,1);(1,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (1.6,0) – (1.6,1) .. controls + (0,0.4) and + (0,0.4).. (1,1); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) Φ_0; .Therefore we obtain a pictorial characterization of Δ⊗_ 1:[scale=0.55](0.3,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (-0.3,0) – (-0.3,1) .. controls + (0,0.4) and + (0,0.4).. (0.3,1);(1,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (1.6,0) – (1.6,1) .. controls + (0,0.4) and + (0,0.4).. (1,1);(0.3,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (-0.3,0) – (-0.3,1) .. controls + (0,0.4) and + (0,0.4).. (0.3,1);(1,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (1.6,0) – (1.6,1) .. controls + (0,0.4) and + (0,0.4).. (1,1); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) x;=[scale=0.55](-0.6, -0.9) – (2.6,-0.9) –(2.6, 2.4) –(-0.3, 2.4) –(-0.3,-0.9);(0.3,0) .. controls+ (0,-1) and + (0,-1) .. (2.3,0) – (2.3,1.5) .. controls + (0,1) and + (0,1).. (0.3,1.5) – (0.3,1)–(0.3,0);(1,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (1.6,0) – (1.6,1.5) .. controls + (0,0.4) and + (0,0.4).. (1,1.5)–(1,1)–(1,0);(0.3,0) .. controls+ (0,-1) and + (0,-1) .. (2.3,0) – (2.3,1.5) .. controls + (0,1) and + (0,1).. (0.3,1.5) – (0.3,1);(1,0) .. controls+ (0,-0.4) and + (0,-0.4) .. (1.6,0) – (1.6,1.5) .. controls + (0,0.4) and + (0,0.4).. (1,1.5)–(1,1); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) x; [fill=white] (0,1.1) rectangle (0.6,1.7);[scale=0.8] at (0.3,1.4) Δ; ,∀ x∈𝒫^⊂_2,+.Additionally, it is worth noting that the operator Δ has been introduced in Burns' thesis <cit.>, where it is denoted as w̃ (see notation 2.2.13 and lemma 2.2.14 on page 35). Next we derive an upper bound for H(Φ|Ψ), and we will find Δ naturally appear in this process.In fact as a direct consequence of Equation (<ref>), one can show that H(|)=H(id_,E_)≤logδ^2 when ⊂ is a II_1 subfacotr of finite index (c.f. Proposition 3.5 of <cit.>).However, as formula (<ref>) indicates, this upper bound is optimal only if ⊂ is extremal.Suppose 𝒩⊂ℳ is a finite inclusion of finite von Neumann algebras, and Φ,Ψ∈𝐂𝐏_() with Φ≼Ψ.ThenH(Φ|Ψ)≤δ D_τ__2(Δ^1/2ΦΔ^1/2Δ^1/2ΨΔ^1/2 ).Let {x_j}_1≤ j≤ n⊂ be a finite partition of unity. We define a completely positive map T:(_L^2()⊗_L^2()_)→^⊕ n byT(A)=(v^*_x_jΔ^-1/2 AΔ^-1/2 v_)_1≤ j≤ n. Define the trace Tr on ^⊕ n as Tr((x_j)_1≤ j≤ n) = ∑_jτ_(x_j). We shall show that T is trace-preserving.Then for all A in (_L^2()⊗_L^2()_)Tr(T(A)) =∑^n_j=1τ_(v^*_x_jΔ^-1/2AΔ^-1/2v_)=τ_(v^*_Δ^-1/2AΔ^-1/2v_)=ω_(Δ^-1/2AΔ^-1/2)=τ__2(A),where the last equality follows from the Lemma <ref>.Thus we deduce that Tr∘ T=τ__2.Now by the statement (1) of Proposition <ref>:δ D_Tr(T(Δ^1/2ΦΔ^1/2) T(Δ^1/2ΨΔ^1/2))=∑^n_j=1 D_τ_(Φ(x_j)Ψ(x_j)).Applying the data processing inequality, we have∑^n_j=1 D_τ_(Φ(x_j)Ψ(x_j)) = δ D_Tr(T(Δ^1/2ΦΔ^1/2_L) T(Δ^1/2ΨΔ^1/2_L))≤δ D_τ__2(Δ^1/2ΦΔ^1/2Δ^1/2ΨΔ^1/2).This implies, by taking supremum over all finite partitions in , H(Φ|Ψ)≤δ D_τ__2(Δ^1/2ΦΔ^1/2Δ^1/2ΨΔ^1/2).Hence the theorem follows. 
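The analytic engine of the proof above is the data processing inequality. As a sanity check (our addition; the dephasing map below merely plays the role of the trace-preserving completely positive map T in the proof), one can verify numerically that the relative entropy never increases under such a map:

import numpy as np
from scipy.linalg import logm

def rand_density(n, rng):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

D = lambda r, s: np.trace(r @ (logm(r) - logm(s))).real  # relative entropy, Tr convention
pinch = lambda x: np.diag(np.diag(x))                    # trace-preserving CP dephasing

rng = np.random.default_rng(1)
for _ in range(5):
    rho, sigma = rand_density(3, rng), rand_density(3, rng)
    assert D(rho, sigma) >= D(pinch(rho), pinch(sigma)) - 1e-10
print("data processing inequality verified on random samples")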
Another natural trace defined on '∩_2 is given byτ' (x) = τ_'(v^*_xv_), x∈'∩_2.The density operator of ω_ with respect to τ' is given by 1⊗_ J_Δ J_, so it is possible to express the upper bound using relative in terms of relative entropy with respect to τ' as well. We now compute the upperbound in Inequality (<ref>) subject to the case where Φ = id_ and Ψ = E_. Let h__1,∈ Z() be as in Equation (<ref>), thenδ^-1id_ = h__1,e_,and consequently E^_2__1(δ^-1id_) = δ^-2. By cyclicity of e_Ω__1, it is enough to check that h__1,e_ (e_Ω__1) = δ^-2Ω__1 = δ^-1id_(e_Ω__1).Equivalently we need to showE^_1_ (e_) = δ^-2h^-1__1,,which is nothing but Equation (<ref>).Let ⊂ be a finite inclusion of finite von Neumann algebras.The elementδ^-1Δ^1/2id_Δ^1/2is a projection in '∩_2 equivalent to e_. By Lemma <ref>, we haveδ^-1Δ^1/2id_Δ^1/2 = yy^*, y = Δ^1/2h^1/2__1,e_.Therefore it suffices to show that y^*y = E^_1_(Δ)h__1,e_ is a projection.In deed, for any x∈ Z():τ__1(xΔ) = τ_(Jx^*J) = τ_(x) = τ__1(xh^-1__1,),which implies E^_1_(Δ) = h^-1__1,.Therefore y^*y = h^-1__1,, h__1, e_ = e_ is a projection. Let ⊂ be a finite inclusion of finite von Neumann algebras. We define Δ_0 to be the positive operator ∈'∩ such that τ_(Δ_0·) = τ_' as traces on '∩.Equivalently, we have Δ_0 = J_Δ^-1 J_.Let ⊂ and Δ_0 be as above.Thenδ D_τ__2(Δ^1/2id_Δ^1/2Δ^1/2E_Δ^1/2) = 2logδ+τ_(logΔ_0).Firstly, we have E_ = δ^-1.So Δ^1/2E_Δ^1/2 = δ^-1Δ. By Lemma <ref>,the first term e = δ^-1Δ^1/2id_Δ^1/2 is a projection equivalent to e_, so τ__2 (e) = τ__2(e_) = δ^-2.Now we haveδ D_τ__2(Δ^1/2id_Δ^1/2Δ^1/2E_Δ^1/2)= δ^2 D_τ__2(e|δ^-2Δ)= δ^2 τ__2 (elog e -elogδ^-2Δ)= -δ^2 τ__2 (elogδ^-2Δ)= 2logδ - δ^2τ__2 (elogΔ). Note that Δ∈'∩_1. We obtainδ^2τ__2 (elogΔ)=δ^2τ__1 (E^_2__1(e)logΔ)= δ^2τ__1 (E^_2__1(δ^-1id_)ΔlogΔ)= τ__1(ΔlogΔ).By the fact that Δ^-1_0 = J_Δ J_, we seeτ_'(logΔ) = τ_'(J_logΔ^-1_0J_) = -τ_(logΔ_0).This completes the proof.Let {f_k}_k be a set of atoms in '∩.Then Δ_0 = ∑_k τ_'(f_k)/τ_(f_k)f_k,and -τ_(logΔ_0) = ∑_k τ_(f_k)logτ_(f_k)/τ_'(f_k).Therefore we obtain2logδ + τ_(logΔ_0) = 2logδ - ∑_k τ_(f_k)logτ_(f_k)/τ_'(f_k).This precisely corresponds to the formula for H(|) obtained by Pimsner and Popa when ⊂ is subfactor.In particular, it means that if ⊂ is a subfactor of finite index, thenH(|) = δ D_τ__2(Δ^1/2id_Δ^1/2Δ^1/2E_Δ^1/2).This coincidence suggests us to look for the case of equality in Equation (<ref>),which we explore in the next section. § THE DOWNWARD JONES BASIC CONSTRUCTIONOur goal in the this section is to prove that the equality in Theorem <ref> does occur if we assume ⊂ admits downward Jones basic construction. We say the inclusion ⊂ admits a downward Jones basic construction if there exists a subalgebra _-1⊂ and a trace-preserving *-isomorphism α:→ J__-1'J_ such that α() becomes the standard representation ofon L^2().We denote the preimage of e__-1 as e_-1∈ and call it Jones projection for _-1⊂.When ⊂ admits a downward Jones basic construction, we will simply suppress the isomorphism α and identifywith J__-1'J_.We use the symbol _-1⊂⊂^e_-1 to indicate a downward Jones basic construction with Jones projection e_-1. Let _-1⊂⊂^e_-1 be a downward Jones basic construction. Then it is known that all canonical traces (including the trace onas (L^2()__-1)) induced by basic constructions are compatible.As a consequence, we restore the Temperley-Lieb relation between Jones projections:e_e_e_ = δ^-2e_, e_-1e_e_-1 = δ^-2e_-1. 
Before going into the details, let us outline our proof strategy.We will leverage the Jones projection e_-1 to construct a sequence of partitions of unity insuch that the sequence of associated completely positive trace-preserving maps approaches a * homomorphism, which is closely related to the canonical shift introduced by Ocneanu <cit.>. Let _-1⊂⊂^e_-1 be a downward Jones basic construction, then by iterating the Jones basic construction twice we get inclusions_-1⊂⊂^e_-1⊂^e__1⊂^e__2,with _2 =(L^2(_1)_) acting naturally on L^2(_1).Let e^__-1 be the Jones projection on L^2() with range L^2(_-1).Then according to Theorem 2.6 of <cit.> (see also Proposition 2.1 in <cit.>), the map ϕ:δ^2 x e_e_-1e_e_ y↦ xe^__-1y where x,y∈ extends to a *-isomorphism from _2 to J_'_-1J_.With this identification, the canonical shift γ: '_-1∩→'∩_2 is defined asγ(x) = J_J_xJ_J_, x∈'_-1∩.It is known that γ is a *-isomorphism and that τ__2∘γ = τ_.For our purpose, we shall consider the inverse of the canonical shift. Let ⊂ be a finite inclusion of finite von Neumann algebras that admits adownward Jones basic construction _-1⊂⊂^e_-1.Let γ: '_-1∩→'∩_2 be the canonical shift, then the following statements hold:* For any x∈'∩_2, γ^-1(x) is the unique element in _-1'∩ such that γ^-1(x)e_=δ^4 e_ e_ e_-1xe_-1e_ e_; * For any Φ∈𝐂𝐏_() so that Φ exists, γ^-1(Φ)=δΦ(e_-1). (1): Let x∈'∩_2 be acting on L^2(_1).By viewing L^2() as the image of e_, we see δ^4 e_ℳe_𝒩e_-1xe_-1e_𝒩e_ℳ is anoperator on L^2() commuting with right action of . This implies that δ^4 e_ℳe_𝒩e_-1xe_-1e_𝒩e_ℳ=ye_ for some y∈. Since all operators involved in the expression commute with _-1, y∈_-1'∩. By Theorem 2.11 of <cit.> we know that the canonical shift has the following expression:γ(y) = δ^4∑^n_i ξ_ie_-1e_e_ y e_e_-1ξ^*_i,y∈'_-1∩where {ξ_i}^n_i=1 is a basis ofover _-1.So pick x∈'∩_2 and let y∈'_-1∩ be such that ye_ = δ^4 e_ e_ e_-1xe_-1e_ e_, then γ(y)= δ^4∑^n_i ξ_ie_-1e_(ye_)e_e_-1ξ^*_i= δ^8 ∑^n_i ξ_ie_-1e_e_e_e_-1 x e_e_e_e_-1ξ^*_i= ∑_iξ_ie_-1ξ^*_i x =x.This proves that y = γ^-1(x). (2): By definition Φ(e_Ω__1)=1/δ∑_iη_ie_Φ(η^*_i)Ω__1 for any Pimsner-Popa basis {η_i}^m_i=1 ofover .Thus by (1):γ^-1(Φ_0)e_ (mΩ__1)= δ^4 e_ e_ e_-1Φ e_mΩ__1= δ^3∑_i e_(e_ e_-1η_ie_Φ(η^*_i)) mΩ__1= δ^3∑_i e_ e_Φ(E_(e_-1η_i)η^*_i) mΩ__1= δ^3 e_e_Φ(e_-1)mΩ__1 = δΦ(e_-1)mΩ__1, ∀ m∈.Therefore γ^-1(Φ_0) = δΦ(e_-1).Let ⊂ be a finite inclusion of finite von Neumann algebras, _-1⊂⊂^e_-1 be a downward Jones basic construction. Then the followings hold: * For any x,y∈_-1'∩, δ^2 τ_(J_x^*J_e_-1y)=τ_(xy);* δ^2 E^_'∩( e_-1) = Δ_0 (c.f. Definition <ref>);* δ^2 E^_(Δ^-1/2_0e_-1Δ^-1/2_0)=J_Δ^-1_0J_;* J_Δ^-1_0J_e_-1=Δ^-1_0e_-1;* γ^-1(Δ)=J_Δ^-1_0J_.(1): Choose a Pimsner-Popa basis {ξ_j}_j ofover _-1 and identifyas J_'_-1J_. We haveδ^2 τ_(J_x^*J_e_-1y) =∑_j⟨ J_x^*J_E__-1(yξ_j)Ω,ξ_jΩ⟩=∑_j⟨ E__-1(yξ_j)xΩ,ξ_jΩ⟩=∑_j⟨ xE__-1(yξ_j)Ω,ξ_jΩ⟩(since x∈'_-1)=∑_j⟨ xE__-1(yξ_j)ξ^*_jΩ,Ω⟩=τ_(xy).(2): By (1), we have that for any x∈ J_ J_∩, δ^2 τ_(x E^_' ∩(e_-1))= δ^2 τ_(xe_-1)=τ_(J_ x^*J_)=τ_J_ J_(x)This implies that δ^2 E^_' ∩(e_-1) ∈'∩ is the Radon-Nikodym derivative between τ_ and τ_J_ J_.(3): We see that δ^2 E_(Δ^-1/2_0e_-1Δ^-1/2_0)∈_-1'∩, and for all y∈_-1'∩, δ^2 τ_(Δ_0^-1/2e_-1Δ_0^-1/2y)=δ^2 τ_(Δ_0^-1e_-1y)=τ_N(J_Δ^-1_0 J_ y) followed from (1).Then δ^2 E^_(Δ^-1/2_0e_-1Δ^-1/2_0)=J_Δ^-1_0J_.(4): This follows from the fact that J_Δ^-1_0J_ and Δ^-1_0 commute with _-1.(5): By Definition <ref>, we see Δ^-1_0 = J_Δ J_. ThereforeJ_Δ^-1_0 J_ = J_J_Δ J_J_ = γ^-1(Δ),completing the proof. 
Let 𝒩⊂ℳ be a finite inclusion of finite von Neumann algebras and Φ≼Ψ:→ are completely positive -bimodule maps. If the inclusion admits a downward Jones basic construction _-1⊂⊂^e_-1, thenH(Φ|Ψ) = δ D_τ__2(Δ^1/2ΦΔ^1/2| Δ^1/2ΨΔ^1/2).We begin by constructing the partition of unity.Let e_-1∈ be a Jones projection.By (2) of Corollary <ref>, we have δ^2E^_'∩(Δ^-1/2_0 e_-1Δ^-1/2_0)=1.Then by the relative Dixmier property for finite inclusions <cit.>, for each ϵ>0 we can take a set of n unitaries {u_k}^n_k=1 insuch that1-ϵ/1+ϵ≤δ^2 /n(1+ϵ)∑^n_k=1u_kΔ^-1/2_0e_-1Δ^-1/2_0u^*_k≤ 1.Put x_k=δ^2 /n(1+ϵ)u_kΔ^-1/2_0e_-1Δ^-1/2_0u^*_k for 1≤ k≤ n andx_n+1=1-∑^n_k=1x_k.Let T:'∩_2→ be the completely positive trace-preserving map constructed in the proof of Theorem <ref> associated to the partition {x_k}^n+1_k=1, then δ T(Δ^1/2ΦΔ^1/2) = (y_k )^n+1_k=1 where y_k = δ^2/n(1+ϵ)Φ(u_kΔ^-1/2_0e_-1Δ^-1/2_0u^*_k),y_n+1 = Φ(x_n+1), 1≤k≤ n.For each 1≤ k≤ n, we havey_k= δ^2/n(1+ϵ)Φ(u_kΔ^-1/2_0e_-1Δ^-1/2_0u^*_k)= δ^2/n(1+ϵ)u_kΦ(J_Δ^-1/2_0J_e_-1J_Δ^-1/2_0J_)u^*_k = δ^2/n(1+ϵ)u_kγ^-1(Δ^1/2)Φ(e_-1)γ^-1(Δ^1/2)u^*_k= δ/n(1+ϵ)u_kγ^-1(Δ^1/2ΦΔ^1/2)u^*_k ,where in the third and the last equality we used Corollary <ref> and Lemma <ref>.ThereforeH(Φ|Ψ) ≥δ D(T(Δ^1/2ΦΔ^1/2)T(Δ^1/2ΨΔ^1/2))≥δ/n(1+ϵ)∑^n_k=1D_τ_(u_kγ^-1(Δ^1/2ΦΔ^1/2)u^*_ku_kγ^-1(Δ^1/2ΨΔ^1/2)u^*_k)=δ/1+ϵD_τ__2(Δ^1/2ΦΔ^1/2Δ^1/2ΨΔ^1/2).Note that in the las equality we used τ_∘γ^-1=τ__2.Now taking ϵ→ 0, we see that the theorem is true. Let 𝒩⊂ℳ be a finite inclusion of finite von Neumann algebras with a downward Jones basic construction _-1⊂⊂^e_-1.Then for any Φ,Ψ∈𝐂𝐏_() with Φ≼Ψ,H(Φ|Ψ)=δ^2 D_τ_(Φ(Δ_0^-1/2e_-1Δ_0^-1/2)Φ(Δ_0^-1/2e_-1Δ_0^-1/2)). Applying γ^-1 to the right hand side of Equation (<ref>) and by τ_∘γ^-1=τ__2 we obtain:H(Φ|Ψ)=δ D_τ__2(Δ^1/2ΦΔ^1/2Δ^1/2ΨΔ^1/2)=δ D_τ_(γ^-1(Δ^1/2ΦΔ^1/2)γ^-1(Δ^1/2ΦΔ^1/2))= δ D_τ_((J_Δ^-1/2_0J_)γ^-1(Φ)(J_Δ^-1/2_0J_)(J_Δ^-1/2_0J_)γ^-1(Ψ)(J_Δ^-1/2_0J_))= δ^2 D_τ_( Φ(Δ^1/2_0e_-1Δ^1/2_0)Ψ(Δ^1/2_0e_-1Δ^1/2_0)),Notice that the last equality is implied by Lemma <ref> and Corollary <ref>. We now consider an inclusion ⊂ of finite dimensional C^* algebras and try to determine the necessary and sufficient condition for equalityH(|) = δ D_τ__2(Δ^1/2id_Δ^1/2| Δ^1/2E_Δ^1/2) We start with the computation of the second term, using the formula obtained in Section 3.2.We adopt the notations from Section 6 of <cit.>.Let K and L be two index sets of finite cardinals. andwill be described as = ⊕_k∈ KM_n_k(ℂ), = ⊕_l∈ LM_m_l(ℂ). The inclusion is described by the adjacent matrix A = (a_kl)_k∈ K,l∈ L.We have the dimension (row) vectors n⃗ = (n_k)_k∈ K, m⃗ = (m_l)_l∈ L and trace (column) vectors s⃗=(s_k)_k∈ K, t⃗=(t_l)_l∈ L for ,respectively.They satisfy the relationn⃗A = m⃗, At⃗ = s⃗. For k∈ K and l∈ L, denote by e_k (f_l) the minimal central projection of(), then e_kf_l are the minimal central projections of '∩.Note that e_kf_l is a rank n_ka_kl subprojection of f_l, so we haveτ_(e_kf_l) = n_ka_klt_l, k∈ K,l∈ L. 
To compute τ_', we construct a set of Pimsner-Popa basis ofas follows.For a fixed l, we decompose f_l asf_l = ⊕_(k_1,k_2)∈ K× K e_k_1 f_le_k_2.Denote e_k_1 f_le_k_2 as B^l_k_1,k_2.Each B^l_k_1,k_2 will be identified withM_n_k_1× n_k_2(ℂ)⊗ M_a_k_1 l× a_k_2 l(ℂ) in the way such that for any b⊗ c∈ B^l_k_1,k_2 we havex( b⊗ c)y = x_k_1by_k_2⊗ c,where x = ⊕_k∈ Kx_k and y = ⊕_k∈ Ky_k are in .Therefore B^l_k_1,k_2, as an e_k_1-e_k_2-bimodule, decomposes into the direct sum of a_k_1 la_k_2 l irreducibles.Identity e_k as M_n_k(ℂ), we see that each irreducible submodule of B_k_1,k_2 is isomorphic to_M_n_k_1(ℂ)M_n_k_1× n_k_2(ℂ)_M_n_k_2(ℂ),with the bimodule structure given by matrix multiplications.By considering M_n_k_1× n_k_2(ℂ) as right e_k_2-module, it further decomposes into n_k_1 copies of irreducibles of e_k_2. Now it is easy to produce a set of Pimsner-Popa basis for L^2()_.For (k,l)∈ K× L, choose a set of basis of ⊕_k'∈ K B^l_k',k as a right -module as {ξ_i,k',k,l}^n_k'a_k' la_k l_i=1 so that each ξ_i,k',k,l generates an irreducible right module of e_k (thus of ).By properly scaling each ξ_i,k',k,l, we can assume that E^_(ξ^*_i,k',k,lξ_i,k',k,l) is a minimal projection under e_k.Then we can compute τ_'(e_kf_l) asτ_'(e_kf_l)= δ^-2∑_k'∈ K∑_1≤ i≤ n_k'a_k' la_k lτ_(ξ_i,k',k,le_kf_lξ^*_i,k',k,l)= δ^-2∑_k'∈ K n_k'a_k' la_k ls_k = δ^-2s_ka_klm_l.Therefore we obtain:Δ_0 = ∑_k∈ K,l∈ Ls_ka_klm_l/δ^2 n_ka_klt_le_kf_l = ∑_k∈ K,l∈ Ls_km_l/δ^2 n_kt_le_kf_l.Inserting it into the formula obtained in Proposition <ref>:δ D_τ__2(Δ^1/2id_Δ^1/2Δ^1/2E_Δ^1/2)= 2logδ -(-τ_(logΔ_0))= 2logδ -∑_k∈ K,l∈ Ln_ka_klt_l logδ^2 n_kt_l/s_km_l= ∑_k∈ K,l∈ Ln_ka_klt_l logs_km_l/n_kt_l.By Theorem 6.2 of <cit.>, we haveH(|)= ∑_l∈ Lm_lt_l logm_l/t_l + ∑_k∈ Kn_ks_k logs_k/n_k + ∑_k∈ K,l∈ Ln_ka_klt_llogmin{n_k/a_kl,1}= ∑_k∈ K,l∈ Ln_ka_klt_l logs_km_l/n_k t_l + ∑_k∈ K,l∈ Ln_ka_klt_llogmin{n_k/a_kl,1},where in the second equality we use that m_l = ∑_k∈ Kn_ka_kl and s_k = ∑_l∈ L a_klt_l, l∈ L and k∈ K.Thus we obtainδ D_τ__2(Δ^1/2id_Δ^1/2Δ^1/2E_Δ^1/2)-H(|)= -∑_k∈ K,l∈ Ln_ka_klt_llogmin{n_k/a_kl,1}= ∑_k∈ K,l∈ Ln_ka_klt_llogmax{a_kl/n_k,1}.In all, we have proved the following proposition.Let ⊂ be an inclusion of finite dimensional C^* algebras with inclusion matrix [a_kl] and dimension vector n= n_k_k∈ K for .Then the necessary and sufficient condition for the equality H(|) = δ D_τ__2(Δ^1/2id_Δ^1/2Δ^1/2E_Δ^1/2)to hold is that a_kl≤ n_k, k∈ K,l∈ L.This condition can be related to downward Jones basic construction as follows.Consider the inclusion ℂ⊂⊂ℬ(L^2()) which is a basic construction.By characterization of finite dimensional basic construction as in <cit.>, the adjacent matrix for the inclusion ⊗ 1⊂ℬ(L^2())⊗ℂ^|L| is à = (ã_kl)_k∈ K,l∈ L, ã_kl = n_k.Therefore the condition a_kl≤ n_k for all k and l is equivalent to the existence of a projection p'∈'⊗ℂ^|L| with central support 1 such that the inclusion ⊂ is isomorphic to (⊗ 1) p'⊂ p'(ℬ(L^2())⊗ℂ^|L|)p'.§ DERIVATIVES FOR COMPLETELY POSITIVE MAPSIn this section, we study the derivative between comparable completely positive maps as a generalization of Fourier multiplier.Our aim is to establish that Fourier multipliers can be regarded as derivatives with respect to a conditional expectation. With the notion of relative tensor product of bimodules, we derive a formula expressing the derivative of the composition of completely positive maps in terms of their derivatives.This formula will be instrumental in proving the monotonicity of relative entropy in the subsequent section. 
The following lemma, though simple, is essential for us.It has been observed in many cases, see for instance <cit.>, <cit.>.Suppose Ψ,Φ:𝒜→ℬ are normal completely positive maps. Then the following are equivalent: (1) Φ≼Ψ;(2) there is a unique positive element h∈(_𝒜ℋ^Ψ_ℬ) such that for all a∈𝒜Φ(a)=v_Ψ^*π_Ψ(a)hv_Ψ. (1)(2): Assume cΨ-Φ is completely positive for some c>0. Suppose φ is a normal faithful state on ℬ. For any ζ=∑^n_i=1 a_i⊗ξ_i∈𝒜⊗ L^2(ℬ, φ), we denote by [ζ]_Φ, [ζ]_Ψ the image of ζ in ℋ^Φ, ℋ^Ψ. Note that⟨ [ζ]_Φ,[ζ]_Φ⟩_Φ=∑_i,j=1^n ⟨Φ(a_i^*a_j)ξ_j,ξ_i⟩.With A=(a^*_ia_j)_i,j=1^n∈(𝒜⊗ M_n(ℂ))_+ and ξ=[ ξ_1; ⋮; ξ_n ]∈ L^2(B, φ)⊗ℂ^n, we have⟨[ζ]_Φ,[ζ]_Φ⟩_Φ =⟨Φ(A)ξ,ξ⟩≤ c ⟨Ψ(A)ξ,ξ⟩ =c⟨ [ζ]_Ψ,[ζ]_Ψ⟩_Ψ.By the fact that 𝒜⊗ L^2(ℬ, φ) is dense in ℋ^Ψ, ℋ^Φ respectively, the linear map u:ℋ^Ψ→ℋ^Φ defined by u[ζ]_Ψ=[ζ]_Φ for any ζ∈𝒜⊗ L^2(ℬ, φ) is a bounded bimodule map and u≤ c^1/2.By a direct check, we see that h=u^*u∈(_𝒜ℋ^Ψ_ℬ). Note that for any ξ∈ L^2(ℬ, φ),v_Φξ =[1_𝒜⊗ξ]_Φ =u[1_𝒜⊗ξ]_Ψ=u v_Ψξ.We see that uv_Ψ=v_Φ. Now, we obtain that for any a∈𝒜,Φ(a)= v_Φ^*π_Φ(a)v_Φ =v_Ψ^*u^*π_Φ(a)uv_Ψ= v_Ψ^*u^*uπ_Ψ(a)v_Ψ =v_Ψ^*hπ_Ψ(a)v_Ψ We shall prove the uniqueness of h. Suppose k is another positive operator in (_𝒜ℋ(Ψ)_ℬ) such that Φ(a)=v_Ψ^*kπ_Ψ(a)v_Ψ. Then we have that for any a_1, a_2∈𝒜 and ξ_1, ξ_2∈ L^2(ℬ, φ),⟨ k^1/2[a_1⊗ξ_1]_Ψ, k^1/2 [a_2⊗ξ_2]_Ψ⟩_Ψ = ⟨Φ(a_2^*a_1)ξ_1, ξ_2⟩_φ= ⟨ h^1/2[a_1⊗ξ_1]_Ψ, h^1/2[a_2⊗ξ_2]_Ψ⟩_Ψ.Hence k=h.(2) (1): Note that the operator h_∞-h is positive. We see that(h_∞Ψ-Φ)(·)=v_Ψ^*(h_∞-h)π_Ψ(·)v_Ψis a completely positive. This completes the proof. Suppose Ψ,Φ:𝒜→ℬ normal completely positive with Φ≼Ψ, the unique positive element in End(_𝒜ℋ^Ψ_ℬ) which satisfies (2) of Lemma <ref> will be called the derivative of Φ with respect to Ψ, and is denoted as h_Φ,Ψ.Given a normal completely positive map Φ:𝒜→ℬ and a faithful normal state φ on ℬ, put φ_0=φ∘Φ.Then by Kadison-Schwarz inequality, there exists M>0 such thatφ(Φ(a)^*Φ(a))≤ Mφ(Φ(a^*a)), a∈𝒜.It then follows thataΩ_φ_0→Φ(a)Ω_φ, a∈𝒜extends to a bounded linear map from L^2(𝒜,φ_0) to L^2(ℬ,φ).We shall adopt the notion from <cit.> and denote this bounded linear map as V^φ_Φ.Here we show that V^φ_Φ can be recovered from the derivative h_Φ,Ψ.Given normal completely positive map Ψ:𝒜→ℬ and a faithful state φ on ℬ, set φ_0=φ∘Ψ.Define the isometry u_Ψ:L^2(𝒜,φ_0)→ℋ^Ψ such that u_Ψ:aΩ_φ_0↦ aΩ_Ψ,φ, a∈𝒜.For every Φ≼Ψ:𝒜→ℬ and every faithful state φ on ℬ, V^φ_Φ=v^*_Ψ,φh_Φ,Ψu_Ψ,φ.Moreover, h_Φ,Ψ is the unique positive operator in (_𝒜ℋ^Ψ_ℬ) which satisfies the above equation. For each a∈𝒜, we havev^*_Ψ,φh_Φ,Ψ(u_Ψ,φaΩ_φ_0)= v^*_Ψ,φ h_Φ,ΨaΩ_Ψ,φ=(v^*_Ψ,φ h_Φ,Ψπ_Ψ(a)v_Ψ,φ)Ω_φ= Φ(a)Ω_φ.Hence V^φ_Φ=v^*_Ψ,φh_Φ,Ψu_Ψ,φ.Clearly for positive element k∈(_𝒜ℋ^Ψ_ℬ)satisfying V^φ_Φ=v^*_Ψ,φk u_Ψ,φ we havev^*_Ψ,φkπ_Ψ(a)v_Ψ,φ=Φ(a) for every a∈𝒜.Therefore by Lemma <ref> k=h_Φ,Ψ. Notice that v_Ψ and u_Ψ are left and right multiplications by the bounded vector Ω_Ψ in _𝒜ℋ^Ψ_ℬ. When ⊂ is a finite inclusion and Φ∈𝐂𝐏_() is completely positive we have Φ≼ E_ by Remark <ref>, and we shall abbreviate h_Φ,E_ as h_Φ. 
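In matrix algebras both the majorization constant and the derivative are effectively computable: c·Ψ−Φ is completely positive exactly when c·C_Ψ−C_Φ is positive semidefinite for the Choi matrices, so for invertible C_Ψ the optimal constant is the top eigenvalue of C_Ψ^{-1/2}C_ΦC_Ψ^{-1/2}. A sketch (ours; the chosen pair of maps is an arbitrary example):

import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

n = 2

def choi(phi):
    # Choi matrix of a linear map phi on M_n(C)
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            e = np.zeros((n, n), dtype=complex); e[i, j] = 1
            C += np.kron(e, phi(e))
    return C

Phi = lambda x: x                                              # the identity map
Psi = lambda x: 0.5 * x + 0.5 * np.trace(x) * np.eye(n) / n    # a depolarizing channel

C_phi, C_psi = choi(Phi), choi(Psi)
W = mpow(C_psi, -0.5)
c_min = np.linalg.eigvalsh(W @ C_phi @ W).max()   # smallest c with c*Psi - Phi CP
print(c_min, np.linalg.eigvalsh(c_min * C_psi - C_phi).min())  # gap is PSD (min >= 0)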
Now we discuss the connection between derivative for completely positive bimodule maps and their Fourier multipliers.First let us remark that, using planar algebra, for Φ∈𝐂𝐏_() we haveV^τ__Φ = [scale=0.5](0.3,-0.5) – (1,-0.5) – (1,1.5) – (0.3,1.5);(0.3,-0.5) – (0.3,1.5);(1,-0.5) – (1,1.5); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) Φ; .Since Φ≼ E_ by the result of <cit.>, its the derivative h_Φ is well-defined.So we are in a position to apply Proposition <ref>.Along with this, we obtain a generalization of the Pimsner-Popa inequality (c.f. Theorem 2.2 of <cit.>) for completely positive bimodule maps. Let ⊂ be a finite inclusion of finite von Neumann algebras that admits a downward Jones basic construction _-1⊂⊂^e_-1.Suppose Φ∈𝐂𝐏_(). Then δ^2Φ(e_-1) _∞=inf{c>0|cE_-Φ is completely positive}.We first prove that the map δ^2Φ(e_-1)_∞E_-Φ is positive.Recall that by the matrix trick as in Proposition 2.1 of <cit.>, any positive operator inis of the form a=∑^m_i=1n^*_ie_-1n_i with n_i∈. ThereforeΦ(a) =∑^m_i=1Φ(n^*_ie_-1n_i)=∑^m_i=1n^*_iΦ(e_-1)n_i≤Φ(e_-1)_∞∑^m_i=1n^*_in_i=δ^2Φ(e_-1)_∞∑^m_i=1n^*_iE_(e_-1)n_i=δ^2Φ(e_-1)_∞E_(a),so δ^2Φ(e_-1)_∞E_-Φ is positive. Now consider the case Φ⊗ id_n, which is a completely positive ⊗ M_n(ℂ)-bimodule map on ⊗ M_n(ℂ).We notice that e_-1⊗ I_n is a Jones projection in ⊗ M_n(ℂ).Thus by replacing e_-1 with e_-1⊗ I_n and choosing n_i to be in ⊗ M_n(ℂ), the above argument applies also to ( δ^2Φ(e_-1)_∞E_-Φ)⊗ id_n, showing that it is a positive map for each n.Since the tensor product preserves downward Jones basic construction, a --bimodule map onis positive if and only if it is completely positive. Moreover, we haveinf{c>0|cE_-Φ is completely positive}= inf{c>0|cE_-Φ is positive}= inf{c>0|cE_(e_-1)-Φ(e_-1)≥ 0}= δ^2Φ(e_-1)_∞.This completes the proof. Notice that the Pimsner-Popa inequality is a special case of this proposition with Φ=id_.That is, the constantδ^-2id_(e_-1)_∞=δ^-2=λ(,)is the largest among all λ>0 such that λ· id_≼ E_.Here λ(,) is called the Pimsner-Popa index. Let ⊂ be a finite inclusion of finite von Neumann algebras.Suppose Φ∈𝐂𝐏_(). Then under the natural isomorphism between(_ℋ^E__) and '∩_2 from Proposition <ref>, we haveh_Φ=δΦ.Since Φ≼ E_, the derivative h_Φ exists.Recall that in 𝒫^⊂ the tangle[scale=0.6] (1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(0.3,0) – (0.3,1);(-0.3,0)–(0.3,0)–(0.3,1)–(-0.3,1); represents the bimodule morphism __∋ x↦δ^1/2x⊗_1∈ _⊗__, and[scale=0.6, rotate = 180] (1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(0.3,0) – (0.3,1);(-0.3,0)–(0.3,0)–(0.3,1)–(-0.3,1); the bimodule morphism _⊗__∋ x⊗_y→δ^1/2 E_(x)y∈ __.Therefore with Proposition <ref>, we obtain:[scale=0.6] (1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(0.3,0) – (0.3,1);(-0.3,0)–(0.3,0)–(0.3,1)–(-0.3,1); =δ^1/2u_E_,τ_,[scale=0.6, rotate = 180] (1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(1.6,0)–(1.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (1,0.5) – (1,0);(0.3,0) – (0.3,1);(-0.3,0)–(0.3,0)–(0.3,1)–(-0.3,1); =δ^1/2v^*_E_,τ_.Consequently by Proposition <ref>: δ^-1[scale=0.6] (1.6,-0.5)–(1.6, 1) .. controls +(0,0.4) and +(0,0.4) .. (1,1) – (1,-0.5);(1.6,-0.5)–(1.6, 1) .. controls +(0,0.4) and +(0,0.4) .. (1,1) – (1,-0.5);(-0.3,1.5)–(-0.3,0) .. 
controls +(0,-0.4) and +(0,-0.4) .. (0.3,0) – (0.3,1.5);(-0.3,1.5)–(-0.3,0) .. controls +(0,-0.4) and +(0,-0.4) .. (0.3,0) – (0.3,1.5); [fill=white] (0,0) rectangle (1.3,1); [scale = 0.9] at (0.65,0.5) h_Φ; = v^*_E_,τ_h_Φu_E_,τ_=V^τ__Φ= [scale=0.6](0.3,-0.5) – (1,-0.5) – (1,1.5) – (0.3,1.5);(0.3,-0.5) – (0.3,1.5);(1,-0.5) – (1,1.5); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) Φ; .Therefore by planar isotopy, we have[scale=0.5](-0.3,-0.5) – (0.3,-0.5) – (0.3,1.5) – (-0.3,1.5);(1,-0.5) – (1.6,-0.5) – (1.6,1.5) – (1,1.5);(0.3,-0.5) – (0.3,1.5);(1,-0.5) – (1,1.5); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) h_Φ;=δ[scale=0.6] (2.2,2) – (2.2,-0.5) .. controls +(0,-0.4) and +(0,-0.4) .. (1.6,-0.5)–(1.6, 1) .. controls +(0,0.4) and +(0,0.4) .. (1,1) – (1,-1);(1,-1) – (2.8,-1) –(2.8,2) – (2.2,2) – (2.2,-0.5) .. controls +(0,-0.4) and +(0,-0.4) .. (1.6,-0.5)–(1.6, 1) .. controls +(0,0.4) and +(0,0.4) .. (1,1) – (1,-1);(-0.9,-1)–(-0.9,1.5).. controls +(0,0.4) and +(0,0.4).. (-0.3,1.5)–(-0.3,0) .. controls +(0,-0.4) and +(0,-0.4) .. (0.3,0) – (0.3,2);(0.3,2) – (-1.5,2) – (-1.5,-1) – (-0.9,-1)–(-0.9,1.5).. controls +(0,0.4) and +(0,0.4).. (-0.3,1.5)–(-0.3,0) .. controls +(0,-0.4) and +(0,-0.4) .. (0.3,0) – (0.3,2); [fill=white] (0,0) rectangle (1.3,1); [scale = 0.9] at (0.65,0.5) h_Φ; =δ[scale=0.5] (-0.6,-0.5) – (1.9,-0.5) – (1.9,1.5) – (-0.6,1.5);(0.3,-0.5)–(0.3, 1) .. controls +(0,0.4) and +(0,0.4) .. (-0.3,1) – (-0.3,-0.5);(0.3,-0.5)–(0.3, 1) .. controls +(0,0.4) and +(0,0.4) .. (-0.3,1) – (-0.3,-0.5)–(0.3,-0.5);(1,1.5)–(1,0) .. controls +(0,-0.4) and +(0,-0.4) .. (1.6,0) – (1.6,1.5);(1,1.5)–(1,0) .. controls +(0,-0.4) and +(0,-0.4) .. (1.6,0) – (1.6,1.5)– (1,1.5); [fill=white] (0,0) rectangle (1.3,1);at (0.65,0.5) Φ; = δΦ.Algebraically, we check that for all x∈, δ v^*_E_,τ_Φ u_E_,τ_ (xΩ_)= ∑^n_j=1v^*_E_,τ_(x η_j Ω_⊗_Φ(η^*_j) Ω_)= ∑^n_j=1Φ(E_(xη_j)η^*_j)Ω_ = Φ(x)Ω_.Hence by uniqueness of h_Φ as in Proposition <ref> the result follows.Suppose Φ_1≼Φ_2 and Ψ_1≼Ψ_2 are normal completely positive and we are allowed to compose Φ_1,Φ_2 with Ψ_1,Ψ_2.It follows that Φ_1Φ_2≼Ψ_1Ψ_2, hence h_Φ_1Φ_2,Ψ_1Ψ_2 exists.In the following we will prove a formula expressing h_Φ_1Φ_2,Ψ_1Ψ_2 in terms of h_Φ_1,Φ_2 and h_Ψ_1,Ψ_2. First we briefly recall from <cit.> relative tensor product of bimodules over von Neumann algebras. Suppose H_ℬ and _ℬKare ℬ-modules, and φ is afaithful normal state on ℬ. A vector ξ∈ H is called φ-bounded if the densely defined operator L_φ(ξ):L^2(ℬ,φ)∋Ω_φb↦ξ· b∈ Hadmits a bounded extension, which is still denoted as L_φ(ξ).Denote the dense suspace of φ-bounded vectors in H_ℬ as 𝔇(H,φ).Define a positive bilinear form on 𝔇(H,φ)⊗ K as⟨ξ_1⊗η_1,ξ_2⊗η_2⟩ _ℬ,φ=⟨π_K(L^*_φ(ξ_2)L_φ(ξ_1))η_1,η_2⟩ _K.The relative tensor product of H and K over ℬ with respect to φ, denoted as H⊗_φ K, will be the completion of 𝔇(H,φ)⊗ K/⟨·,·⟩ _ℬ,φ with respect to the norm induced by the bilinear form.The equivalence class of a vector ξ⊗η∈𝔇(H,φ)⊗ K will be denoted as ξ⊗_φη. In case where H is a 𝒜-ℬ-bimodule and K is a ℬ-𝒞-bimodule, the left (right) actions of 𝒜 (𝒞) descend to H⊗_φ K, making it a 𝒜-𝒞-bimodule.The map ι: _ℬK→ _ℬL^2(ℬ)⊗_φ K defined as ι(b·η)=bΩ_φ⊗_φη extends to a unitary intertwiner of left ℬ-representations.For x∈End(_𝒜H_ℬ) and y∈End(_ℬK_𝒞), the operator x⊗_φ y∈End(_𝒜H⊗_φ K_𝒞) is defined as (x⊗_φ y)(ξ⊗_φη)=xξ⊗_φ yη,and is called the tensor product of x and y. Suppose Ψ_1:ℬ→𝒞 and Ψ_2: 𝒜→ℬ are normal completely positive. Fix φ to be a faithful normal state on ℬ. 
Then there is an isometry 𝒴∈ (_𝒜ℋ^Ψ_1Ψ_2_𝒞,_𝒜ℋ^Ψ_2⊗_φℋ^Ψ_1_𝒞) satisfying 𝒴(Ω_Ψ_1Ψ_2,ϕ)=Ω_Ψ_2,φ⊗_φΩ_Ψ_1,ϕ.Consequently 𝒴v_Ψ_1Ψ_2=(v_Ψ_2⊗_φ id_ℋ^Ψ_1)v_Ψ_1.We first show that the assignment 𝒴: aΩ_Ψ_1Ψ_2c↦ aΩ_Ψ_2⊗_φΩ_Ψ_1 c preserves inner product. Indeed, for any a_1,a_2∈𝒜 and c_1,c_2∈𝒞:⟨ a_1Ω_Ψ_2⊗_φΩ_Ψ_1 c_1,a_2Ω_Ψ_2⊗_φΩ_Ψ_1 c_2⟩=⟨ L^*_φ(Ω_Ψ_2)a^*_2a_1L_φ(Ω_Ψ_2)Ω_Ψ_1c_1, Ω_Ψ_1c_2⟩=⟨Ψ_2(a^*_2a_1)Ω_Ψ_1c_1, Ω_Ψ_1c_2⟩=⟨Ψ_1(Ψ_2(a^*_2a_1))Ω_ϕc_1,Ω_ϕc_2⟩=⟨ a_1Ω_Ψ_1Ψ_2 c_1,a_2Ω_Ψ_1Ψ_2c_2⟩.Since Ω_Φ is cyclic in ℋ^Ψ_1Ψ_2, 𝒴 extends linearly to an isometric bimodule map from ℋ^Ψ_1Ψ_2 into ℋ^Ψ_2⊗_φℋ^Ψ_1. By the definition of v_Ψ, we have the equation 𝒴v_Ψ_1Ψ_2=(v_Ψ_2⊗_φ id_ℋ^Ψ_1)v_Ψ_1.Under the same assumption as in Lemma <ref>, suppose Φ_1≼Ψ_1 and Φ_2≼Ψ_2 are normal completely positive.Then Φ_1Φ_2≼Ψ_1Ψ_2 andh_Φ_1Φ_2,Ψ_1Ψ_2=𝒴^*(h_Φ_2,Ψ_2⊗_φ h_Φ_1,Ψ_1)𝒴.For any a∈𝒜, by Lemma <ref>, v_Ψ_1Ψ_2^*(𝒴^*(h_Φ_2,Ψ_2⊗_φ h_Φ_1,Ψ_1)𝒴)π_Ψ_1Ψ_2(a)v_Ψ_1Ψ_2 =v^*_Ψ_1(v^*_Ψ_2⊗_φ id_ℋ^Ψ_1)π_Ψ_2(a)(h_Φ_2,Ψ_2⊗_φ h_Φ_1,Ψ_1)(v_Ψ_2⊗_φ id_ℋ^Ψ_1)v_Ψ_1=v^*_Ψ_1(v^*_Ψ_2h_Φ_2,Ψ_2^1/2⊗_φ h_Φ,Ψ^1/2)π_Ψ_2(a)(h_Φ_2,Ψ_2^1/2v_Ψ_2⊗_φ h_Φ,Ψ^1/2)v_Ψ_1=v^*_Ψ_1h_Φ,Ψ^1/2π_Ψ_1(Φ_2(a))h_Φ,Ψ^1/2v_Ψ_1=Φ_1(Φ_2(a)).Hence by the uniqueness of derivative in Proposition <ref>, we have h_Φ_1Φ_2,Ψ_1Ψ_2=𝒴^*(h_Φ_2,Ψ_2⊗_φ h_Φ_1,Ψ_1)𝒴.We explain the role of isometry w appeared in Proposition <ref> in connection with convolution studied in planar algebra settings.Suppose now 𝒜=ℬ=𝒞= is a II_1 factor and Ψ_1=Ψ_2 =E_ is the trace-preserving conditional expectation down to a finite index subfactor.Then w∈ (_⊗__,_⊗_⊗__) is represented by the following diagram in 𝒫^⊂:𝒴 = δ^-1/2·[scale=0.6](0.6,0)–(0.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (0,0.5) – (0,0);(0.6,0)–(0.6, 0.5) .. controls +(0,0.4) and +(0,0.4) .. (0,0.5) – (0,0);(-0.6,0) .. controls +(0,0.5) and +(0,-0.5) .. (0,1.5);(1.2,0) .. controls +(0,0.5) and +(0,-0.5) .. (0.6,1.5);(-1,0) – (-0.6,0) .. controls +(0,0.5) and +(0,-0.5) .. (0,1.5)– (-1,1.5) – (-1,0); (1.6,0) – (1.2,0) .. controls +(0,0.5) and +(0,-0.5) .. (0.6,1.5)– (1.6,1.5) – (1.6,0);As proved in Section 4 for Φ_1,Φ_2∈𝐂𝐏_() we have h_Φ_i = δΦ_i.Now Proposition <ref> translates to the following equation of two boxes:Φ_2Φ_1 = [scale=0.6](-1.3,1.5) – (-1.3, -0.5) – (1.3, -0.5) – (1.3, 1.5) – (-1.3,1.5);(1.3, -0.5) – (1.3, 1.5);(-1.3,-0.5) – (-1.3,1.5);(0.6,1) .. controls +(0,0.4) and +(0,0.4) .. (-0.6,1);(0.6,0) .. controls +(0,-0.4) and +(0,-0.4) .. (-0.6,0);(0.6,1) .. controls +(0,0.4) and +(0,0.4) .. (-0.6,1) – (-0.6,0) – (-0.6,0) .. controls +(0,-0.4) and +(0,-0.4) .. (0.6,0) – (0.6,1);(1.3,-0.5) – (1.9,-0.5) – (1.9,1.5) – (1.3,1.5);(-1.9,-0.5) – (-1.3,-0.5) – (-1.3,1.5) – (-1.9,1.5) – (-1.9,-0.5); [fill=white] (0.3,0) rectangle (1.6,1);at (0.95,0.5) Φ_2; [fill=white] (-1.6,0) rectangle (-0.3,1);at (-0.95,0.5) Φ_1; .This is nothing but the fact that Fourier transform intertwines composition and convolution (c.f. Equation (2) <cit.>). § ARAKI RELATIVE ENTROPY FOR COMPLETELY POSITIVE MAPSIn <cit.>, Araki extends the relative entropy between density operators to positive linear functionals on arbitrary von Neumann algebras.Based on this notion together with Connes' correspondences (bimodules) we define and study the relative entropy between completely positive maps. 
Let us recall Araki's definition of relative entropy.Given normal normal positve linear functionals ρ,σ on a von Neumman algebra ℛ in its standard form, we use p_σ and p'_σ to represent the supports of σ in ℛ and ℛ'.Let ξ_ρ,ξ_σ be their unique representatives in the natural positive cone.When p_ρ≤ p_σ, the densely defined conjugate linear operatorS_ρ,σ: xξ_σ+η↦ p_σx^*ξ_ρ, x∈ℛ, p'_ση = 0.is closable (with closure still denoted as S_ρ,σ) and the relative modular operator is the positive selfadjoint operator Δ_φ,ψ=S^*_ρ,σS_ρ,σ.The relative entropy between ρ,σ is defined asS(ρ,σ)=⟨logΔ_ρ,σξ_ρ,ξ_ρ⟩. Let 𝒜,ℬ be von Neumann algebras and Ψ∈𝐂𝐏(𝒜,ℬ).We fix a faithful normal state φ on ℬ and consider the 𝒜-ℬ bimodule ℋ^Φ.Denote the normal faitfhul positive linear functional on (_𝒜ℋ^Ψ_ℬ) implemented by Ω_Ψ,φ as ω_Ψ,φ. Now let (_𝒜ℋ^Φ_ℬ) be in its standard form and ξ_Ψ,φ be the unique representative of ω_Ψ,φ in the natural positive cone. Denote the modular conjugation associated to ω_Ψ,φ as J_Ψ.For every Φ∈𝐂𝐏(𝒜,ℬ), define a positive normal functional on (_𝒜ℋ^Φ_ℬ) asω_Ψ,φ(Φ) (x)=⟨ xξ_Ψ,φ, J_Ψh_Φ,Ψξ_Ψ,φ⟩, x∈(_𝒜ℋ^Φ_ℬ).Notice that if Φ is unital and φ is a state, then so is ω_Ψ,φ(Ψ).By definition we have ω_Ψ,φ = ω_Ψ,φ(Ψ). Suppose Ψ,Φ:𝒜→ℬ are completely positive maps and Φ≼Ψ. Let φ be a faithful normal state on ℬ. Then the relative entropy S_φ(Φ, Ψ) of Φ,Ψ with respect to φ is defined to be:S_φ(Φ,Ψ) =S( ω_Ψ,φ(Φ) ,ω_Ψ,φ(Ψ)).Alternatively S_φ(Φ,Ψ) can be defined using the corresponding positive functionals on J_Ψ(_𝒜ℋ^Ψ_ℬ)J_Ψ = (_𝒜ℋ^Ψ_ℬ)^op.Using the modular conjugation j: (_𝒜ℋ^Ψ_ℬ)^op∋ x↦ J_Ψx^*J_Ψ∈(_𝒜ℋ^Ψ_ℬ), we define the state on (_𝒜ℋ^Ψ_ℬ)^op asω'_Ψ,ϕ(Φ) := ω_Ψ,φ(Φ)∘ j.Then ω'_Φ,Ψ(Φ) is implemented by the vector h^1/2_Φ,Ψξ_Ψ,φ.Therefore by the fact that j is an anti-isomorphism, we arrived at the following proposition. Under the assumption of Definition <ref>, we haveS_φ(Φ,Ψ)=S(ω'_Ψ,φ(Φ),ω'_Ψ,φ(Ψ)).Suppose Ψ,Φ,ℰ: 𝒜→ℬ are completely positive and Φ≼Ψ≼ℰ.Let φ be a faithful normal state on ℬ, then S_φ(Φ,Ψ)=S(ω_ℰ,φ(Φ),ω_ℰ,φ(Ψ)).Let (_𝒜ℋ^ℰ_ℬ,Ω_ℰ) be the dilation of ℰ with respect to φ.By definition of the derivative between completely positive maps the assignment aΩ_Ψb↦ ah^1/2_Ψ,ℰΩ_ℰb for all a∈𝒜 and b∈ℬ extends to an isometric bimodule intertwiner.Let p_0 be the support of h^1/2_Ψ,ℰ.In the rest of the proof, we identify (ℋ^Ψ,Ω_Ψ) with (p_0ℋ^ℰ,h^1/2_Ψ,ℰΩ_ℰ) as 𝒜-ℬ-bimodules.Consequently (_𝒜ℋ^Ψ_ℬ) is identified with p_0(_𝒜ℋ^ℰ_ℬ)p_0,and J_ω_Ψ(_𝒜ℋ^Ψ_ℬ)J_ω_Ψ is identified with J_ω_ℰ(_𝒜ℋ^ℰ_ℬ)J_ω_ℰp_0. Now we find that ω'_Ψ is implemented by the vector h^1/2_Ψ,ℰΩ_ℰ.By uniqueness of the derivative h_Φ,Ψ=h^-1/2_Ψ,ℰh_Φ,ℰh^-1/2_Ψ,ℰ as an operator on p_0ℋ^ℰ_0, so ω'_Ψ(Φ) is implemented by h^1/2_Φ,ℰΩ_ℰ.Therefore by Corollary <ref>: S_φ(Φ,Ψ)= S(ω'_Ψ,φ(Φ),ω'_Ψ,φ)= S(ω'_ℰ,φ(Φ),ω'_ℰ,φ(Ψ))= S(ω_ℰ,φ(Φ),ω_ℰ,φ(Ψ)),completing the proof.We now interpret the upper bound in Equation (<ref>) as a special case of the quantity S_φ(Φ,Ψ). Let 𝒩⊂ℳ is a finite inclusion of finite von Neumann algebras. 
Suppose Φ,Ψ∈𝐂𝐏_() and Φ≼Ψ.Then δ D_τ__2(Δ^1/2ΦΔ^1/2Δ^1/2ΨΔ^1/2) = S_τ_ℳ(Φ,Ψ).By Proposition <ref> we haveS_τ_ℳ(Φ,Ψ)=S(ω_E_,τ_(Φ),ω_E_,τ_(Ψ)).For any Φ_0∈𝐂𝐏_(), by Theorem <ref> δΦ_0 = h_Φ_0,E_.Then Proposition <ref> impliesδω_E_,τ_(Φ_0) = ω_E_,τ_(h_Φ_0,E_)=τ_(Φ_0(1))=δτ__2(ΔΦ_0),so we have ω_E_,τ_ = ω_ and Δ is the density operator of ω_E_,τ_ with respect to τ__2.The density operators for ω_E_,τ_(Φ) and ω_E_,τ_(Ψ) with respect to the trace can then be computed as Δ^1/2h_ΦΔ^1/2 and Δ^1/2h_ΨΔ^1/2 respectively.ThusS(ω_E_,τ_(Φ),ω_E_,τ_(Ψ)) =D_τ__2(Δ^1/2h_ΦΔ^1/2Δ^1/2h_ΨΔ^1/2)= δ D_τ__2(Δ^1/2ΦΔ^1/2Δ^1/2ΨΔ^1/2). In the end of this section, we discuss the monotonicity of S_φ(Φ,Ψ) under compositions with completely positive maps. Let Ψ_1,Φ_1∈𝐂𝐏(ℬ,𝒞) with Φ_1 ≼Ψ_1.Then for any faithful normal positive linear functional ϕ on 𝒞 and Ψ_2∈(𝒜,ℬ):S_ϕ(Φ_1Ψ_2,Ψ_1Ψ_2) ≤ S_ϕ(Φ_1,Ψ_1).We fix a faithful normal state φ on ℬ and adopt the setting of Proposition <ref>. Consider the map Γ: (_ℬℋ^Ψ_1_𝒞)→(_𝒜ℋ^Ψ_1Ψ_2_𝒞)Γ(y)=𝒴^*(id_ℋ^Ψ_2⊗_φ y)𝒴,which is normal unital and completely positive.Proposition <ref> then reads (with Ψ_2=Φ_2)Γ(h_Φ_1,Ψ_1)=h_Φ_1Ψ_2,Ψ_1Ψ_2.By defining properties of w as in Lemma <ref> and Ψ_2 being unital one checks for any y∈(_ℬℋ^Ψ_1_𝒞), ω_Ψ_1Ψ_2∘Γ (y)=⟨Ω_Ψ_2⊗_φyΩ_Ψ_1,Ω_Ψ_2⊗_φΩ_Ψ_1⟩=⟨ yΩ_Ψ_1,Ω_Ψ_1⟩=ω_Ψ_1(y).Therefore ω_Ψ_1Ψ_2∘Γ=ω_Ψ_1.In particular, Γ is faithful.Let Γ^*_Ψ_1Ψ_2:=Γ^*_ω_Ψ_1Ψ_2 be the adjoint in the sense of Petz <cit.>.We have that Γ^*_Ψ_1Ψ_2:(_𝒜ℋ^Ψ_1Ψ_2_𝒞)→(_ℬℋ^Ψ_1_𝒞) and that⟨Γ^*_Ψ_1Ψ_2(x)ξ_Ψ_1,J_Ψ_1yξ_Ψ_1⟩ = ⟨ xξ_Ψ_1Ψ_2,J_Ψ_1Ψ_2Γ(y)ξ_Ψ_1Ψ_2⟩,wheneverx∈(_𝒜ℋ^Ψ_1Ψ_2_𝒞), y∈(_ℬℋ^Ψ_1_𝒞). Taking y=1, we have ω_Ψ_1∘Γ^*_Ψ_1Ψ_2=ω_Ψ_1Ψ_2. In addition to this, one has for any x∈(_𝒜ℋ^Ψ_1Ψ_2_𝒞):ω_Ψ_1(Φ_1)∘Γ^*_Ψ_1Ψ_2(x)=⟨Γ^*_Ψ_1Ψ_2(x)ξ_Ψ_1, J_Ψ_1h_Φ_1,Ψ_1ξ_Ψ_1⟩=⟨ xξ_Ψ_1Ψ_2, J_Ψ_1Ψ_2Γ(h_Φ_1,Ψ_1)ξ_Ψ_1Ψ_2⟩=⟨ xξ_Ψ_1Ψ_2, J_Ψ_1Ψ_2h_Φ_1Ψ_2,Ψ_1Ψ_2ξ_Ψ_1Ψ_2⟩=ω_Ψ_1Ψ_2(Φ_1Ψ_2)(x).Therefore the monotonicity of S_ϕ follows from that of Araki's relative entropy: S_φ(Φ_1Ψ_2,Ψ_1Ψ_2)=S(ω_Ψ_1Ψ_2(Φ_1Ψ_2),ω_Ψ_1Ψ_2)=S(ω_Ψ_1(Φ_1)∘Γ^*_Ψ_1Ψ_2,ω_Ψ_1∘Γ^*_Ψ_1Ψ_2)≤ S(ω_Ψ_1(Φ_1),ω_Ψ_1)=S_φ(Φ_1,Ψ_1).This completes the proof. Let Ψ_1∈(ℬ,𝒞) and Φ_2 ≼Ψ_2: 𝒜→ℬ be normal completely positive.Then for any faithful normal positive linear ϕ on 𝒞:S_ϕ(Ψ_1Φ_2,Ψ_1Ψ_2) ≤ S_ϕ∘Ψ_1(Φ_2,Ψ_2).The proof is similar in structure to the previous one.We again adopt the setting of Proposition <ref> and define the normal unital completely positive map Λ: (_𝒜ℋ^Ψ_2_ℬ)→(_𝒜ℋ^Ψ_1Ψ_2_𝒞) asΛ(y)=𝒴^*(y⊗_ϕ∘Ψ_1id_ℋ^Ψ_1)𝒴.In what follows, we will use the full notation ω_Ψ,φ.Now we check that for any x∈(_𝒜ℋ^Ψ_2_ℬ):ω_Ψ_1Ψ_2,ϕ∘Λ(x)=⟨ xΩ_Ψ_2,ϕ∘Ψ_1⊗_ϕ∘Ψ_1Ω_Ψ_1,Ω_Ψ_2,ϕ∘Ψ_1⊗_ϕ∘Ψ_1Ω_Ψ_1⟩=⟨ v^*_Ψ_2,ϕ∘Ψ_1yv_Ψ_2,ϕ∘Ψ_1Ω_Ψ_1,Ω_Ψ_1⟩=ϕ∘Ψ_1(v^*_Ψ_2, ϕ∘Ψ_1yv_Ψ_2,ϕ∘Ψ_1)=ω_Ψ_2,ϕ∘Ψ_1(y).Therefore with Λ^*_Ψ_1Ψ_2,ϕ: (_𝒜ℋ^Ψ_1Ψ_2_𝒞)→(_𝒜ℋ^Ψ_2_ℬ) being the Petz transpose of Λ with respect to ω_Ψ_1Ψ_2,ϕ, we check that ω_Ψ_1Ψ_2,ϕ(Ψ_1Φ_2)= ω_Ψ_2,ϕ∘Ψ_1(Φ_2)∘Λ^*_Ψ_1Ψ_2,ϕ, ω_Ψ_1Ψ_2,ϕ= ω_Ψ_2,ϕ∘Ψ_1∘Λ^*_Ψ_1Ψ_2,ϕ.The conclusion follows again from the monotonicity of Araki's relative entropy. 
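Before turning to convexity, we record a finite-dimensional sanity check (our addition, under the standard finite-dimensional assumptions): representing M_n(ℂ) on its Hilbert-Schmidt space, the relative modular operator acts by Δ_{ρ,σ}(x) = ρxσ^{-1}, the positive-cone representative of ρ is ρ^{1/2}, and ⟨ρ^{1/2}, (logΔ_{ρ,σ})ρ^{1/2}⟩ = Tr ρ(logρ − logσ), recovering Umegaki's formula:

import numpy as np
from scipy.linalg import logm, sqrtm

def rand_density(n, rng):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(2)
rho, sigma = rand_density(3, rng), rand_density(3, rng)

xi = sqrtm(rho)                                    # vector representative of rho
log_delta_xi = logm(rho) @ xi - xi @ logm(sigma)   # log Delta = L_{log rho} - R_{log sigma}

araki = np.trace(xi.conj().T @ log_delta_xi).real  # <xi, (log Delta) xi> in HS inner product
umegaki = np.trace(rho @ (logm(rho) - logm(sigma))).real
print(araki, umegaki)                              # the two numbers coincide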
Let {Φ_i}^n_{i=1} and {Ψ_i}^n_{i=1} be two families of normal completely positive maps from 𝒜 to ℬ. Then for any faithful normal positive linear functional φ on ℬ and any probability mass function {p_i}^n_{i=1}, we have
S_φ(∑^n_{i=1}p_iΦ_i,∑^n_{i=1}p_iΨ_i)≤∑^n_{i=1} p_i S_φ(Φ_i,Ψ_i).

Without loss of generality, we assume p_i≠ 0 for all i. Define ℰ_1∈𝐔𝐂𝐏(𝒜,𝒜^⊕ n) and ℰ_2∈𝐔𝐂𝐏(ℬ^⊕ n,ℬ) as
ℰ_1 (a) = (a)^n_{i=1}, a∈𝒜; ℰ_2 ((b_i)^n_{i=1}) = ∑^n_{i=1} p_ib_i, (b_i)^n_{i=1}∈ℬ^⊕ n.
Meanwhile define Φ,Ψ∈𝐂𝐏(𝒜^⊕ n,ℬ^⊕ n) as
Φ((a_i)^n_{i=1}) = (Φ_i(a_i))^n_{i=1}, Ψ((a_i)^n_{i=1}) = (Ψ_i(a_i))^n_{i=1}, (a_i)^n_{i=1}∈𝒜^⊕ n.
It is then straightforward to check that S_φ(∑^n_{i=1}p_iΦ_i,∑^n_{i=1}p_iΨ_i) = S_φ(ℰ_2Φℰ_1,ℰ_2Ψℰ_1). By right monotonicity (Theorem <ref>), then by left monotonicity (Theorem <ref>), we obtain
S_φ(ℰ_2Φℰ_1,ℰ_2Ψℰ_1)≤ S_φ∘ℰ_2(Φ,Ψ).
Since φ∘ℰ_2 = (p_iφ)^n_{i=1}, the result follows.

§ RÉNYI RELATIVE ENTROPIES

Given a finite von Neumann algebra ℳ with a normal faithful normalized trace τ, for any 1/2≤ p≤∞ and any density operators ρ,σ∈ℳ, the sandwiched Rényi relative entropy between ρ and σ is defined as
D_{p,τ}(ρ‖σ)=1/(p-1) logτ(|σ^{-1/2p'}ρσ^{-1/2p'}|^p),
where p' is the conjugate exponent, 1/p+1/p'=1. It is known that D_{1,τ}(ρ‖σ) is the usual relative entropy, and, as a function of p, D_{p,τ}(ρ‖σ) is nondecreasing on [1/2,+∞]. Moreover, it has been proved that the Rényi entropy is monotone under completely positive trace-preserving maps <cit.>.

Recall that for a finite inclusion 𝒩⊆ℳ, the Pimsner-Popa index is defined as
λ(ℳ,𝒩) = sup{λ>0 | E^ℳ_𝒩 - λ id_ℳ is positive}.
In <cit.>, Gao, Junge, and Laracuente studied relations between the Pimsner-Popa index and Rényi entropy. When 𝒩⊂ℳ is a finite inclusion of tracial von Neumann algebras, they proved that (c.f. Proposition 3.2 in <cit.>)
-logλ(ℳ,𝒩) ≥ D_p(ℳ|𝒩) ≥ H(ℳ|𝒩), ∀ p∈[1,+∞],
where the quantity in the middle is defined as D_p(ℳ|𝒩) = sup_ρ inf_σ D_p(ρ‖σ), with the supremum taken over all densities in ℳ and the infimum taken over all densities in 𝒩. Moreover, if 𝒩⊂ℳ are subfactors of type II_1 or finite dimensional, then by Theorem 3.1 of <cit.>
-logλ(ℳ,𝒩) = D_p(ℳ|𝒩), ∀ p∈[1/2,∞].

We consider the Rényi relative entropy between completely positive bimodule maps. For a finite inclusion 𝒩⊂ℳ, we define for p∈[1/2,+∞]
S_p(Φ,Ψ) = D_{p,τ_ℳ_2}(Δ^{1/2}Φ̂Δ^{1/2}‖Δ^{1/2}Ψ̂Δ^{1/2}),
where Φ≼Ψ are completely positive bimodule maps.

Let 𝒜 and ℬ be von Neumann algebras. For Φ,Ψ∈𝐂𝐏(𝒜,ℬ) with Φ≼Ψ, we define
λ(Φ,Ψ) = sup{λ>0 | Ψ-λΦ is completely positive}.
Otherwise, set λ(Φ,Ψ) = +∞.

Let 𝒩⊂ℳ be a finite inclusion of finite von Neumann algebras. Let Φ,Ψ∈𝐂𝐏_𝒩(ℳ). Then for p∈[1,+∞],
-logλ(Φ,Ψ) ≥ S_p(Φ,Ψ) ≥ H(Φ|Ψ).

By Theorem <ref>, we have
H(Φ|Ψ)≤δ D_{1,τ_ℳ_2}(Δ^{1/2}Φ̂Δ^{1/2}‖Δ^{1/2}Ψ̂Δ^{1/2}).
On the other hand, by definition
δ D_{∞,τ_ℳ_2}(Δ^{1/2}Φ̂Δ^{1/2}‖Δ^{1/2}Ψ̂Δ^{1/2}) = loginf{λ>0|λΨ̂≥Φ̂}.
By Corollary <ref>, λΨ̂-Φ̂≥ 0 if and only if λΨ-Φ is completely positive, hence loginf{λ>0|λΨ̂≥Φ̂} = -logλ(Φ,Ψ). The result then follows since D_{p,τ_ℳ_2}(·‖·) is nondecreasing with respect to p on the interval [1,∞].

Let us take Φ = id_ℳ and Ψ = E_𝒩 as an example. When 𝒩⊂ℳ admits a downward Jones basic construction, for instance when 𝒩⊂ℳ is a subfactor, we have
S_∞(id_ℳ,E_𝒩)= -logλ(ℳ,𝒩) = log [ℳ:𝒩],
by Proposition <ref>, as well as
S_1(id_ℳ,E_𝒩) = H(ℳ|𝒩)
by Theorem <ref>. Thus in this case both bounds in Equation (<ref>) are tight. Therefore the Rényi relative entropy S_p(id_ℳ,E_𝒩) interpolates between the Pimsner-Popa index and the Connes-Størmer relative entropy. This suggests that S_p(Φ,Ψ) is a more natural entropic quantity for completely positive maps.
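For matrix algebras the quantities in this section can be evaluated directly. The sketch below is ours (random densities, trace convention, all names illustrative); it computes the sandwiched Rényi divergence defined above and illustrates that it is nondecreasing in p, with the Umegaki relative entropy recovered near p = 1:

import numpy as np
from scipy.linalg import logm, fractional_matrix_power as mpow

def rand_density(n, rng):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def D_p(rho, sigma, p):
    # sandwiched Renyi divergence; note sigma^{(1-p)/2p} = sigma^{-1/2p'} with 1/p + 1/p' = 1
    s = mpow(sigma, (1 - p) / (2 * p))
    return np.log(np.trace(mpow(s @ rho @ s, p)).real) / (p - 1)

rng = np.random.default_rng(3)
rho, sigma = rand_density(3, rng), rand_density(3, rng)

for p in [0.5, 0.9, 1.0001, 2.0, 10.0]:
    print(p, D_p(rho, sigma, p))                   # nondecreasing in p
print("Umegaki:", np.trace(rho @ (logm(rho) - logm(sigma))).real)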
Now assuming ℳ is a finite factor, we can further compute, in a way similar to the proof of Proposition <ref>, that
S_{1/2}(id_ℳ,E_𝒩) = 2logδ - logτ_ℳ(Δ^{-1}_0).
We notice that if 𝒩⊂ℳ admits a downward Jones basic construction, then
H(ℳ|𝒩) = S_1(id_ℳ,E_𝒩) ≥ S_{1/2}(id_ℳ,E_𝒩),
since the sandwiched Rényi relative entropy does not decrease as the parameter increases. When the inclusion does not admit a downward Jones basic construction, the reversed inequality can occur. For instance, if 𝒩 = ⊕_{k∈K} M_{n_k}(ℂ) and ℳ = M_m(ℂ) with m = ∑_{k∈K}a_kn_k, then
S_1(id_ℳ,E_𝒩) - S_{1/2}(id_ℳ,E_𝒩) = ∑_{k∈K} (n_ka_k/m) log(a_k/n_k).
Compared with
S_1(id_ℳ,E_𝒩) - H(ℳ|𝒩) = ∑_{k∈K} (n_ka_k/m) logmax{a_k/n_k,1},
we see that S_{1/2}(id_ℳ,E_𝒩)>H(ℳ|𝒩) if and only if a_k<n_k for some k∈K. Thus the sign of the difference H(ℳ|𝒩)-S_{1/2}(id_ℳ,E_𝒩) can be treated as a criterion for the existence of a downward Jones basic construction.
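The criterion above is elementary to evaluate in examples. A short sketch (ours; the dimension and multiplicity vectors are arbitrary illustrative choices) computes the two closed-form gaps for 𝒩 = ⊕_k M_{n_k}(ℂ) ⊂ ℳ = M_m(ℂ) with multiplicities a_k:

import numpy as np

def gaps(n, a):
    # returns (S_1 - S_{1/2}, S_1 - H) for N = sum_k M_{n_k} inside M_m, multiplicities a_k
    n, a = np.asarray(n, float), np.asarray(a, float)
    w = n * a / np.sum(a * n)
    return np.sum(w * np.log(a / n)), np.sum(w * np.log(np.maximum(a / n, 1.0)))

print(gaps([2, 1], [1, 3]))  # a_1 < n_1: S_{1/2} > H, so no downward basic construction
print(gaps([1, 1], [2, 3]))  # a_k >= n_k for every k: the two gaps coincide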
http://arxiv.org/abs/2312.16576v2
{ "authors": [ "Zishuo Zhao" ], "categories": [ "math.OA", "cs.IT", "math-ph", "math.FA", "math.IT", "math.MP", "46L37, 46L55, 94A15" ], "primary_category": "math.OA", "published": "20231227135533", "title": "Relative Entropy for Quantum Channels" }
Department of Engineering Science, National Cheng Kung University, Tainan 701401, Taiwan Center for Quantum Frontiers of Research & Technology, NCKU, Tainan 701401, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan
Department of Engineering Science, National Cheng Kung University, Tainan 701401, Taiwan
Department of Engineering Science, National Cheng Kung University, Tainan 701401, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan
Department of Engineering Science, National Cheng Kung University, Tainan 701401, Taiwan
Department of Electrophysics, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
Department of Physics, National Cheng Kung University, Tainan 701401, Taiwan Center for Quantum Frontiers of Research & Technology, NCKU, Tainan 701401, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan
[email protected] Department of Engineering Science, National Cheng Kung University, Tainan 701401, Taiwan
To unequivocally distinguish genuine quantumness from classicality, a widely adopted approach appeals to the negativity within a joint quasi-distribution representation as compelling evidence for the nonclassical essence. However, constructing a joint quasi-distribution with negativity from experimental data typically proves highly cumbersome. Here we propose a computational approach utilizing a deep generative model integrated with color mapping to construct the bivariate joint quasi-distribution functions by processing three marginals. We first apply our model to predict the Wigner functions subject to thermal noises. Our model successfully predicts the Wigner functions with remarkable accuracy by processing three marginal probability distributions. We also tackle the challenging problem of the canonical Hamiltonian ensemble representation (CHER), which has been developed for characterizing dynamical process nonclassicality. Furthermore, we design optimized synthetic datasets to train the model, overcoming the ground-truth deficiency of the CHER problem. Although trained with synthetic data, the physics-informed optimization enables our model to capture the detrimental effect of thermal fluctuations on nonclassicality. Our approach also provides a significant reduction of the experimental effort required to construct the Wigner functions of quantum states.
Deep learning the nonclassicality within quasi-distribution representations from marginals Chi-Hua Yu January 14, 2024 ==========================================================================================
§ INTRODUCTION
Since the birth of quantum theory, the obscure boundaries between quantum and classical realms have always been a subject of fascination, spurring extensive research interest <cit.>. One widely adopted approach to distinctly prove genuine quantumness as separate from classicality hinges on a classical strategy's inability to mimic the statistics of an experiment. The underlying philosophy of such an approach recognizes that, while a certain perspective might be intuitively true in the classical realm, it could lead to contradictory outcomes in quantum experiments. As such, when a classical strategy falls short, it strongly evidences the distinct quantum essence referred to as nonclassicality. This is especially evident in the field of quantum information science.
For instance, the renowned example of the experimental violation <cit.> of Bell's inequality <cit.> accentuates the breakdown of the EPR paradox <cit.> formulated on the tenets of realism and locality. This specific kind of nonclassical correlation resulting in the violation of Bell's inequality is termed Bell nonlocality <cit.>. In line with this philosophy, the inadequacy of using probability distributions to describe quantum systems indicates the nonclassicality of quantum states. For continuous-variable quantum systems like nano-mechanical resonators or cavity fields, quantum states are described by quasi-distribution functions such as the Wigner function <cit.> or the Glauber-Sudarshan P representation <cit.>. These can be interpreted as the phase-space representation of quantum states. In the presence of strong quantum coherence, the quasi-distribution function exhibits interference patterns, yielding negativity in phase space. Hence, such a quantum state lacks a classical analog, highlighting its inherent quantum essence. This philosophy can also be employed to signify the nonclassicality of quantum dynamical processes <cit.> through the newly emerging canonical Hamiltonian ensemble representation (CHER) <cit.>. In CHER, the dynamical behaviors of open quantum systems are described by the ensemble average of unitary evolutions characterized by a Hamiltonian ensemble (HE) <cit.>, which comprises a collection of parameterized system Hamiltonian operators and the corresponding quasi-distribution function. Crucially, the presence of negativity in the quasi-distribution function indicates the establishment of system-environment quantum correlations, thereby emphasizing the quantum essence of the dynamical process. Consequently, the quasi-distribution representation emerges as a cornerstone in characterizing the inherent nonclassicality in quantum dynamical processes.
Despite its potential in characterizing the nonclassicality, constructing a multivariate joint quasi-distribution with negativity from experimental data remains a cumbersome task. For instance, to construct the Wigner function, one might first determine the complete quantum state through methods such as tomographically measuring rotated quadratures <cit.>, transformations from different quasi-distribution representations <cit.>, or characteristic-function tomography <cit.>. Afterward, the Wigner function can be constructed by post-processing the measured data. Alternatively, one can also directly scan the Wigner function point-by-point in phase space <cit.>. All these approaches demand extensive experimental efforts. Even in the machine-learning-assisted method <cit.>, thousands of data points are still required.
The complexity escalates when dealing with the nonclassicality of dynamical processes, as it involves quantum process tomography. According to the underlying Lie algebraic structure, only specific marginals can be efficiently solved, suggesting that the CHER can only be built from several marginal distributions. This is reminiscent of a very challenging mathematical issue that has been extensively studied <cit.>. Particularly, in the context of CHER, complications arise as the quasi-distribution may exhibit negative values, rendering previously developed mathematical techniques unreliable. Therefore, a reliable novel technique to construct quasi-distribution functions, allowing for negativity, from given marginal distributions is in high demand.
Such a technique would not only be crucial for constructing the CHER, but might also significantly cut down the experimental efforts needed for constructing the Wigner functions of quantum states. Here we propose to harness the power of deep generative models (DGMs) to generate the corresponding joint quasi-distribution by processing a series of marginals (Fig. <ref>). DGMs excel in discerning hidden patterns in expansive datasets, proving invaluable in extracting new insights. Our DGM integrates the ResNet structure <cit.> to seamlessly generate the bivariate joint quasi-distribution for both challenges. To further enhance our DGM's pattern detection within an image, we have devised a color mapping strategy to translate the joint quasi-distribution into three monochromatic images. To validate our approach and to offer an intuitive result presentation, we present examples including the Wigner function of a quantum harmonic oscillator exposed to a thermal bath and the CHER of qubit-pair pure dephasing at various temperatures.
Our approach offers a streamlined computational alternative, lessening the rigorous experimental demands of Wigner function measurements. Once the well-trained DGM receives three marginals, which correspond to three probability distributions in real or momentum space, it can generate the desired Wigner function. Simultaneously, we have addressed the issue of missing ground truth (GT) for predicting the CHER. By leaning on prior knowledge about the target marginals, synthetic training datasets can be effectively optimized. We demonstrate that our physics-informed approach adeptly constructs high-precision Wigner functions and identifies the detrimental effect of thermal fluctuation on the nonclassicality, especially as temperature rises in the qubit-pair pure dephasing dynamics.
§.§ Constructing joint quasi-distribution functions from marginals
The quasi-distribution representation is a prevalent approach in quantum physics. Its negativity conveys a visualized insight into genuine quantum characteristics. However, in many practical scenarios, its construction can be challenging or demands substantial efforts. For instance, the direct point-by-point measurement of Wigner functions in phase space is experimentally intensive, and the construction of the CHERs for higher-dimensional dynamical processes is typically infeasible. In contrast, the marginals derived from the joint quasi-distribution functions describing nonclassicality are generically standard probability distributions. This renders the marginals also physically meaningful and, most importantly, makes them accessible for construction using experimental raw data.
Motivated by this observation, our goal is to devise an approach for constructing joint quasi-distribution functions, which permit negativity, using three marginals. As illustrated in Fig. <ref>, we harness artificial intelligence techniques, specifically designing a DGM integrated with color mapping. Once we have the marginals, whether sourced from experimental data or theoretical calculations, our well-trained DGM efficiently constructs the corresponding joint quasi-distribution function with notable precision.
§.§ Wigner functions and the marginals
The motional state of a continuous-variable quantum system can be equivalently described by either the density matrix ρ or the Wigner function 𝒲(x,p)=(πħ)^-1∫⟨x+x^'|ρ|x-x^'⟩exp(-i2px^'/ħ)dx^', which is a real-valued quasi-distribution function permitting negativity.
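For a pure state with wavefunction ψ(x), the expression above reduces to 𝒲(x,p)=(πħ)^-1∫ψ(x+x^')ψ^*(x-x^')exp(-i2px^'/ħ)dx^', which is straightforward to evaluate on a grid. A minimal numerical sketch (Python with numpy, ħ = 1); the even cat-state wavefunction below is a hypothetical stand-in, not one of the specific states of this work:

```python
import numpy as np

x0 = 2.0                                 # separation of the two Gaussian peaks
xs = np.linspace(-8.0, 8.0, 321)         # position grid (reused for the shift x')
ps = np.linspace(-8.0, 8.0, 321)         # momentum grid

def psi(x):                              # unnormalized even "cat" wavefunction
    return np.exp(-(x - x0)**2 / 2) + np.exp(-(x + x0)**2 / 2)

norm2 = np.trapz(np.abs(psi(xs))**2, xs)

# W(x,p) = (1/pi) * Int psi(x+x') psi*(x-x') exp(-2 i p x') dx'
W = np.empty((xs.size, ps.size))
phase = np.exp(-2j * np.outer(ps, xs))   # e^{-2ipx'} on the (p, x') grid
for i, x in enumerate(xs):
    f = psi(x + xs) * np.conj(psi(x - xs)) / norm2
    W[i, :] = np.real(np.trapz(phase * f[None, :], xs, axis=1)) / np.pi

Wx = np.trapz(W, ps, axis=1)             # marginal W(x) = |psi(x)|^2
Wp = np.trapz(W, xs, axis=0)             # marginal W(p)
print("total probability:", np.trapz(Wx, xs))  # ~ 1 on a sufficiently large grid
print("most negative value of W:", W.min())    # < 0: interference fringes
# The oblique marginal W(u), u = (x+p)/sqrt(2), follows by integrating W along
# the direction orthogonal to u, e.g., after rotating the grid by pi/4.
```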
Such a phase-space representation has been proven useful for visualizing quantum states. Particularly, the negativity indicates the inherent quantum essence of the states. Most importantly, a unique property of the Wigner function is that the marginals are standard probability distributions over certain quadratures, and therefore easily experimentally measurable, in contrast to the point-by-point measurement of the Wigner function in phase space <cit.>. However, constructing a joint quasi-distribution from a small number of specific marginals remains a challenging mathematical issue <cit.>. Moreover, the well-developed inverse Radon transform algorithm requires extensive measurements over rotated quadratures <cit.> for tomographically constructing the Wigner function.
We design a DGM for constructing the Wigner function using merely three marginals. To showcase our approach, here we specifically consider a quantum harmonic oscillator in coherent or cat states exposed to a thermal bath. The two natural marginals, 𝒲(x) and 𝒲(p), possess a clear physical interpretation as the probability distributions in real and momentum space, while the third one, 𝒲(u), over the oblique variable u=(x+p)/√(2), is accessible in real space after a π/4-rotation of the quantum states (Supplementary Note 1).
§.§ Data representation and color mapping
To train a supervised DGM, it is necessary to prepare a labelled training dataset. In the case of the Wigner functions, the training data can be generated efficiently (Methods). The data representation is schematically illustrated in Fig. <ref>. The feature of each training datum consists of three marginals (𝒲(x),𝒲(p),𝒲(u)), which can be derived analytically (Fig. <ref>d), even in the presence of thermal noise. Each marginal is numerically sampled into 721 pixels as the raw data. To take advantage of our DGM in discerning patterns and relationships within the RGB channels of an image, each generated Wigner function z=𝒲(x,p) is translated into three monochromatic images according to a color mapping (Figs. <ref>b and c). The color mapping comprises three functions (f_R(ζ),f_G(ζ),f_B(ζ)), mapping the (rescaled) height value ζ=(z+0.45)/0.9 to a set of RGB values, resulting in the three monochromatic images. Note that the color mapping should be optimized according to the training data to be learned (Methods). Finally, the output monochromatic images serve as the label to be learned by the DGM, each of which is sampled into a 256 × 256 pixel array. The visualization of a single training datum is shown in Fig. <ref>e (Supplementary Note 3). Crucially, the color mapping associates the positivity and the negativity in the generated data to the R- and B-channels, respectively. This helps our DGM pay particular attention to the negativity in the Wigner function, capturing the nonclassicality of quantum states.
§.§ ResNet-based deep generative model
Our DGM is structured into six stages, each constructed by repeatedly stacking two core building blocks: the identity and deconvolution blocks. As our model aims to produce three monochromatic images from input marginals, the main stream of both blocks utilizes three deconvolutional layers. These layers extract key patterns from the input marginals and expand them to produce the output monochromatic images of a quasi-distribution. For the model to handle this intricate task, it requires more depth and additional deconvolutional layers. However, this intensifies the vanishing gradient problem, which significantly impedes our model's training.
To address this challenge, we have incorporated a shortcut to each block, known as the ResNet structure <cit.>. Thanks to the skip connections in the ResNet structure, information from previous layers is retained, substantially mitigating the vanishing gradient problem. This allows us to deepen our model, enhancing its capabilities. Further insights and the comprehensive layout of our DGM are shown in Supplementary Note 4.
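In code, such a building block can be sketched as follows (a minimal PyTorch sketch; the channel counts, kernel sizes, and strides are illustrative and not the values used in the actual DGM, whose layout is given in Supplementary Note 4):

```python
import torch
import torch.nn as nn

class DeconvBlock(nn.Module):
    """ResNet-style deconvolution block: three transposed convolutions on the
    main path plus a shortcut that upsamples the input so it can be added back.
    With stride=1 and c_in == c_out the shortcut is the identity, which
    corresponds to the identity block."""
    def __init__(self, c_in, c_out, stride=2):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(c_in, c_out, 3, stride=stride,
                               padding=1, output_padding=stride - 1),
            nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.ConvTranspose2d(c_out, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.ConvTranspose2d(c_out, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
        )
        if stride == 1 and c_in == c_out:
            self.skip = nn.Identity()
        else:  # match the spatial size and channel count of the main path
            self.skip = nn.ConvTranspose2d(c_in, c_out, 1, stride=stride,
                                           output_padding=stride - 1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.main(x) + self.skip(x))

block = DeconvBlock(64, 32)                      # one upsampling stage
print(block(torch.randn(1, 64, 16, 16)).shape)   # torch.Size([1, 32, 32, 32])
```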
§.§ Constructed Wigner functions by the DGM
After training our DGM, we apply it to predict the Wigner functions from the three marginals of the testing data. Figure <ref> shows several representative samples. The top row shows the GT Wigner functions used to derive the marginals input into the DGM. The middle row shows the DGM's predictions. A close comparison underscores the DGM's accuracy, with negligible differences between GT and predictions. The bottom row highlights the pixel-wise absolute difference between GT and predictions in grayscale. The DGM's predictions are notably precise for noisy coherent states, as seen in the first and fifth columns. However, regions with significant interference patterns in the noisy cat states show more pronounced differences. A further comprehensive quantitative assessment of the model's performance is presented in Supplementary Note 5.
§.§ Dynamical process nonclassicality characterized by the CHER
To further demonstrate the potential of our DGM, we apply our model to the problem of the CHER, which is even more challenging due to its GT-deficiency. In the theory of dynamical process nonclassicality, the mathematical toolbox of the HE <cit.> {(p_λ,H_λ)}_λ is leveraged to formulate the classical strategy simulating an incoherent dynamics. An HE is a collection of traceless Hermitian operators H_λ∈𝔰𝔲(n) associated with a probability p_λ of occurrence. It has been shown that if a system can merely establish classical correlations with its environment during their interaction, then the resulting reduced dynamics ℰ_t of the system admits an HE decomposition <cit.> written as
ℰ_t{ρ(0)}=∫ p_λU_λ(t)ρ(0)U^†_λ(t)dλ
for a certain HE, U_λ(t) = exp(-iH_λt), and the initial state ρ(0) of the system. Therefore, irrespective of the inaccessible environmental degrees of freedom in the vast majority of authentic experimental setups, we classify an incoherent dynamics ℰ_t as classical if it admits an HE-simulation (<ref>). An intuition for this criterion can be acquired from the observation that Eq. (<ref>) can be explained classically as a statistical mixture of random unitary channels. On the contrary, ℰ_t is nonclassical if it admits no HE-simulations. Such nonclassical dynamics arises from the consumption of nonclassical correlations, not reproducible by classical resources. The nonexistence of simulating HEs manifests itself in the necessity to resort to a nonclassical HE encapsulating a quasi-distribution ℘_λ with negative values. Consequently, the negativity in ℘_λ is an indicator of the nonclassicality of ℰ_t.
To further promote ℘_λ to a characteristic representation over the frequency domain for ℰ_t, referred to as CHER, the ensemble-averaged simulation (<ref>) has been recast into the FToG formalism <cit.>
ℰ^(L)_t=∫_𝒢℘_λ e^-iλLt dλ,
where ℰ^(L)_t is the linear map of ℰ_t expressed with respect to the adjoint representation {L} of the generators of 𝔲(n).
Noteworthily, this formalism is derived according to the Lie-algebraic structure underlying the HE, rather than the conventional Fourier transform. The FToG (<ref>) associates the quasi-distribution ℘_λ with the dynamical process ℰ_t, i.e., ℘_λ↦ℰ_t^(L), highlighting the role of ℘_λ as a CHER for ℰ_t. More specifically, exploring the HE-simulation for ℰ_t amounts to constructing the CHER ℘_λ, a joint quasi-distribution, by solving the FToG (<ref>). For a given pure dephasing dynamics ℰ_t of any dimension, the Lie-algebraic structure of the FToG leads to the marginals along the root vectors of the Lie algebra (Supplementary Note 7). Then the corresponding CHER ℘_λ can only be built from the marginals solved from the FToG, rendering the direct construction of the CHER intractable.
To compensate for this deficiency, we apply the DGM to predict the CHER by processing a set of marginals solved from the FToG. To facilitate an intuitive result presentation, here we specifically consider a two-dimensional problem over the x_1-x_13 plane, as well as the u=(x_1+x_13)/√(2) coordinate describing the correlation between them. We stress that our approach can be generalized to higher dimensions by appropriately including more marginals and correlations.
§.§ Preparation of synthetic training data
Unlike the situation with the Wigner function, the challenge of the CHER stems from its GT-deficiency. Specifically, the absence of an explicit CHER solution from the FToG (<ref>) makes the generation of adequate training data problematic. To address this issue, we opt to generate synthetic training datasets. In these datasets, the joint quasi-distribution is crafted from a combination of one bivariate Gaussian with a pair of Gaussians with opposite peaks, simulating potential negativity in the CHER. This allows us to efficiently derive the marginals. Additionally, the synthetic datasets are informed by physical principles, making them adaptable. By tweaking relevant parameters, they can be optimized for sufficiently reflecting the target marginals of a specific quantum dynamics model (Methods and Supplementary Note 8). This optimization aids in enhancing the proficiency and precision of our DGM. Once the synthetic data are generated, they can be encoded into the raw data following the process illustrated in Fig. <ref>.
Furthermore, we set our sights on training the DGM to forecast two separate quantum dynamics models. To achieve this, we generate two unique synthetic training datasets, each optimized for one specific quantum dynamics model. This optimization process begins by examining the profile of the marginals solved from the FToG (<ref>) for the intended quantum models. Armed with this knowledge, we can fine-tune the parameters during the synthetic data generation. This ensures that our synthetic marginals align closely with the nuances of the quantum model to be solved (Supplementary Note 8).
§.§ Quantum pure dephasing dynamics
As a concrete paradigm, we consider an extended spin-boson model consisting of a non-interacting qubit pair coupled to a common boson bath with total Hamiltonian
H_T=∑_j=1,2ω_jσ̂_z,j/2+∑_𝐤ω_𝐤b̂_𝐤^†b̂_𝐤 +∑_j,𝐤σ̂_z,j⊗(g_𝐤b̂_𝐤^†+g_𝐤^∗b̂_𝐤).
The qubit pair undergoes a pure dephasing dynamics characterized by four dephasing factors, and the corresponding CHER ℘(x_1,x_6,x_13)=℘_1,13(x_1,x_13)℘_6(x_6) is governed by the FToG (<ref>), explicitly written as
{[ ϕ_1(t)=exp[iϑ(t)-Φ(t)]=∫_ℝ℘_1(x_1)exp[-ix_1t]dx_1; ϕ_9(t)=exp[-4Φ(t)]=∫_ℝ^2℘_1,13(x_1,x_13)exp[-ix_1t]exp[-ix_13t]dx_1dx_13; ϕ_13(t)=exp[-iϑ(t)-Φ(t)]=∫_ℝ℘_13(x_13)exp[-ix_13t]dx_13; ϕ_6(t)=1=∫_ℝ℘_6(x_6)exp[-ix_6t]dx_6 ],
with ϑ(t)=4∫_0^∞[𝒥(ω)/ω^2](ω t-sinω t)dω, Φ(t)=4∫_0^∞[𝒥(ω)/ω^2]coth(ħω/2k_BT)(1-cosω t)dω, and 𝒥(ω)=∑_k⃗|g_k⃗|^2δ(ω-ω_k⃗) being the spectral density. Note that the first three lines determine the three marginals along the x_1-, u-, and x_13-axis, respectively, which will be fed into a well-trained model to forecast ℘_1,13(x_1,x_13), while the last line leads to an independent component ℘_6(x_6)=δ(x_6). In the following, we demonstrate two types of spectral density, i.e., the super-Ohmic family and the Drude-Lorentz spectral densities, at various temperatures. Further details are shown in Supplementary Note 9.
§.§ Verifying model performance
After training with our synthetic data, we proceed to verify the model's performance on this data, using a verification procedure consistent with the one used for the Wigner functions. We also compare both the joint quasi-distributions and the derived marginals of the GT and the model's prediction. This allows us to affirm the reliability of our well-trained models on both synthetic datasets. Further details are shown in Supplementary Note 10.
Nevertheless, when forecasting the CHER of a dynamics, the GT-deficiency makes the direct comparison of the joint quasi-distributions infeasible. Subsequently, we show the predicted CHERs for both quantum spectral densities and estimate the pixel-averaged L_1 norm, defined as ∑_pixel|GT-Prediction|/721, for marginals derived both from the FToG and the predicted CHERs. This methodology allows us to gauge the reliability of our model in forecasting the CHER, even with the obstacle of GT-deficiency.
§.§ Prediction of CHER and nonclassicality
Figure <ref> shows the prediction for the quantum dynamics models (<ref>) with a specific super-Ohmic spectral density (Ohmicity s=2) in the family. The first row shows the predicted CHER with increasing temperature T, which suppresses the central peak of the CHER. Each CHER possesses a shallow negative region indicating the nonclassicality of the dynamics, which is magnified in the second row for clarity. Crucially, the negative region is gradually attenuated with increasing temperature, reflecting the effects of increasing thermal fluctuation. This is in line with the intuition that thermal fluctuation is detrimental to the nonclassical essence of a quantum system. Due to the lack of the GT CHER constructed from the FToG (<ref>), to underscore the reliability of the predictions, we have also numerically derived the marginals (red solid curves) from the predicted CHERs and compared them with those of the GT (dashed black curves) solved from the quantum dynamics models (<ref>), as shown in the last three rows. The pixel-averaged L_1 norm is shown at the upper-right corner of each panel. Generically, the prediction fits the GT better at higher temperatures, resulting in lower errors. Furthermore, the errors are two orders of magnitude lower than the peaking values of the marginals. This affirms the reliability of the predicted CHERs.
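The GT marginals entering these comparisons derive from the dephasing factors of Eq. (<ref>); evaluating ϑ(t) and Φ(t) is a matter of one-dimensional quadrature. A minimal sketch (Python with scipy, units with ħ = k_B = 1); the super-Ohmic spectral density with exponential cutoff below is an assumed illustrative form, the actual family being specified in Supplementary Note 9:

```python
import numpy as np
from scipy.integrate import quad

eta, s, wc = 0.1, 2.0, 1.0   # coupling strength, Ohmicity, cutoff (assumed values)
kT = 0.5                     # temperature

def J(w):                    # assumed super-Ohmic form J(w) = eta w^s wc^(1-s) e^(-w/wc)
    return eta * w**s * wc**(1 - s) * np.exp(-w / wc)

def theta(t):                # theta(t) = 4 Int J(w)/w^2 (w t - sin w t) dw
    return 4 * quad(lambda w: J(w) / w**2 * (w * t - np.sin(w * t)), 0, np.inf)[0]

def Phi(t):                  # Phi(t) = 4 Int J(w)/w^2 coth(w/2kT) (1 - cos w t) dw
    return 4 * quad(lambda w: J(w) / w**2 / np.tanh(w / (2 * kT))
                    * (1 - np.cos(w * t)), 0, np.inf)[0]

for t in (0.5, 1.0, 2.0):    # for strongly oscillatory integrands, a finite cutoff helps
    phi1 = np.exp(1j * theta(t) - Phi(t))   # dephasing factor phi_1(t)
    phi9 = np.exp(-4 * Phi(t))              # phi_9(t)
    print(f"t={t}: |phi1|={abs(phi1):.4f}, phi9={phi9:.4f}")
```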
Figure <ref> shows the prediction for the Drude-Lorentz spectral density. We show the predicted CHER in the first row with increasing temperature T. A similar suppression of the central peak with increasing T can also be observed. The negative regions are magnified in the second row. The nonclassicality is destroyed at a high enough T, revealing a quantum-to-classical transition due to the thermal fluctuations. We also show the comparison between the marginals derived from the prediction (red solid curves) and the GT marginals (dashed black curves) in the last three rows, with the estimated pixel-averaged L_1 norm shown at the upper-right corner, which underpins the reliability of our model.
§ DISCUSSION
In this study, we've reached a notable milestone in quantum information science by creatively employing deep generative models (DGMs) to craft multivariate joint quasi-distribution functions. This is a critical step in detailing the nonclassical nature of quantum states and their dynamics. Our method skillfully sidesteps the constraints of traditional techniques for constructing quasi-distributions such as the Wigner function, by utilizing sophisticated neural networks like the ResNet model, along with a novel color mapping technique. This advanced approach has not only streamlined the analysis of experimental data but has also introduced a new computational framework in quantum physics. Our DGMs' accuracy in predicting quantum states, even amidst thermal noise, represents a substantial advancement in our comprehension and measurement of nonclassicality. This work signifies a major stride in merging quantum mechanics with artificial intelligence, setting the stage for revolutionary progress in quantum computing and information processing.
The deployment of DGMs in our research signifies a pivotal transformation in quantum physics, particularly in characterizing and analyzing quantum states. These models' superior computational abilities have surpassed conventional boundaries, allowing for a more intricate and precise depiction of quantum states. The capacity of the DGMs to analyze and decode complex datasets, such as those from quantum experiments, has unveiled new pathways for grasping quantum phenomena. By accurately forecasting the behavior of quantum systems, inclusive of the nuances introduced by thermal noise, these models offer an expanded comprehension of quantum nonclassicality. This breakthrough not only boosts the precision of quantum state analysis but also considerably lowers the experimental and computational efforts typically needed in this domain. The integration of DGMs signals a shift in both technological progress and the conceptual approach to quantum state analysis and representation, potentially sparking novel discoveries and applications in quantum computing, simulation, and data handling.
Despite the advances made in this research, there are intrinsic limitations to our method. A key challenge is the dependence on the quality and diversity of the training data for the DGMs. Inaccuracies or biases within this data could result in less reliable forecasts, especially in scenarios vastly differing from the training set. Moreover, the computational complexity of DGMs, when handling high-dimensional quantum systems, presents a significant hurdle, potentially affecting the scalability of our method.
Another concern is the interpretability of the model's output; like many deep learning models, DGMs often operate as 'black boxes', which can obscure the rationale behind their predictions.
To overcome these limitations, future studies should concentrate on refining data collection and preprocessing methods to ensure a thorough and impartial dataset for training the DGMs. Advancing data augmentation techniques may also contribute to developing a robust model that can generalize effectively to new, uncharted scenarios. Addressing the computational hurdles, optimizing the neural network structure, and employing more efficient training algorithms may offer solutions. Moreover, the adoption of explainable AI methods could enhance the DGMs' interpretability, enabling researchers to delve deeper into the model's decision-making process. Continued collaboration between quantum physicists and AI experts is essential to refine these models, ensuring they stay relevant and effective in the rapidly evolving quantum research landscape.
Looking forward, the scope for further advancements in this field is immense. Our research illustrates how the convergence of quantum physics and artificial intelligence can lead to innovative approaches in quantum computing, simulation, and information handling. Future research should aim to improve the training and scalability of DGMs to manage more intricate quantum systems, and possibly explore new quantum phenomena. Furthermore, improving the models' interpretability is crucial for deeper insight into their predictions and for building trust in their applications. As quantum technology advances, the incorporation of sophisticated computational models like DGMs will be vital in unlocking fresh capabilities and fostering innovation. Interdisciplinary collaboration, especially between quantum physicists and AI experts, will be central to fully leveraging these technologies, leading to landmark discoveries and applications in quantum science.
Our study signifies an important advance in quantum information science through the successful integration of DGMs into the analysis and characterization of quantum states and dynamics. This novel approach surmounts several limitations of conventional methods, offering a more nuanced and comprehensive understanding of quantum nonclassicality. The precision and computational efficiency brought by our DGMs not only improve the accuracy of quantum state characterization but also open new avenues for exploring intricate quantum phenomena. The effective implementation of these models underscores the vast potential of artificial intelligence in enhancing our understanding of quantum mechanics and its practical applications.
§ METHODS
§.§ Generation of Wigner function training data of noisy states
In our Wigner function training dataset, we specifically consider two types of parameterized states, i.e., the noisy coherent states and the noisy cat states. Since the coherent parameter α∈ℂ is complex, its real part Re[α] and imaginary part Im[α] are independently sampled in the range [-2,2], while the relative phase θ in the noisy cat state is sampled in the range [0,2π). The decoherence effect is characterized by μ∈[0.5,1], and the thermal noise model <cit.> is characterized by ν=(1-μ^2)n̅, with n̅∈[0,2] denoting the average excitation number of the thermal bath. We have generated 16,000 noisy coherent states and 18,000 noisy cat states in the Wigner function training dataset.
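The parameter sampling just described translates directly into code; a small sketch (Python; the function names are our own, only the quoted ranges come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy_coherent(n):
    """Sample parameter tuples (alpha, mu, nu) with the ranges quoted above."""
    alpha = rng.uniform(-2, 2, n) + 1j * rng.uniform(-2, 2, n)
    mu = rng.uniform(0.5, 1.0, n)           # decoherence parameter
    nbar = rng.uniform(0.0, 2.0, n)         # mean thermal excitation number
    nu = (1 - mu**2) * nbar                 # thermal-noise parameter
    return alpha, mu, nu

def sample_noisy_cat(n):
    alpha, mu, nu = sample_noisy_coherent(n)
    theta = rng.uniform(0.0, 2 * np.pi, n)  # relative phase of the cat state
    return alpha, theta, mu, nu

alpha, theta, mu, nu = sample_noisy_cat(18000)   # matches the dataset size above
```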
Since, for both states, the Wigner function 𝒲(x,p) and the three marginals (𝒲(x),𝒲(p),𝒲(u)) can be derived analytically, the Wigner function training dataset can be generated efficiently. The detailed analytic expressions are presented in Supplementary Note 1.
§.§ Color mapping
To deal with the joint quasi-distribution as an image, we design a color mapping that converts each datum into three monochromatic images. The color mapping consists of three functions
{[ f_R(ζ) = 2×1.148/[(e^-25(ζ-(ζ_0-0.12))+1)(e^5(ζ-(ζ_0+0.45))+1)]-1; f_G(ζ) = 2× e^-(ζ-ζ_0)^2/0.0392-1; f_B(ζ) = 2×1.148/[(e^25(ζ-(ζ_0+0.12))+1)(e^-5(ζ-(ζ_0-0.45))+1)]-1 ],
where ζ_0 controls the peaking positions of the RGB curves, and the height value of the generated data z↦ζ is rescaled such that ζ∈[0,1] for most of the training data. These parameters should be determined according to the statistics of the training data to be learned. Over-compressing will smear the details in the joint quasi-distributions, and under-compressing will truncate the peaking values of the quasi-distributions. In the case of the Wigner function, we set ζ_0=1/2 and ζ=(z+0.45)/0.9, while for the case of the CHER, we set ζ_0=1/5.5 and ζ=(z+0.01)/0.055. Further advantages of the color mapping for improving our model, as well as the raw data structure, are discussed in Supplementary Note 3.
§.§ Generation of synthetic training data
To train the DGM for solving the GT-deficient problem, we need to generate a sufficient amount of relevant training data. However, it is infeasible for us to achieve this by solving the FToG of a quantum dynamics. We circumvent this obstacle by generating synthetic training datasets. We combine a conventional bivariate Gaussian with a pair of Gaussians with opposite peaks according to
℘(x_1,x_13)=p(x_1,x_13)+A p'(x_1,x_13)-A p”(x_1,x_13),
where p(x_1,x_13), p'(x_1,x_13), and p”(x_1,x_13) are three randomly generated (conventional) Gaussians with individual statistical parameters, i.e., mean value, standard deviation, and correlation, and A is a random amplitude. It is easy to see that ℘(x_1,x_13) is normalized. Further details are shown in Supplementary Note 8.
§ DATA AVAILABILITY
The data that support the findings of this study are available upon reasonable request from the corresponding authors.
§ ACKNOWLEDGMENTS
This work is supported by the National Science and Technology Council, Taiwan, with Grants No. MOST 108-2112-M-006-020-MY2, MOST 109-2112-M-006-012, MOST 110-2112-M-006-012, MOST 111-2112-M-006-015-MY3, MOST 111-2112-M-A49-014, NSTC 112-2112-M-A49-019-MY3, NSTC 112-2123-M-006-001, MOST 112-2314-B-006-011, MOST 110-2224-E-007-003, and MOST 109-2222-E-006-005-MY2, partially by the Higher Education Sprout Project, Ministry of Education to the Headquarters of University Advancement at NCKU, and partially by the National Center for Theoretical Sciences, Taiwan.
§ AUTHOR CONTRIBUTIONS
H.-B.C. conceived and initiated the research. C.-H.L. developed the methodology with input from B.-Y.T. under the supervision of C.-H.Y. and H.-B.C. K.-L.L. optimized the model, collected the data, and analysed the results under the supervision of C.-H.Y. and H.-B.C. P.-Y.L. and Y.-N.C. provided the physical meanings of the results. H.-B.C. drafted the manuscript with input from P.-Y.L. and C.-H.Y. H.-B.C., Y.-N.C., and C.-H.Y. were responsible for the integration among different research units.
§ COMPETING INTERESTS
The authors declare no competing interests.
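The color mapping of Eq. (<ref>), with the parameter choices quoted in Methods, translates directly into code; a short sketch (Python; the random placeholder array stands in for an actual quasi-distribution):

```python
import numpy as np

def color_map(z, zeta0, scale, shift):
    """Map height values z of a quasi-distribution to three monochromatic
    channels via the (f_R, f_G, f_B) functions above; `scale` and `shift`
    implement the rescaling zeta = (z + shift)/scale."""
    zeta = (z + shift) / scale
    fR = 2 * 1.148 / ((np.exp(-25 * (zeta - (zeta0 - 0.12))) + 1)
                      * (np.exp(5 * (zeta - (zeta0 + 0.45))) + 1)) - 1
    fG = 2 * np.exp(-(zeta - zeta0) ** 2 / 0.0392) - 1
    fB = 2 * 1.148 / ((np.exp(25 * (zeta - (zeta0 + 0.12))) + 1)
                      * (np.exp(-5 * (zeta - (zeta0 - 0.45))) + 1)) - 1
    return fR, fG, fB

# Wigner-function settings from the text: zeta = (z + 0.45)/0.9 and zeta0 = 1/2;
# for the CHER one would instead use zeta0 = 1/5.5, scale = 0.055, shift = 0.01.
W = np.random.uniform(-0.3, 0.3, (256, 256))     # placeholder quasi-distribution
R, G, B = color_map(W, zeta0=0.5, scale=0.9, shift=0.45)
```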
http://arxiv.org/abs/2312.16055v1
{ "authors": [ "Hong-Bin Chen", "Cheng-Hua Liu", "Kuan-Lun Lai", "Bor-Yann Tseng", "Ping-Yuan Lo", "Yueh-Nan Chen", "Chi-Hua Yu" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226135607", "title": "Deep learning the nonclassicality within quasi-distribution representations from marginals" }
Visual Instruction Tuning towards General-Purpose Multimodal Model: A Survey Jiaxing Huang^†, Jingyi Zhang^†, Kai Jiang, Han Qiu and Shijian Lu^* All authors are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. † denotes equal contribution; * denotes corresponding author. =========================================================================================================================================================================================================================================================
This work continues the research done in Jordanova and Veleva (2023), where the history of the problem can be found. In order to obtain the structure distribution of the newly-defined Mixed Poisson process, here the operation "max" is replaced with "min". We start with the definition of the Min-U-Exp distribution. Then, we compute its numerical characteristics and investigate some of its properties. The joint distribution of the inter-arrival times (which are dependent) is the Multivariate Exp-Min-U-Exp distribution of the II^-nd kind. Its univariate and multivariate versions are described, and the formulae for their numerical characteristics are obtained. The distribution of the moments of arrival of different events is called Erlang-Min-U-Exp. Different properties of these distributions are obtained, and their numerical characteristics are computed. The Multivariate Ordered Mixed Poisson-Min-U-Exp distribution describes the joint distribution of the time-intersections of a Mixed Poisson process with Min-U-Exp mixing variable. The corresponding distribution of the additive increments (which are also dependent) is the Mixed Poisson-Min-U-Exp one. The considered relations between these distributions simplify their understanding.
§ DESCRIPTION OF THE MODEL AND PRELIMINARIES
Let 𝔲 be a Uniformly distributed random variable (r.v.) on the interval (0, a), briefly 𝔲∈ U(0, a). Here and further on, we denote by 𝔢 an Exponentially distributed r.v. with mean 1/λ, λ > 0, i.e. 𝔢∈ Exp(λ). Analogously to Jordanova and Veleva (2023) <cit.>, by replacing the maxima with minima, we consider the distribution of ξ := min(𝔲, 𝔢), and we call it the Min-U-Exp distribution.
Definition 1. We say that the r.v. ξ is Min-U-Exp distributed with parameters a > 0 and λ > 0, if it has a cumulative distribution function (c.d.f.)
F_ξ(x) = {[ 0, x ≤ 0; 1-e^-λ x + (x/a)e^-λ x, x ∈ (0, a]; 1, x > a ]}.
Briefly, we will denote this as ξ∈ Min-U-Exp(a; λ).
In the next proposition and theorem we investigate the main properties of the Min-U-Exp distribution. The proofs are analogous to the corresponding ones in Jordanova and Veleva (2023).
Proposition 1. a) ξ∈ Min-U-Exp(a; λ) if and only if the probability density function (p.d.f.) of ξ is
P_ξ(x) = {[ 0, x ∉(0,a); (e^-λ x/a)(λ a + 1 - xλ), x ∈ (0, a) ]}.
b) (Scaling property) If ξ∈ Min-U-Exp(a; λ) and k > 0 is a constant, then kξ∈ Min-U-Exp(ka; λ/k).
c) If ξ∈ Min-U-Exp(a; λ), the hazard rate function of this distribution is
h_ξ(x) = {[ 0, x ∉(0,a); λ + 1/(a-x), x ∈ (0, a] ]}.
Figure 1. Hazard function of ξ∈ Min-U-Exp(110; 0.04).
Note: The hazard function described in c), and plotted in Figure 1 for λ = 0.04 and a = 110, is increasing and lim_x ↑ a h_ξ(x) = ∞. It is well-known that, in the terms of survival theory, this means that if ξ describes the length of someone's life, then the risk of death is higher as his/her age increases, and when the age approaches a, this risk becomes infinite.
The fact that lim_x ↓ 0 h_ξ(x) = λ + 1/a means that there is also some risk of death at the very beginning of life. These conclusions allow us to say that, for different parameters, this distribution is very appropriate for modelling many lengths of lives.
Theorem 1. Let a > 0, λ > 0, 𝔲∈ U(0, a), 𝔢∈ Exp(λ), and let 𝔲 and 𝔢 be independent r.vs. Denote ξ := min(𝔲, 𝔢). Then,
a) ξ∈ Min-U-Exp(a; λ);
b) The mean and the moments of ξ are, correspondingly,
𝔼ξ = (1/(aλ^2))(aλ - 1 + e^-λ a), and 𝔼(ξ^k) = (k/λ^k)(γ(k, aλ) - γ(k+1, aλ)/(aλ)), k ∈ℕ.
c) The variance of ξ is
𝔻ξ = (1/λ^2)(2 + 2e^-λ a - (1/(a^2λ^2))(aλ+1-e^-λ a)^2).
d) The Laplace-Stieltjes transform of ξ is
𝔼(e^-ξ t) = λ/(λ + t) + (t/(a(λ + t)^2))(1 - e^-(λ + t) a), t ≥ 0.
Proof: a) Consider x ∈ℝ. The definition of ξ and the independence between 𝔲 and 𝔢 entail
F_ξ(x) = ℙ(min(𝔲, 𝔢) ≤ x) = ℙ(𝔲≤ x ∪𝔢≤ x) = 1 - ℙ(𝔲 > x ∩𝔢 > x) = 1 - ℙ(𝔲 > x)ℙ(𝔢 > x) = 1 - (1-ℙ(𝔲≤ x))(1- ℙ(𝔢≤ x)).
The definitions of the Exp(λ) and U(0, a) distributions via their c.d.fs. entail (<ref>) and complete the proof of a).
b) [Although this proof could be analogous to the corresponding one in Jordanova and Veleva (2023) <cit.>, here we present a different approach.] By taking expectations on both sides of the equality min(𝔲, 𝔢) + max(𝔲,𝔢) = 𝔲 + 𝔢, and by using the additivity of the expectation, we obtain
𝔼(min(𝔲, 𝔢)) + 𝔼(max(𝔲,𝔢)) = 𝔼𝔲 + 𝔼𝔢.
Now, Theorem 1, b) in Jordanova and Veleva (2023) <cit.> and the well-known formulae for the expectations of the exponential and uniform distributions lead us to the equality
𝔼(min(𝔲, 𝔢)) + a/2 + (1/(aλ^2))(1 - e^-λ a) = a/2 + 1/λ.
For all k ∈ℕ, the equality min(𝔲, 𝔢)^k + max(𝔲,𝔢)^k = 𝔲^k + 𝔢^k entails
𝔼(min(𝔲, 𝔢)^k) + 𝔼(max(𝔲,𝔢)^k) = 𝔼(𝔲^k) + 𝔼(𝔢^k).
Analogously to the proof for the expectations,
𝔼(min(𝔲, 𝔢)^k) + a^k/(k+1) + (k/(aλ^k+1))γ(k+1, aλ) + (k/λ^k)Γ(k, λ a) = a^k/(k+1) + k!/λ^k.
The rest follows by the well-known relation k! = kΓ(k) = kγ(k, aλ) + kΓ(k, aλ).
c) After some algebra, the relation 𝔻ξ = 𝔼(ξ^2) - (𝔼ξ)^2 and b) entail c).
d) For all t ≥ 0, the equality e^-t min(𝔲, 𝔢) + e^-t max(𝔲,𝔢) = e^-t𝔲 + e^-t𝔢 and the additivity of the mean entail
𝔼e^-t min(𝔲, 𝔢) + 𝔼e^-t max(𝔲,𝔢) = 𝔼e^-t𝔲 + 𝔼e^-t𝔢.
Now, Theorem 1, d) in Jordanova and Veleva (2023) <cit.> and the well-known formulae for the Laplace-Stieltjes transforms of the exponential and uniform distributions lead us to
𝔼(e^-tξ) = 𝔼(e^-tmin(𝔲, 𝔢)) = λ/(λ + t) + (t/(a(λ + t)^2))(1 - e^-(λ + t) a).
§ EXP-MIN-U-EXP AND ERLANG-MIN-U-EXP DISTRIBUTIONS
Definition 2. We say that the r.v. τ is Exp-Min-U-Exp distributed with parameters a > 0 and λ > 0, if it has a c.d.f.
F_τ(t) = {[ 0, t ≤ 0; t/(λ+t) - (t/(a(λ + t)^2))(1-e^-a(λ + t)), t > 0 ]}.
Briefly, we will denote this as τ∈ Exp-Min-U-Exp(a; λ).
The well-known relations between c.d.fs., p.d.fs. and the corresponding probability distribution entail the following result.
Proposition 2. For a > 0 and λ > 0, τ∈ Exp-Min-U-Exp(a; λ) if and only if the p.d.f. is
P_τ(t) = {[ 0, t ≤ 0; λ/(λ + t)^2 + ((t-λ)/(a(λ + t)^3))(1-e^-a(λ + t)) - (t/(λ + t)^2)e^-a(λ + t), t > 0 ]}.
Analogously to the corresponding result in <cit.>, we obtain that ∫_0^∞ P_τ(t) dt = 1. The latter means that this distribution is proper.
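The closed-form expressions in Theorem 1 and Definition 2 are easy to validate by simulation; a brief sketch (Python with numpy; the parameter values are arbitrary):

```python
import numpy as np

a, lam = 110.0, 0.04
rng = np.random.default_rng(1)
n = 10**6

# xi = min(u, e) with u ~ U(0, a) and e ~ Exp(lam) independent
xi = np.minimum(rng.uniform(0, a, n), rng.exponential(1 / lam, n))

mean_theory = (a * lam - 1 + np.exp(-lam * a)) / (a * lam**2)
var_theory = (2 + 2 * np.exp(-lam * a)
              - (a * lam + 1 - np.exp(-lam * a))**2 / (a * lam)**2) / lam**2
print(f"mean: MC {xi.mean():.4f} vs theory {mean_theory:.4f}")
print(f"var : MC {xi.var():.4f} vs theory {var_theory:.4f}")

# empirical c.d.f. against F_xi of Definition 1 at a few points
for x in (5.0, 25.0, 80.0):
    F = 1 - np.exp(-lam * x) + (x / a) * np.exp(-lam * x)
    print(f"F({x}): MC {np.mean(xi <= x):.4f} vs theory {F:.4f}")
```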
Definition 3. We say that the rv. (τ, ξ) has a bivariate Exp-Min-U-Exp distribution of the I^-st kind with parameters a > 0 and λ > 0, if it has a joint p.d.f.
P_τ, ξ(t,x) = {[ (1/a)xe^-(λ + t)x(1+λ a- xλ), t > 0 ∩ x ∈ (0;a); 0, otherwise ]}.
Briefly, we will denote this as (τ, ξ) ∈ Exp-Min-U-Exp-I^-st(a, λ).
The proofs of the results in the next theorem are analogous to the corresponding ones in <cit.>. Here we are going to present only some different approaches for some of them.
Theorem 2. For a > 0 and λ > 0, if ξ∈ Min-U-Exp(a; λ) and, for x > 0, (τ|ξ=x) ∈ Exp(x), then:
a) τ∈ Exp-Min-U-Exp(a, λ);
b) τ d= η/ξ, where η∈ Exp(1), and ξ and η are independent.
c) For p ∈ (-1, 1),
𝔼(τ^p) = (1/a)Γ(p+1)λ^p-1((λ a+1)γ(1-p, aλ) - γ(2-p, aλ)),
and 𝔼(τ^p) = ∞, otherwise.
d) The joint distribution of τ and ξ is (τ, ξ) ∈ Exp-Min-U-Exp-I^-st(a, λ) and (τ, ξ) d= (η/ξ, ξ), where η∈ Exp(1), and ξ and η are independent.
e) For all t > 0, P_ξ(x|τ=t) = 0 for x ∉(0;a), and
P_ξ(x|τ=t) = x(λ + t)^3e^-(λ + t)x(1+λ a - λ x) / [aλ(λ + t) + (t-λ)(1-e^-a(λ +t)) - a t (λ+t)e^-a(λ + t)], x ∈ (0, a).
f) The mean square regression is 𝔼(τ|ξ=x) = 1/x, x > 0.
g) The mean square regression function is
𝔼(ξ|τ=t) = [2(t-2λ) + 2aλ(t+λ) - e^-a(λ +t)(a^2t(λ+t)^2+2a(t^2-λ^2)-2(t-2λ))] / [aλ(λ+t)^2+(t^2-λ^2)(1-e^-a(λ + t))-at(λ+t)^2e^-a(λ + t)], t > 0.
Proof: a) For t > 0, the integral form of the Total probability formula and Theorem 1, d) entail
ℙ(τ > t) = ∫_0^∞ℙ(τ > t|ξ=x)P_ξ(x)dx = ∫_0^∞ e^-xtP_ξ(x)dx = 𝔼(e^-ξ t) = λ/(λ + t) + (t/(a(λ + t)^2))(1 - e^-(λ + t) a).
Thus, the relation F_τ(t) = 1 - ℙ(τ > t) leads us to (<ref>), and τ∈ Exp-Min-U-Exp(a, λ).
Definition 4. We say that a rv. (τ_1, τ_2, ..., τ_k) has a Multivariate Exp-Min-U-Exp distribution of the II^-nd kind with parameters a > 0 and λ > 0, if it has a joint p.d.f.
P_τ_1, τ_2, …, τ_k(t_1, t_2, …, t_k) = γ(k+1, as)(aλ s + t_1 + … + t_k - λ k)/(as^k+2) + (λ a^k/s)e^-as, where s = λ + t_1 + … + t_k,
for t_1 > 0, t_2 > 0, …, t_k > 0, and P_τ_1, τ_2, …, τ_k(t_1, t_2, …, t_k) = 0 otherwise. Briefly, we will denote this as (τ_1, τ_2, …, τ_k) ∈ Exp-Min-U-Exp-II(a, λ).
Definition 5. We say that the r.v. T_n is Erlang-Min-U-Exp distributed with parameters n ∈ℕ, a > 0, and λ > 0, if it has a p.d.f.
P_T_n(t) = (t^n-1γ(n+1,a(λ + t))/(a(n-1)!(λ+t)^n+1))(λ a + (t - λ n)/(λ+t)) + (λ a^n t^n-1/((n-1)!(λ + t)))e^-a(λ + t),
when t > 0, and P_T_n(t) = 0 otherwise. Briefly, we will denote this as T_n ∈ Erlang-Min-U-Exp(n; a, λ).
We will skip the proofs of the results in the next theorem as far as they are analogous to the corresponding ones in <cit.>.
Theorem 3. For a > 0 and λ > 0, if ξ∈ Min-U-Exp(a; λ) and, for x > 0, (τ_1, τ_2, ..., τ_k|ξ=x) are independent identically Exp(x) distributed r.vs., T_n := τ_1 + … + τ_n, n ∈ℕ, then:
a) (τ_1, τ_2, …, τ_k) ∈ Exp-Min-U-Exp-II(a, λ).
b) For all i = 1, 2, ..., k, τ_i ∈ Exp-Min-U-Exp(a, λ).
c) (τ_1, τ_2, …, τ_k) d= (η_1/ξ, η_2/ξ, …, η_k/ξ), where η_1, η_2, …, η_k are independent identically distributed (i.i.d.) Exp(1) r.vs., independent of ξ.
d) T_n ∈ Erlang-Min-U-Exp(n; a, λ); T_n d= (η_1 + η_2 + … + η_n)/ξ, where η_1, η_2, …, η_n are i.i.d. Exp(1) r.vs., independent of ξ; T_n d= θ_n/ξ, where θ_n ∈ Gamma(n, 1) is independent of ξ.
e) For p ∈ (-n, 1),
𝔼(T_n^p) = (Γ(p+n)λ^p/(n-1)!){(1 + p/(aλ))γ(1-p, aλ) + e^-λ a/(aλ)^p},
and 𝔼(T_n^p) = ∞ for p ∉(-n, 1).
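Theorem 3 d) and e) can likewise be checked by simulating T_n = θ_n/ξ; a small sketch (Python with scipy; the parameter values are arbitrary):

```python
import math
import numpy as np
from scipy.special import gamma as Gamma, gammainc

a, lam, n, p = 2.0, 1.5, 3, 0.5          # p must lie in (-n, 1)
rng = np.random.default_rng(2)
N = 10**6

xi = np.minimum(rng.uniform(0, a, N), rng.exponential(1 / lam, N))
Tn = rng.gamma(n, 1.0, N) / xi           # T_n = theta_n / xi, theta_n ~ Gamma(n, 1)

low_gamma = lambda s, x: Gamma(s) * gammainc(s, x)   # lower incomplete gamma
moment = (Gamma(p + n) * lam**p / math.factorial(n - 1)
          * ((1 + p / (a * lam)) * low_gamma(1 - p, lam * a)
             + np.exp(-lam * a) / (a * lam)**p))
print(f"E[T_n^p]: Monte Carlo {np.mean(Tn**p):.4f} vs formula {moment:.4f}")
```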
§ THE MIXED POISSON-MIN-U-EXP PROCESS
Definition 6. A r.v. θ is Mixed Poisson-Min-U-Exp distributed with parameters a > 0 and λ > 0 if its probability mass function (p.m.f.) is
ℙ(θ = n) = (1/n!){(γ(n+1,a(λ+1))/(λ+1)^n+2)(λ(λ+1) + (1-nλ)/a) + (λ a^n/(λ+1))e^-a(λ+1)}, n = 0, 1, ….
Briefly, θ∈ MPMin-U-Exp(a, λ).
Figure 2. P.m.f. of θ∈ MPMin-U-Exp(1, λ).
Figure 3. P.m.f. of θ∈ MPMin-U-Exp(a, 1).
Definition 7. Let μ(t): [0, ∞) → [0, ∞) be a nonnegative, strictly increasing and continuous function with μ(0) = 0, let ξ∈ Min-U-Exp(a; λ), and let N_1 be a Homogeneous Poisson process (HPP) with intensity 1, independent of ξ. We call the random process
N := {N(t), t≥ 0} = {N_1(ξμ(t)), t ≥ 0}
a Mixed Poisson process with Min-U-Exp mixing variable, or an MPMin-U-Exp process. Briefly, N ∈ MPMin-U-Exp(a, λ; μ(t)).
Definition 8. Let n ∈ℕ. We say that a random vector (N_1, N_2, …, N_n) is Ordered Poisson-Min-U-Exp distributed with parameters a > 0, λ > 0, and 0 < μ_1 < μ_2 < ... < μ_n if, for all integers 0 ≤ k_1 ≤ k_2 ≤…≤ k_n,
ℙ(N_1 = k_1, N_2 = k_2, …, N_n = k_n) = [μ_1^k_1(μ_2 - μ_1)^k_2 - k_1…(μ_n - μ_n-1)^k_n - k_n-1 / (k_1!(k_2 - k_1)!…(k_n - k_n-1)!)] ×{(γ(k_n+1,a(λ+μ_n))/(a(λ+μ_n)^k_n+2))(λ a(λ+μ_n) + μ_n-λ k_n) + (λ a^k_n/(λ+μ_n))e^-a(λ+μ_n)},
and ℙ(N_1 = k_1, N_2 = k_2, …, N_n = k_n) = 0 otherwise. Briefly, (N_1, N_2, …, N_n) ∈ O_PMinUE(a, λ; μ_1, μ_2, ..., μ_n).
Definition 9. Let n ∈ℕ. We say that a random vector (N_1, N_2, …, N_n) is Mixed Poisson-Min-U-Exp distributed with parameters a > 0, λ > 0, and 0 < μ_1 < μ_2 < ... < μ_n if, for all m_1, m_2, …, m_n ∈{0, 1, …},
ℙ(N_1 = m_1, N_2 = m_2, …, N_n = m_n) = [μ_1^m_1(μ_2 - μ_1)^m_2…(μ_n - μ_n-1)^m_n / (m_1!m_2!… m_n!)] ×{(γ(m_1 + … + m_n + 1,a(λ+μ_n))/(a(λ + μ_n)^m_1 + … + m_n + 2))(λ a(λ+μ_n) + μ_n-λ(m_1 + … + m_n)) + (λ a^m_1 + … + m_n/(λ + μ_n))e^-a(λ + μ_n)},
and ℙ(N_1 = m_1, N_2 = m_2, …, N_n = m_n) = 0 otherwise. Briefly, (N_1, N_2, …, N_n) ∈ M_PMinUE(a, λ; μ_1, μ_2, ..., μ_n).
In the next two propositions we present two relations between the distributions introduced in Definition 8 and Definition 9. Their proofs, together with the proof of Theorem 4, will be skipped because they are analogous to the corresponding ones in Jordanova et al. <cit.> and Jordanova and Stehlik <cit.>. The algorithms are based on the above results and the general formulae for any Mixed Poisson process, which could be found, for example, in Grandell <cit.> or Karlis and Xekalaki <cit.>.
Proposition 3. If (N_1, N_2, …, N_n) ∈ O_PMinUE(a, λ; μ_1, μ_2, ..., μ_n), then (N_1, N_2 - N_1, …, N_n - N_n-1) ∈ M_PMinUE(a, λ; μ_1, μ_2, ..., μ_n).
Proposition 4. If (N_1, N_2, …, N_n) ∈ M_PMinUE(a, λ; μ_1, μ_2, ..., μ_n), then (N_1, N_1 + N_2, …, N_1 + N_2 + … + N_n) ∈ O_PMinUE(a, λ; μ_1, μ_2, ..., μ_n).
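The p.m.f. of Definition 6 and the representation N(t) = N_1(ξμ(t)) of Definition 7 can be cross-checked numerically; a sketch (Python with scipy), taking μ(t) = t and t = 1 so that N(1) ∈ MPMin-U-Exp(a, λ):

```python
import numpy as np
from scipy.special import gammainc, gammaln

def pmf_theta(n, a, lam):
    """P(theta = n) from Definition 6; gammainc(s, x) = gamma(s, x)/Gamma(s) is
    the regularized lower incomplete gamma, which absorbs the 1/n! factor."""
    L1 = lam + 1.0
    t1 = gammainc(n + 1, a * L1) * (lam * L1 + (1 - n * lam) / a) / L1**(n + 2)
    t2 = lam / L1 * np.exp(n * np.log(a) - a * L1 - gammaln(n + 1))
    return t1 + t2

a, lam = 1.2, 0.7
ns = np.arange(0, 200)
p = pmf_theta(ns, a, lam)
print("sum of p.m.f.:", p.sum())                 # ~ 1

# cross-check by simulation: theta = N_1(xi) is Poisson with random mean xi
rng = np.random.default_rng(3)
xi = np.minimum(rng.uniform(0, a, 10**6), rng.exponential(1 / lam, 10**6))
sim = rng.poisson(xi)
print("P(theta=0):", p[0], "vs MC", np.mean(sim == 0))
```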
Theorem 4. Consider a > 0, λ > 0, and a nonnegative, strictly increasing and continuous deterministic function μ(t): [0, ∞) → [0, ∞). Suppose that {N(t), t ≥ 0}∈ MPMin-U-Exp(a, λ; μ(t)).
a) For all t > 0, N(t) ∈ MPMin-U-Exp(aμ(t), λ/μ(t)).
b) These processes are over-dispersed:
𝔼N(t) = (μ(t)/λ)(1 - 1/(aλ) + e^-λ a/(aλ)),
𝔻N(t) = (μ(t)/λ)(1 - 1/(aλ) + e^-λ a/(aλ)) + (μ^2(t)/λ^2){2+2e^-λ a-(1+ 1/(aλ) - e^-λ a/(aλ))^2}.
c) For all t ≥ 0, the probability generating function (p.g.f.) of the time intersections is
𝔼(z^N(t)) = λ/(λ + μ(t)(1-z)) + (μ(t)(1-z)/(a(λ + μ(t)(1-z))^2))(1 - e^-a(λ + μ(t)(1-z))), |z| < 1.
d) For t > 0 and n = 0, 1, …, P_ξ(x|N(t) = n) = 0 when x ≤ 0 or x > a, and, when x ∈ (0, a],
P_ξ(x|N(t) = n) = x^ne^-x(λ + μ(t))(λ a + 1 - λ x) / [(γ(n+1,a(λ + μ(t)))/(λ + μ(t))^n+2)(a λ(λ+μ(t))+μ(t) - nλ) + (λ a^n+1/(λ+μ(t)))e^-a(λ+μ(t))].
e) For t > 0 and n = 0, 1, …, the mean square regression is
𝔼(ξ|N(t) = n) = [(γ(n+2,a(λ + μ(t)))/(λ + μ(t))^n+3)(a λ(λ+μ(t))+μ(t) - (n+1)λ) + (λ a^n+2/(λ+μ(t)))e^-a(λ+μ(t))] / [(γ(n+1,a(λ + μ(t)))/(λ + μ(t))^n+2)(a λ(λ+μ(t))+μ(t) - nλ) + (λ a^n+1/(λ+μ(t)))e^-a(λ+μ(t))].
f) For all k = 1, 2, …,
𝔼[N(t)(N(t)-1)⋯(N(t) - k + 1)] = (k(μ(t))^k/λ^k)(γ(k, aλ) - γ(k+1, aλ)/(aλ)).
g) For all n ∈ℕ and 0 ≤ t_1 ≤ t_2 ≤…≤ t_n,
(N(t_1), N(t_2), …, N(t_n)) ∈ O_PMinUE(a, λ; μ(t_1), μ(t_2), ..., μ(t_n)).
h) For all n ∈ℕ and 0 ≤ t_1 ≤ t_2 ≤…≤ t_n,
(N(t_1), N(t_2) - N(t_1), …, N(t_n) - N(t_n-1)) ∈ M_PMinUE(a, λ; μ(t_1), μ(t_2), ..., μ(t_n)).
i) Denote by τ_1, τ_2, … the inter-occurrence times of the counting process N. Then, τ_1, τ_2, … are dependent and Exp-Min-U-Exp(a; λ) distributed.
j) For n ∈ℕ, if T_n is the moment of occurrence of the n-th event of the counting process N, then T_n ∈ Erlang-Min-U-Exp(n; a, λ).
k) For any 0 < s < t, the r.v. N(s) given N(t) = n is Binomially distributed. More precisely, (N(s)|N(t) = n) ∈ Bi(n, μ(s)/μ(t)).
§ CONCLUSIONS
This work introduces a new class of generalized Mixed Poisson processes. First, a new structure distribution is defined. It is called the Min-U-Exp distribution. It is very similar to the exponential one and coincides with the distribution of the minimum of two independent random variables, a Uniform and an Exponential one. The inter-arrival times of these processes are described by the newly-introduced Exp-Min-U-Exp distribution. The distribution of the moments of arrival of the corresponding events is called Erlang-Min-U-Exp, and its properties are thoroughly investigated. Along with our work, some new multivariate distributions are defined. The Ordered Mixed Poisson-Min-U-Exp distribution describes, for example, the joint distribution of the time-intersections of a Mixed Poisson process with Min-U-Exp mixing variable. The corresponding distribution of the additive increments (which are dependent) is the Mixed Poisson-Min-U-Exp one. The joint distribution of the inter-arrival times (which are also dependent) is the Multivariate Exp-Min-U-Exp distribution of the II^-nd kind. Different properties of these distributions are obtained, and their numerical characteristics are computed. The relations between the considered random elements are shown.
§ ACKNOWLEDGMENTS
The work was supported by the Scientific Research Fund of Konstantin Preslavsky University of Shumen, Bulgaria, under Grant Number RD-08-.../......2024 and project Number 2024 - FNSE – ...., financed by the Scientific Research Fund of Ruse University.
JV2023 Jordanova, P., Veleva, E., Mixed Poisson process with Max-U-Exp mixing variable, AIP Conference Proceedings, Accepted, <https://arxiv.org/pdf/2307.09798.pdf>.
SMladen Jordanova, P., Savov, M., Tchorbadjieff, A., Stehlik, M., Mixed Poisson process with Stacy mixing variable, Stochastic Analysis and Applications, Accepted, <https://doi.org/10.1080/07362994.2023.2242471>.
LMJ Jordanova, P., Stehlik, M., Mixed Poisson process with Pareto mixing variable and its risk applications, Lithuanian Mathematical Journal, vol. 56(2), pp. 189-206 (2016).
GrandelMixed Grandell, J., Mixed Poisson Processes, CRC Press, vol. 77, 1997.
KarlisandXekalaki Karlis, D., Xekalaki, E., Mixed Poisson distributions, International Statistical Review, vol. 73(1), pp. 35-58 (2005).
http://arxiv.org/abs/2312.16595v1
{ "authors": [ "Pavlina K. Jordanova", "Evelina Veleva", "Milan Stehlik" ], "categories": [ "math.PR", "60G10" ], "primary_category": "math.PR", "published": "20231227144345", "title": "Mixed Poisson process with Min-U-Exp mixing variable" }
[email protected] Atominstitut, Technische Universität Wien, Stadionallee 2, 1020 Vienna, Austria K. B. and O. L. contributed equally. [email protected] Department of Optics, Palacký University, 17. listopadu 12, 771 46 Olomouc, Czech Republic Atominstitut, Technische Universität Wien, Stadionallee 2, 1020 Vienna, Austria K. B. and O. L. contributed equally. Department of Optics, Palacký University, 17. listopadu 12, 771 46 Olomouc, Czech Republic [email protected] Atominstitut, Technische Universität Wien, Stadionallee 2, 1020 Vienna, Austria [email protected] Atominstitut, Technische Universität Wien, Stadionallee 2, 1020 Vienna, Austria
Activation of genuine multipartite entanglement (GME) is a phenomenon whereby multiple copies of biseparable but fully inseparable states can be GME. This was shown to be generically possible in finite dimensions. Here, we extend this analysis to infinite dimensions. We provide examples of GME-activatable non-Gaussian states. For Gaussian states we employ a necessary biseparability criterion for the covariance matrix (CM) and show that it cannot detect GME activation. We further identify fully inseparable Gaussian states that satisfy the criterion but show that multiple and, in some cases, even single copies are GME. Thus, we show that the CM biseparability criterion is not sufficient even for Gaussian states.
Multi-copy activation of genuine multipartite entanglement in continuous-variable systems Nicolai Friis January 14, 2024 ============================================================================================
Introduction. Entanglement stands as a key phenomenon in quantum physics, playing an essential role in the advancement of contemporary quantum technologies. Initially, attention was largely centred on two-party cases, but multipartite entanglement in larger systems is now highly significant in modern quantum theory, both practically and fundamentally. In experiments distributing quantum states among various parties, often multiple identical copies of these states are shared. Therefore, understanding entanglement properties in these multi-copy situations is essential, not just theoretically but also for practical implementations. One known feature in the two-party case is that bipartite separability is tensor stable: bipartite entanglement cannot be established between two parties by sharing multiple copies of separable states. This trivially extends to partition-separable states of more than two parties, i.e., states that are separable with respect to a fixed partitioning of the parties into two groups. However, the same is not true for more complex states of multiple parties. States that are mixtures of partition-separable states for different partitions are called biseparable, but they do not have to be partition-separable themselves. For such biseparable but not partition-separable states, the initially perhaps counter-intuitive phenomenon of activation of genuine multipartite entanglement (GME) can occur. That is, even though a single copy of a state might be biseparable, several identical copies of such a state can feature GME concerning the local parties sharing these copies. This is what we call multi-copy activation of GME. First remarked upon in Ref. <cit.>, for two copies of a specific four-qubit state, GME activation was investigated more comprehensively in Ref.
<cit.>. There, upper bounds were provided for the number of copies maximally needed to activate GME for a family of N-qubit states decomposable as mixtures of Greenberger-Horne-Zeilinger (GHZ) states and white noise. Moreover, it was also shown in Ref. <cit.> that GME activation can even occur for biseparable states with positive partial transpose across all given cuts, i.e., states with no distillable entanglement. These results were generalized in Ref. <cit.> for all finite-dimensional multipartite states, where it was proven that all states that are biseparable but not partition separable are GME-activatable, even if the activation of GME in general requires an unbounded number of copies.
Here we investigate GME activation in the infinite-dimensional regime, more specifically in continuous-variable (CV) systems. Our findings can be organized into two main categories, concerning non-Gaussian and Gaussian states, respectively. Within the first category, non-Gaussian states, we note that there are multipartite states that have a non-zero overlap with only finitely many Fock states. The density operators for these states as well as all their marginal states can be fully represented on finite-dimensional Hilbert spaces, and as such the results for GME activatability from <cit.> apply directly. However, not all non-Gaussian states are of this form. As a first result, we demonstrate GME activation for a family of biseparable three-mode non-Gaussian states that have non-zero overlap with infinitely many Fock states. These states are convex combinations of product states of two-mode squeezed vacua with a Fock state of the third mode, and are thus biseparable by definition. To detect GME we use a procedure that entails locally mapping CV modes to qubits <cit.> and investigating entanglement properties in the resulting three-qubit subspace. Together with the k-separability criterion presented in Ref. <cit.>, this technique reveals that two copies of the considered state are detected as GME for a continuous range of squeezing parameters. Hence, we confirm by example that GME activation is in principle possible for non-Gaussian states in infinite dimensions.
As a second and main focus of this paper, we then turn to the question of GME activation for Gaussian states. There the challenge lies in determining if a given state is biseparable but not partition separable, to begin with. Since Gaussian states are fully described by their first and second statistical moments, and because the first moments can be freely adjusted by local unitaries (displacements), entanglement properties of Gaussian states are fully captured by their second moments, which in turn can be organized into a covariance matrix (CM). A Gaussian state with CM γ is fully separable with respect to a partition into N parties if and only if there exist CMs γ^(1), ⋯, γ^(N) corresponding to the N subsystems satisfying γ≥γ^(1)⊕⋯⊕γ^(N) <cit.>. A generalization for biseparable states (BS) can be found in Ref. <cit.>: for all biseparable states with CM γ_BS there exist CMs γ_M(i) that are block-diagonal with respect to the partition M(i), along with probability weights p_i with ∑_i p_i=1 and 0≤ p_i ≤ 1, such that γ_BS-∑_i p_iγ_M(i)≥0.
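Numerically, this inequality defines a semidefinite feasibility problem once one substitutes K_i = p_iγ_M(i), which makes all constraints linear in the unknowns (p_i, K_i): the CM condition γ_M(i) + iΩ ≥ 0 becomes K_i + i p_iΩ ≥ 0. The following is a minimal three-mode sketch, assuming the cvxpy library with its bundled SDP solver; the partition list and the example CM are purely illustrative:

```python
import numpy as np
import cvxpy as cp

Omega = np.kron(np.eye(3), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # 3-mode symplectic form

def cm_biseparable(gamma):
    parts = [([0], [1, 2]), ([1], [0, 2]), ([2], [0, 1])]   # A|BC, B|AC, C|AB
    p = cp.Variable(3, nonneg=True)
    Ks = [cp.Variable((6, 6), symmetric=True) for _ in parts]
    cons = [cp.sum(p) == 1, gamma - sum(Ks) >> 0]
    for i, (g1, g2) in enumerate(parts):
        for j in g1:                      # block-diagonality w.r.t. the bipartition M(i)
            for k in g2:
                cons.append(Ks[i][2*j:2*j+2, 2*k:2*k+2] == 0)
        # K_i + i p_i Omega >= 0, imposed via the real embedding of this Hermitian matrix
        M = cp.Variable((12, 12), PSD=True)
        cons.append(M == cp.bmat([[Ks[i], -p[i] * Omega],
                                  [p[i] * Omega, Ks[i]]]))
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

print(cm_biseparable(2 * np.eye(6)))   # a product thermal state: feasible (True)
```

The same pattern extends to more modes and copies by enlarging Ω and the partition list, at the cost of a rapidly growing SDP.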
In the context of this inequality, which we dub the CM biseparability criterion, we present three main results. First, we show that the CM biseparability criterion is insufficient for detecting the potential activation of GME for any number of copies. That is, we prove that if the CM of the initial single-copy state satisfies the criterion, then so do the CMs of any number of identical copies of the state. If, like its counterparts for bipartite or full separability, the CM biseparability criterion were indeed necessary and sufficient for the biseparability of Gaussian states, our first result would imply that GME activation is impossible for Gaussian states. However, as a second main result, we show that there exist Gaussian states that satisfy the CM biseparability criterion but which are in fact GME. As a corollary of this finding, we then show that this leads to the perhaps surprising conclusion that there exist Gaussian states that are GME even though the first and second statistical moments that fully define them exactly match those of a biseparable but non-Gaussian state.

To present these results in more detail, we first continue with a more technical exposition on the structure of multipartite entanglement and the description of CV systems, before returning to the CM biseparability criterion, along with the proofs and discussion of our main results.

Bipartite entanglement. For two quantum systems with Hilbert spaces ℋ_A and ℋ_B, respectively, a global pure state |Ψ⟩_{AB} ∈ ℋ_A ⊗ ℋ_B is called separable if and only if it can be written as a tensor product |Ψ⟩_{AB} = |ψ⟩_A ⊗ |ϕ⟩_B. For mixed states represented by density operators ρ = ∑_j p_j |φ_j⟩⟨φ_j|, where the p_j are probability weights fulfilling ∑_j p_j = 1 and |φ_j⟩ ∈ ℋ_A ⊗ ℋ_B, a global state ρ_{AB} is separable if and only if it can be written as a convex combination of tensor products of density operators of the two subsystems, ρ_{AB} = ∑_i p_i ρ_i^{A} ⊗ ρ_i^{B}. States that are not separable are called entangled.

Multipartite entanglement. In multipartite scenarios with N parties and a Hilbert space ℋ_N = ⊗_{i=1}^{N} ℋ_i one may investigate separability with respect to different partitions M(i) of the set [N] := {1, ⋯, N} into two or more disjoint subsets whose union is [N], labelled by i. We then use the following terminology: A partition into k subsets is called a k-partition, and a pure state in ℋ_N is called k-separable if it can be written as a tensor product of k pure states for at least one k-partition. A mixed state is called k-separable if it can be decomposed as a convex mixture of pure states that are (at least) k-separable. Note that the different terms of the decomposition may be k-separable with respect to different k-partitions. A state of N parties is called fully separable if it is N-separable, and it is called biseparable if it is k-separable for k = 2. Any state that is separable with respect to some fixed partition is called partition separable, whereas a state that is not separable with respect to any fixed partition is called fully inseparable. The sets 𝒮_k formed by all states that are (at least) k-separable form a hierarchy of nested convex sets, 𝒮_N ⊆ … ⊆ 𝒮_k ⊆ … ⊆ 𝒮_3 ⊆ 𝒮_2. Here it is crucial to note that the set 𝒮_2 of biseparable states is the convex hull of all partition-separable states.
As such, 𝒮_2 contains some states that are fully inseparable and thus multipartite entangled. Yet, only states that are not (at least) biseparable, and which are hence outside of the set 𝒮_2, are called genuinely N-partite entangled or genuinely multipartite entangled (GME). A schematic illustration of the state space of three parties is given in Fig. <ref>; for reviews see, e.g., <cit.> or <cit.>.

In this paper we pay special attention to the states that belong to the set 𝒮_2 but which do not belong to any set of partition-separable states; we call these fully inseparable biseparable states. These are the states that are potentially GME activatable and, indeed, it was shown in Ref. <cit.> that all such states in finite-dimensional Hilbert spaces are GME activatable. That is, for any fully inseparable biseparable state ρ_{ABC…} in a finite-dimensional Hilbert space there exists a k ≥ 2 such that ρ_{ABC…}^{⊗k} = ρ_{A_1B_1C_1…} ⊗ ⋯ ⊗ ρ_{A_kB_kC_k…} is GME with respect to the partition A_1…A_k | B_1…B_k | C_1…C_k | … . In the following, we investigate this phenomenon for infinite-dimensional Hilbert spaces.

Continuous-variable systems. For infinite-dimensional quantum systems, some observables have continuous spectra, in particular the quadrature operators that we will discuss shortly. This is the case, for instance, for modes of the quantized electromagnetic field. To each mode labelled by j, one associates annihilation and creation operators, â_j = (x̂_j + ip̂_j)/√2 and â_j^† = (x̂_j − ip̂_j)/√2, respectively, where x̂_j and p̂_j are the quadrature operators that satisfy the canonical commutation relations

[x̂_j, p̂_k] = iδ_{jk},  [x̂_j, x̂_k] = [p̂_j, p̂_k] = 0.

In the case of N modes with Hilbert space ℋ_N = ⊗_{i=1}^{N} ℋ_i the system is described by 2N quadrature operators x̂_1, p̂_1, …, x̂_N, p̂_N, which can be arranged into a vector

𝐫̂ = (x̂_1, p̂_1, …, x̂_N, p̂_N)^T.

The commutation relations in Eq. (<ref>) can then be compactly expressed as [r̂_j, r̂_k] = iΩ_{jk}, where

Ω = ⊕_{i=1}^{N} [ 0  1 ; −1  0 ]

is the so-called symplectic form. In practice, the properties of CV quantum systems described by density operators ρ̂ can also be characterized by the statistical moments of the quadrature operators and their quasiprobability distributions. One of them is the Wigner function, defined as

W(𝐱,𝐩)[ρ̂] = 1/(2π)^N ∫ d^N𝐱′ e^{i𝐱′·𝐩} ⟨𝐱 − 𝐱′/2 | ρ̂ | 𝐱 + 𝐱′/2⟩,

with 𝐱′·𝐩 = ∑_{i=1}^{N} x′_i p_i, d^N𝐱′ = dx′_1 dx′_2 … dx′_N, and |𝐱 ± 𝐱′/2⟩ = ⊗_{i=1}^{N} |x_i ± x′_i/2⟩, where x̂_i |x_i⟩ = x_i |x_i⟩.

Gaussian states. An important family of CV states are the so-called Gaussian states. These are defined as states for which the Wigner function (<ref>) is Gaussian, in which case it reduces to

W(𝐫) = e^{−(𝐫−𝐝)^T γ^{−1} (𝐫−𝐝)} / ( π^N √(det γ) ),

where 𝐝 is the vector of first moments with elements d_i = ⟨r̂_i⟩ = Tr(ρ̂ r̂_i) and γ is the CM with components

γ_{ij} = ⟨r̂_i r̂_j + r̂_j r̂_i⟩ − 2⟨r̂_i⟩⟨r̂_j⟩.

This family of CV states is noteworthy not only due to the feasibility of their preparation in the laboratory but also because they are fully determined by their first and second moments, i.e., any N-mode Gaussian state is fully determined by its 2N-component vector of first moments 𝐝 along with its 2N×2N CM γ, Eq. (<ref>). Since the first moments d_i can always be set to zero by local displacements, which has no impact on the entanglement of the system or the CM elements, we can fully characterize correlations in Gaussian states via the CM γ alone. States whose Wigner function is not of the form of Eq.
(<ref>) are called non-Gaussian states, and we will begin the presentation of our results with an example of such states. For reviews of CV systems for quantum-information processing, see, e.g., <cit.>.

GME activation for non-Gaussian states. Now we turn to the demonstration of GME activation in infinite-dimensional systems. Since non-Gaussian states that have an overlap with only finitely many Fock states can be represented completely on a finite-dimensional Hilbert space, their GME activatability follows trivially from the results of Ref. <cit.>. As we will show next, GME activation is also possible for states that have non-zero overlap with infinitely many Fock states. To this end, we construct a one-parameter family of three-mode non-Gaussian states with this property by considering convex combinations of the tensor product of two-mode squeezed (TMS) vacuum states ρ_TMS = (1−λ²) ∑_{n,m=0}^{∞} λ^{m+n} |mm⟩⟨nn| with λ = tanh r, and n-excitation Fock states |n⟩ in the third mode. In this way, we obtain the fully symmetric (FS) states

ρ^FS_{ABC} = (1/3) ( ρ^TMS_{AB} ⊗ |n⟩⟨n|_C + ρ^TMS_{AC} ⊗ |n⟩⟨n|_B + |n⟩⟨n|_A ⊗ ρ^TMS_{BC} ).

These states are biseparable by construction, but entangled with respect to all three bipartitions, and hence fully inseparable, for all non-zero values of the squeezing parameter r ≠ 0 and all excitation numbers n. This can be seen by noting that the two-qubit states obtained by tracing out any single mode, e.g., C, and locally projecting the remaining two modes into the subspace spanned by any two local Fock-state pairs {|k⟩_A, |k′⟩_A} and {|k⟩_B, |k′⟩_B} for k ≠ k′ ≠ n ≠ k are pure entangled states, see Appendix <ref>.

For investigating multipartite entanglement in CV systems several methods are available (see, e.g., <cit.>). Here, we use a special case of a k-separability criterion <cit.>: Every k-separable N-partite state ρ satisfies

√(⟨ϕ| ρ^{⊗2} P_tot |ϕ⟩) ≤ ∑_{M} ( ∏_{i=1}^{k} ⟨ϕ| P^†_{M_i} ρ^{⊗2} P_{M_i} |ϕ⟩ )^{1/2k}

for every fully separable 2N-partite state |ϕ⟩ = ⊗_{i=1}^{2N} |ϕ_i⟩, where the P_{M_i} are permutation operators exchanging the two copies of all subsystems contained in the i-th subset of the partition M, P_tot exchanges the two copies entirely, and the sum runs over all possible partitions M of the considered system into k subsystems. Violating (<ref>) for k = 2 thus detects genuine N-partite entanglement.

We now employ this criterion for k = 2 to check if two copies of ρ^FS_{ABC} from Eq. (<ref>) are GME, focusing on the special case |n⟩ = |0⟩. Thus, the state ρ in inequality (<ref>) is ρ = ρ^FS_{A_1B_1C_1} ⊗ ρ^FS_{A_2B_2C_2} and we pick |ϕ⟩ to be the fully separable state

|ϕ⟩ = |000⟩_{A_1B_1C_1} |000⟩_{A_2B_2C_2} |011⟩_{A′_1B′_1C′_1} |101⟩_{A′_2B′_2C′_2}.

For this choice, the left-hand side of (<ref>) evaluates to

|⟨000| ρ^FS_{ABC} |011⟩| × |⟨000| ρ^FS_{ABC} |101⟩| = (1/9)(1−λ²)² λ⁴,

whereas each term on the right-hand side is proportional to |⟨001| ρ^FS_{ABC} |001⟩| = 0 (see Appendix <ref> for more details). The inequality is violated for all non-zero values of r. The two-copy state is GME, even though a single copy is biseparable, which shows that GME activation is possible in infinite-dimensional systems for non-Gaussian states.
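For readers who wish to verify this violation numerically, the following short Python sketch (not part of the original derivation; all function names are ours) evaluates the left-hand side (1/9)(1−λ²)²λ⁴ for a few squeezing parameters. The right-hand side vanishes identically for this choice of |ϕ⟩, since every term contains the factor ⟨001|ρ^FS_{ABC}|001⟩ = 0.

```python
import numpy as np

def witness_lhs(r):
    # LHS of the k=2 criterion for two copies of rho^FS with |n> = |0>:
    # (1/9) (1 - lambda^2)^2 lambda^4, where lambda = tanh(r).
    lam = np.tanh(r)
    return (1.0 / 9.0) * (1.0 - lam**2) ** 2 * lam**4

# RHS vanishes for this |phi>, since each term contains <001|rho^FS|001> = 0.
witness_rhs = 0.0

for r in [0.1, 0.5, 1.0, 2.0]:
    lhs = witness_lhs(r)
    print(f"r = {r:4.1f}: LHS = {lhs:.3e} > RHS = {witness_rhs} -> GME: {lhs > witness_rhs}")
```

As expected, the left-hand side is strictly positive for every r ≠ 0, so two copies are detected as GME over the whole parameter range.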
GME activation for Gaussian states. We now turn to the characterization of the multipartite entanglement structure of Gaussian states. Since the correlations of the latter are fully captured by their second moments, the CM offers itself for this task. Indeed, it has been shown <cit.> that a Gaussian state ρ with CM γ is fully separable with respect to a partition into N subsystems (of one or more modes each) if and only if there exist CMs γ^{(i)} for i = 1, …, N corresponding to these N subsystems such that γ − γ^{(1)} ⊕ ⋯ ⊕ γ^{(N)} ≥ 0. For arbitrary (not necessarily Gaussian) states that are fully separable such a decomposition also exists, but the existence of such a decomposition generally does not imply full separability. A generalization that we call the CM biseparability criterion was given in Ref. <cit.>: For any biseparable state with CM γ_BS there exist block-diagonal CMs γ_{M(i)} corresponding to the partitions M(i), along with a probability distribution {p_i}, such that

γ_BS − ∑_i p_i γ_{M(i)} ≥ 0.

If no such convex decomposition into CMs γ_{M(i)} exists, one can hence conclude that the state under consideration must be GME. However, as we shall show now, this criterion cannot be used to detect GME activation for identical copies: If a CM γ_BS corresponding to a state ρ satisfies the condition (<ref>), then so does the CM ⊕_{n=1}^{k} γ_BS corresponding to the k-copy state ρ^{⊗k}. To prove this, we note that if for a given CM γ_BS the ensemble {(p_i, γ_{M(i)})}_i is such that Δγ := γ_BS − ∑_i p_i γ_{M(i)} ≥ 0, then the ensemble {(p_i, ⊕_{n=1}^{k} γ^{(n)}_{M(i)})}_i with γ^{(n)}_{M(i)} = γ_{M(i)} ∀n satisfies

⊕_{n=1}^{k} γ_BS − ∑_i p_i ⊕_{n=1}^{k} γ^{(n)}_{M(i)} = ⊕_{n=1}^{k} ( γ_BS − ∑_i p_i γ_{M(i)} ) ≥ 0,

since the left-hand side is block-diagonal and each block is identical to the positive semi-definite matrix Δγ ≥ 0.

While this result means that the CM biseparability criterion cannot be used to detect potential GME activation for identical copies of a given state, it may still succeed in detecting GME for pairs of two (or more) different fully inseparable biseparable Gaussian states with CMs γ and γ̃, respectively, as long as the states do not admit 'biseparable' CM decompositions {(p_i, γ_{M(i)})}_i and {(q_i, γ̃_{M(i)})}_i with p_i = q_i ∀i. In Appendix <ref> we present examples of such a GME activation from pairs of different Gaussian states.

The perhaps more pressing question concerning the result in (<ref>) is whether it permits GME activation for identical copies of Gaussian states at all. That is, if the CM biseparability criterion (<ref>) were necessary and sufficient for biseparability of Gaussian states, in analogy to the criterion for (full) separability <cit.>, then no Gaussian GME activation would be possible. However, we will show next that satisfying the CM biseparability criterion (<ref>) is not sufficient for biseparability of Gaussian states. For this purpose, we focus on an example of a three-mode Gaussian state with CM

γ_ABC = (1/3) ( γ^TMS_{AB} ⊕ 1_C + γ^TMS_{BC} ⊕ 1_A + γ^TMS_{AC} ⊕ 1_B ),

where

γ^TMS = [ cosh(2r) 1   sinh(2r) Z ; sinh(2r) Z   cosh(2r) 1 ]

is the CM of a TMS vacuum state, Z = diag{1, −1} is the usual third Pauli matrix, and 1 is the CM of the single-mode vacuum state. One observes that this is the same CM as that of the non-Gaussian state ρ^FS_{ABC} in Eq. (<ref>) for |n⟩ = |0⟩, but here we use it to define a Gaussian state ρ^G_{ABC} with zero first moments. Moreover, we note that γ_ABC satisfies the CM biseparability criterion by construction.
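The block-diagonality argument above, and the fact that γ_ABC satisfies the criterion by construction, can be checked numerically in a few lines. The sketch below (numpy; helper names and the value r = 0.7 are ours) builds γ_ABC as the equal-weight convex combination of the three block-diagonal CMs and verifies that Δγ, and hence its k-copy direct sum, is positive semi-definite.

```python
import numpy as np

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def gamma_tms_embedded(r, i, j, n_modes=3):
    """CM of a TMS vacuum on modes (i, j), vacuum on the remaining mode."""
    g = np.eye(2 * n_modes)
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    g[2*i:2*i+2, 2*i:2*i+2] = c * I2
    g[2*j:2*j+2, 2*j:2*j+2] = c * I2
    g[2*i:2*i+2, 2*j:2*j+2] = s * Z
    g[2*j:2*j+2, 2*i:2*i+2] = s * Z
    return g

r = 0.7
terms = [gamma_tms_embedded(r, 0, 1), gamma_tms_embedded(r, 1, 2), gamma_tms_embedded(r, 0, 2)]
gamma_abc = sum(terms) / 3.0

# Delta = gamma_ABC - sum_i p_i gamma_M(i) with p_i = 1/3 vanishes identically,
# so the single-copy criterion holds by construction ...
delta = gamma_abc - sum(terms) / 3.0
print(np.min(np.linalg.eigvalsh(delta)) >= -1e-12)   # True

# ... and two identical copies inherit positivity block by block.
delta2 = np.block([[delta, np.zeros((6, 6))], [np.zeros((6, 6)), delta]])
print(np.min(np.linalg.eigvalsh(delta2)) >= -1e-12)  # True
```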
Nevertheless, we find that the state is certainly GME for the parameter range 0 < r < r_0 with r_0 ≈ 0.284839. Between r_0 and r_1 ≈ 1.24275 the three-mode state is fully inseparable, but we do not know if it is GME activatable (or potentially already GME at the single-copy level). For r > r_1, the state is partition separable and thus certainly not GME activatable. Let us now discuss how to obtain these values. The threshold value r_1 is obtained directly from the CM, where the PPT criterion provides a necessary and sufficient criterion for separability of 1 vs. N-mode Gaussian states <cit.>, as we discuss in more detail in Appendix <ref>. For the detection of GME up to the value r_0 we employ another witness inequality satisfied by all biseparable states ρ^BS_{ABC}, stated fully and proven in Appendix <ref>. Taking into account the symmetry of the state ρ^G_{ABC} that we consider here, this inequality reduces to

√3 |⟨000| ρ^G_{ABC} |011⟩| ≤ √( ⟨000| ρ^G_{ABC} |000⟩ ⟨011| ρ^G_{ABC} |011⟩ ) + √3 ⟨001| ρ^G_{ABC} |001⟩.

We calculate the relevant density-matrix elements of the Gaussian state ρ^G_{ABC} from its CM in (<ref>) via the Wigner function (<ref>) using

Tr(ρ̂ Ĝ) = (2π)^N ∫ d^N𝐱 d^N𝐩 W(𝐱,𝐩)[ρ̂] W(𝐱,𝐩)[Ĝ],

along with the relation for the Fock-state wave functions

⟨n|x⟩ = (−1)^n e^{x²/2} / √( n! 2^n √π ) ( d^n/dx^n e^{−x²} ),

and standard formulas for Gaussian integrals. As explained in more detail in Appendix <ref>, this leads to a violation of the inequality (<ref>) in the parameter range 0 < r < r_0. Thus, we conclude that the CM biseparability criterion cannot be sufficient for biseparability even for Gaussian states. A corollary of our results, already hinted at following Eq. (<ref>), is that a Gaussian state may have the same first and second moments as a biseparable non-Gaussian state, yet itself be GME. Thus, no GME criterion valid for all states that is based solely on the first and second moments of a state can detect such Gaussian-state GME. Any detection of GME must hence rely on higher statistical moments, even if those are themselves functions only of the first and second moments when the state is Gaussian.

Conclusion and outlook. We showed that the activation of GME from multiple identical copies of a state is possible also in infinite dimensions, specifically for a family of non-Gaussian states with non-zero overlap with infinitely many Fock states. We then investigated the GME activatability of Gaussian states. However, as we showed, this matter is complicated by the fact that the CM biseparability criterion is not sufficient for biseparability even for Gaussian states. In particular, we demonstrated that Gaussian states satisfying the CM biseparability criterion can be GME. Interestingly, this is the case even though satisfying the CM biseparability criterion implies that the corresponding Gaussian states have the same first and second moments as biseparable non-Gaussian states. At the same time, our results leave us without an easily verifiable sufficient criterion for the biseparability of Gaussian states if no explicit decomposition into a convex sum of partition-separable states is given. We thus lack a tool to conclusively determine whether GME-activatable states are not already GME at the single-copy level to begin with. In other words, we are not aware of any example of a fully inseparable yet provably biseparable (red area in Fig. <ref>) Gaussian state. We leave the development of suitable techniques to address this question for future research.

Acknowledgments. K.B. and N.F.
acknowledge support from the Austrian Science Fund (FWF) through the project P 36478-N funded by the European Union - NextGenerationEU. N.F. also acknowledges funding from the Austrian Federal Ministry of Education, Science and Research via the Austrian Research Promotion Agency (FFG) through the flagship project HPQC (FO999897481) funded by the European Union - NextGenerationEU. E.A. acknowledges funding from the Austrian Science Fund (FWF) through the Lise Meitner Programme project M3151.

§ APPENDIX: SUPPLEMENTAL INFORMATION

In the appendix we present additional details and explicit calculations supporting our results. The appendix is structured as follows: in Sec. <ref> we present additional details on the GME activation for non-Gaussian states. In Sec. <ref> we provide a detailed description of GME activation for non-identical Gaussian states. Finally, Sec. <ref> shows that Gaussian states satisfying the CM biseparability criterion can be GME.

§.§ Additional details on the GME activation for non-Gaussian states

§.§.§ Full inseparability of biseparable non-Gaussian states

We begin by showing in more detail that the members of the one-parameter family of non-Gaussian states ρ^FS_{ABC} from Eq. (<ref>) in the main text are fully inseparable biseparable states. Biseparability is ensured by construction, since the states are (equally weighted) mixtures of product states in which two modes are in an entangled state, a two-mode squeezed vacuum state of the form ρ_TMS = (1−λ²) ∑_{n,m=0}^{∞} λ^{m+n} |mm⟩⟨nn| with λ = tanh r, while the remaining third mode is in an n-excitation Fock state |n⟩ and hence separable from the other two modes. For r = 0, the state is a convex mixture of products of vacuum and Fock states and is thus separable. For all non-zero values of r we now show that the states ρ^FS_{ABC} are entangled across all bipartitions. To do this we note that the symmetry of the state with respect to the exchange of the modes means that it is sufficient to show that the state is entangled for any fixed bipartition, e.g., A|BC. We then trace out the third mode, C, an operation which cannot create entanglement between A and B where none was present before, and we are left with the reduced state

ρ^FS_{AB} = Tr_C( ρ^FS_{ABC} ) = (1/3) ( ρ^TMS_{AB} + ρ^th_A ⊗ |n⟩⟨n|_B + |n⟩⟨n|_A ⊗ ρ^th_B ),

where ρ^th = (1−λ²) ∑_{m=0}^{∞} λ^{2m} |m⟩⟨m| is a thermal state of a single mode. Now we can choose any pair of excitation numbers different from n, let us label them k and k′, and project into the subspace spanned by the product states |i,j⟩_{AB} for i,j = k,k′ ≠ n.
This is a local map that also cannot create entanglement, and after normalizing the result, we arrive at the two-qubit density operator

ρ^QB_{AB} = 1/(λ^{2k} + λ^{2k′}) ∑_{n,m=k,k′} λ^{m+n} |mm⟩⟨nn|.

This is a pure two-qubit state ρ^QB_{AB} = |ψ_{kk′}⟩⟨ψ_{kk′}| with |ψ_{kk′}⟩ = ( λ^k |kk⟩ + λ^{k′} |k′k′⟩ ) / √( λ^{2k} + λ^{2k′} ) that is not a product state, and hence entangled, for all r ≠ 0.
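The entanglement of the projected two-qubit state can also be confirmed numerically via the PPT criterion, which is necessary and sufficient for two qubits. The sketch below (numpy; helper names and the choice k = 1, k′ = 2 are ours) builds |ψ_{kk′}⟩ and checks that its partial transpose has a negative eigenvalue for r ≠ 0.

```python
import numpy as np

def projected_two_qubit_state(r, k, kp):
    """Normalized projection of rho_TMS onto span{|k>,|k'>}^(x2): the pure
    state (lam^k |kk> + lam^k' |k'k'>)/sqrt(lam^2k + lam^2k')."""
    lam = np.tanh(r)
    psi = np.zeros(4)            # basis order: |kk>, |kk'>, |k'k>, |k'k'>
    psi[0], psi[3] = lam**k, lam**kp
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi)

def min_pt_eigenvalue(rho):
    """Smallest eigenvalue of the partial transpose on the second qubit;
    a negative value certifies two-qubit entanglement (PPT criterion)."""
    rho4 = rho.reshape(2, 2, 2, 2)
    rho_pt = rho4.transpose(0, 3, 2, 1).reshape(4, 4)
    return np.min(np.linalg.eigvalsh(rho_pt))

for r in [0.1, 0.5, 1.0]:
    print(r, min_pt_eigenvalue(projected_two_qubit_state(r, k=1, kp=2)))
    # negative for all r != 0 -> entangled across the chosen cut
```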
§.§.§ GME activatability of biseparable non-Gaussian states

For detecting GME activatability we turn to a k-separability criterion proposed in <cit.>: Every k-separable N-partite state ρ satisfies

√(⟨ϕ| ρ^{⊗2} P_tot |ϕ⟩) ≤ ∑_{M} ( ∏_{i=1}^{k} ⟨ϕ| P^†_{M_i} ρ^{⊗2} P_{M_i} |ϕ⟩ )^{1/2k}

for every fully separable 2N-partite state |ϕ⟩, where the P_{M_i} are permutation operators that exchange the two copies of all subsystems contained in the i-th subset of the partition M, P_tot is an operator exchanging the two copies entirely, and the sum runs over all possible partitions M of the considered system into k subsystems. We now employ this criterion for k = 2 to check if two copies of ρ^FS_{ABC} (with |n⟩ = |0⟩) from Eq. (<ref>) are GME. In this case the state ρ in inequality (<ref>) of the main text is ρ = ρ^FS_{A_1B_1C_1} ⊗ ρ^FS_{A_2B_2C_2} and we choose |ϕ⟩ to be the fully separable state

|ϕ⟩ = |000⟩_{A_1B_1C_1} |000⟩_{A_2B_2C_2} |011⟩_{A′_1B′_1C′_1} |101⟩_{A′_2B′_2C′_2}.

For this choice, the left-hand side of (<ref>) takes the form

√(⟨ϕ| ρ^{⊗2} P_tot |ϕ⟩) = √(⟨000 000 011 101| ρ^{⊗2} |011 101 000 000⟩)
 = |⟨000 000| ρ |011 101⟩|
 = |⟨000| ρ^FS_{ABC} |011⟩| × |⟨000| ρ^FS_{ABC} |101⟩|
 = (1/9)(1−λ²)² λ⁴,

where P_tot exchanges the primed and unprimed subsystems with each other, and in going from the second to the third line we have inserted from Eq. (<ref>). The right-hand side of (<ref>) is a sum of three terms corresponding to the three bipartitions A_1A_2|B_1B_2C_1C_2, A_1A_2B_1B_2|C_1C_2, and A_1A_2C_1C_2|B_1B_2. Each of these terms is a square root, and the arguments of these square roots are products of diagonal density-matrix elements. Specifically, for the bipartition A_1A_2|B_1B_2C_1C_2 there are two factors, one obtained by exchanging the subsystem A_1A_2 with A′_1A′_2, the other by exchanging B_1B_2C_1C_2 with B′_1B′_2C′_1C′_2, such that we have

⟨000 100 011 001| ρ^{⊗2} |000 100 011 001⟩ × ⟨011 001 000 100| ρ^{⊗2} |011 001 000 100⟩
 = |⟨000 100| ρ |000 100⟩|² × |⟨011 001| ρ |011 001⟩|²
 = |⟨000| ρ^FS_{ABC} |000⟩|² × |⟨100| ρ^FS_{ABC} |100⟩|² × |⟨011| ρ^FS_{ABC} |011⟩|² × |⟨001| ρ^FS_{ABC} |001⟩|² = 0,

which vanishes because the one-excitation matrix elements are identically zero, |⟨001| ρ^FS_{ABC} |001⟩| = |⟨100| ρ^FS_{ABC} |100⟩| = 0. Similarly, the arguments of the square roots for the other two bipartitions evaluate to

⟨010 100 001 001| ρ^{⊗2} |010 100 001 001⟩ × ⟨001 001 010 100| ρ^{⊗2} |001 001 010 100⟩
 = |⟨001 001| ρ |001 001⟩|² × |⟨010 100| ρ |010 100⟩|²
 = |⟨001| ρ^FS_{ABC} |001⟩|⁴ × |⟨010| ρ^FS_{ABC} |010⟩|² × |⟨100| ρ^FS_{ABC} |100⟩|² = 0,

and

⟨010 000 001 101| ρ^{⊗2} |010 000 001 101⟩ × ⟨001 101 010 000| ρ^{⊗2} |001 101 010 000⟩
 = |⟨010 000| ρ |010 000⟩|² × |⟨001 101| ρ |001 101⟩|²
 = |⟨010| ρ^FS_{ABC} |010⟩|² × |⟨000| ρ^FS_{ABC} |000⟩|² × |⟨001| ρ^FS_{ABC} |001⟩|² × |⟨101| ρ^FS_{ABC} |101⟩|² = 0.

Since the right-hand side of (<ref>) vanishes and the left-hand side is larger than zero for all r ≠ 0, we see that all fully inseparable biseparable states in this family are GME activatable.

§.§ GME activation for non-identical Gaussian states

In the main text, we have shown that the CM biseparability criterion (<ref>) cannot detect GME activation for k identical copies, since the CM of ρ^{⊗k} automatically satisfies the criterion if the criterion is satisfied by the CM of ρ. This is the case independently of the Gaussian or non-Gaussian character of the state. However, as we demonstrate here, the CM biseparability criterion can be used to detect GME activation from (certain) non-identical pairs of states. This possibility can be inferred from (<ref>) in the main text. There, the equality holds under the condition that the two CMs in question admit decompositions into convex sums (with each term in the sum a CM that is block-diagonal with respect to one of the bipartitions) with the same probability distribution {p_i}_i. That is, for two CMs γ and γ̃ that admit decompositions {(p_i, γ_{M(i)})}_i and {(p_i, γ̃_{M(i)})}_i such that

γ − ∑_i p_i γ_{M(i)} ≥ 0,  γ̃ − ∑_i p_i γ̃_{M(i)} ≥ 0,

the CM γ ⊕ γ̃ of the joint state still satisfies

γ ⊕ γ̃ − ∑_i p_i γ_{M(i)} ⊕ γ̃_{M(i)} = ( γ − ∑_i p_i γ_{M(i)} ) ⊕ ( γ̃ − ∑_i p_i γ̃_{M(i)} ) ≥ 0.

This line of reasoning no longer goes through if the two CMs do not admit decompositions with the same probability distributions {p_i}_i. In particular, let us consider the following two CMs corresponding to two different three-mode Gaussian states that satisfy Eq. (<ref>) by construction,

γ^BS_{123} = η_1 γ_{1|23} + η_2 γ_{2|31} + η_3 γ_{3|12} = [ γ_1  η_3 c_{12}  η_2 c_{13} ; η_3 c_{12}  γ_2  η_1 c_{23} ; η_2 c_{13}  η_1 c_{23}  γ_3 ],

γ^BS_{456} = ν_1 γ_{4|56} + ν_2 γ_{5|46} + ν_3 γ_{6|45} = [ γ_4  ν_3 c_{45}  ν_2 c_{46} ; ν_3 c_{45}  γ_5  ν_1 c_{56} ; ν_2 c_{46}  ν_1 c_{56}  γ_6 ],

where the η_i and ν_i are probability weights such that ∑_i η_i = ∑_i ν_i = 1, and the c_{kl} are 2×2 matrices capturing the correlations between the modes labelled k and l in the block-diagonal matrix γ_{j|kl}, i.e.,

γ_{j|kl} = [ γ_j  0  0 ; 0  γ_k  c_{kl} ; 0  c_{kl}  γ_l ].

If we now consider one copy of the state (<ref>) and one copy of the state (<ref>), the joint CM representing the two-copy state reads

γ^BS_{142536} = γ^BS_{123} ⊕ γ^BS_{456} = [ γ_1  0  η_3 c_{12}  0  η_2 c_{13}  0 ; 0  γ_4  0  ν_3 c_{45}  0  ν_2 c_{46} ; η_3 c_{12}  0  γ_2  0  η_1 c_{23}  0 ; 0  ν_3 c_{45}  0  γ_5  0  ν_1 c_{56} ; η_2 c_{13}  0  η_1 c_{23}  0  γ_3  0 ; 0  ν_2 c_{46}  0  ν_1 c_{56}  0  γ_6 ].

To satisfy (<ref>), the CM γ^BS_{142536} must be equal to the CM

γ^BS_{14|25|36} = ϵ_1 γ_{14} ⊕ γ_{2536} + ϵ_2 γ_{25} ⊕ γ_{1436} + ϵ_3 γ_{1425} ⊕ γ_{36},

for probabilities ϵ_i fulfilling ∑_i ϵ_i = 1. By comparing the components of the CMs (<ref>) and (<ref>) containing the parameters η_i, ν_i, and ϵ_i with the same index i, one finds that these CMs are equal only when η_i = ν_i = ϵ_i.
Consequently, by selecting values η_i ≠ ν_i, we have constructed a joint CM γ^BS_{142536} = γ^BS_{123} ⊕ γ^BS_{456} that does not satisfy the CM biseparability criterion and hence corresponds to a state that is GME, despite the fact that both γ^BS_{123} and γ^BS_{456} satisfy the criterion individually.

§.§ GME detection for Gaussian states satisfying the CM biseparability criterion

In this appendix, we focus on a specific one-parameter family of Gaussian states ρ^G_{ABC}(r) described by the covariance matrix γ_ABC from Eq. (<ref>) and with a vanishing vector of first moments. In Appendix <ref> we study the range of the parameter r for which the state is fully inseparable. In Appendix <ref> we then present a GME witness that is able to detect a range of r for which the states ρ^G_{ABC}(r) are certainly GME. We describe the calculation of the required density-matrix elements of ρ^G_{ABC}(r) in Appendix <ref>.

Before we proceed, let us make a brief remark regarding the parameter r. The covariance matrix γ_ABC in Eq. (<ref>) is a convex combination of CMs of product states of two-mode squeezed vacuum states and the vacuum state for the third mode, with each term in the convex combination corresponding to a different labelling of the modes. For each individual term, the parameter r represents a (two-mode) squeezing parameter that directly relates to the bipartite entanglement between the corresponding pair of modes. However, a convex combination of covariance matrices is not equivalent to a convex combination of the corresponding density matrices. As such, the parameter r can no longer be interpreted as a squeezing parameter in the usual sense of parameterizing a unitary (two-mode squeezing) transformation that monotonously increases the entanglement between two modes initially in a pure product state (the vacuum). Indeed, here the purity P(ρ^G_{ABC}) = 1/√(det γ_ABC) of the three-mode state we consider decreases with increasing r. Specifically, the determinant of the covariance matrix is given by

det(γ_ABC) = (5 + 4 cosh(2r)) ( (7 + 8 cosh(2r) + 3 cosh(4r)) / 54 )².

At the same time, we note that for r = 0 the covariance matrix reduces to γ_ABC(r=0) = 1_A ⊕ 1_B ⊕ 1_C, and ρ^G_{ABC}(r=0) is hence the fully separable vacuum state |0⟩_A |0⟩_B |0⟩_C. Already from these observations it is thus expected that any non-trivial bipartite and multipartite entanglement will appear for r > 0, but only up to a certain value of r, at which the increasing mixedness of the three-mode state and of the single-mode reduced states suppresses any quantum correlations between the modes. In the next section, we quantify this intuition.

§.§.§ Range of full inseparability

Here we determine the range of the parameter r for which the Gaussian state ρ^G_{ABC}(r) described by the covariance matrix γ_ABC from Eq. (<ref>) is fully inseparable (i.e., fully inseparable biseparable or GME). Generally, a tripartite state is fully inseparable if it is not separable with respect to any bipartition. Here, given the symmetry of the state with respect to the exchange of the mode labels, this means we just have to check separability with respect to any one fixed bipartition. Without loss of generality we consider the bipartition AB|C and apply the PPT criterion, which provides a necessary and sufficient criterion for separability of 1 vs. N-mode Gaussian states <cit.>.
On the level of the covariance matrix, the partial transposition can be represented as a flip of the momentum quadrature of the respective single mode (here, mode C), γ_ABC ↦ γ̃_ABC = T̃_C γ_ABC T̃_C, where T̃_C = 1_AB ⊕ Z_C and Z = diag{1, −1} is the usual third Pauli matrix. Then, the corresponding Gaussian state is entangled with respect to the bipartition AB|C if the smallest symplectic eigenvalue ν̃_− of γ̃_ABC is smaller than 1. The quantity ν̃_− can be calculated as the smallest eigenvalue of |iΩ γ̃_ABC|, with Ω the symplectic form from Eq. (<ref>). As a function of r, we find that the smallest symplectic eigenvalue of the 'partially transposed' covariance matrix is given by

ν̃_− = (1/6) ( 9 + 16 cosh(2r) + 11 cosh(4r) − √( 2 sinh²(2r) [199 + 256 cosh(2r) + 121 cosh(4r)] ) )^{1/2}.

The condition ν̃_−(r = r_1) = 1 then determines the value r = r_1 at which the state becomes separable with respect to the chosen bipartition, and hence separable with respect to all bipartitions. This condition can be seen to be equivalent to the condition

47 + 28 cosh(2r) − 3 cosh(4r) = 0,

which can be solved numerically to obtain the result r_1 ≈ 1.24275.
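The threshold r_1 can be cross-checked directly from the CM without using the closed-form expression for ν̃_−. The sketch below (numpy/scipy; function names are ours) builds γ_ABC, flips the momentum quadrature of mode C, computes the smallest symplectic eigenvalue as the minimum of |eig(iΩγ̃)|, and locates the root of ν̃_− − 1.

```python
import numpy as np
from scipy.optimize import brentq

Z = np.diag([1.0, -1.0])
Omega = np.kron(np.eye(3), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # 3-mode symplectic form

def gamma_abc(r):
    """CM of Eq. (<ref>): equal-weight mixture of the three TMS x vacuum CMs."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    g = ((2 * c + 1) / 3.0) * np.eye(6)          # each mode: (2 cosh(2r)+1)/3 * identity
    for (i, j) in [(0, 1), (1, 2), (0, 2)]:
        g[2*i:2*i+2, 2*j:2*j+2] = s * Z / 3.0
        g[2*j:2*j+2, 2*i:2*i+2] = s * Z / 3.0
    return g

def nu_minus(r):
    """Smallest symplectic eigenvalue of the partial transpose w.r.t. mode C."""
    T = np.diag([1.0, 1.0, 1.0, 1.0, 1.0, -1.0])  # flips p_C
    g_pt = T @ gamma_abc(r) @ T
    return np.min(np.abs(np.linalg.eigvals(1j * Omega @ g_pt)))

# nu_minus < 1 signals entanglement across AB|C; its root reproduces r_1.
r1 = brentq(lambda r: nu_minus(r) - 1.0, 0.5, 2.0)
print(r1)  # ~ 1.24275
```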
§.§.§ GME witness inequality

We now present a (non-linear) GME witness inequality that is a generalization of a witness that appeared as Eq. (A4) in <cit.>, using techniques similar to the witnesses derived in <cit.> and <cit.>. As we show here, all biseparable states satisfy

|⟨000| ρ^BS_{ABC} |011⟩| + |⟨000| ρ^BS_{ABC} |101⟩| + |⟨000| ρ^BS_{ABC} |110⟩|
 ≤ √(⟨000| ρ^BS_{ABC} |000⟩) × √( ⟨011| ρ^BS_{ABC} |011⟩ + ⟨101| ρ^BS_{ABC} |101⟩ + ⟨110| ρ^BS_{ABC} |110⟩ )
 + √( ⟨001| ρ^BS_{ABC} |001⟩ ⟨010| ρ^BS_{ABC} |010⟩ )
 + √( ⟨001| ρ^BS_{ABC} |001⟩ ⟨100| ρ^BS_{ABC} |100⟩ )
 + √( ⟨010| ρ^BS_{ABC} |010⟩ ⟨100| ρ^BS_{ABC} |100⟩ ).

Proof. To show that this inequality holds for all biseparable states, we first show that it holds for a product state with respect to a fixed bipartition; without loss of generality, we choose the bipartition A|BC. From the symmetry of the inequality with respect to the exchange of the subsystems it then follows that the inequality holds for product states with respect to any bipartition. Finally, the validity for arbitrary convex mixtures of such states follows from the convexity of the absolute values on the left-hand side and from the concavity of the square roots on the right-hand side. To see that the inequality holds for a product state with respect to the bipartition A|BC, we set ρ^BS_{ABC} = ρ_A ⊗ ρ_BC, such that the left-hand side of (<ref>) becomes

|⟨000| ρ^BS_{ABC} |011⟩| + |⟨000| ρ^BS_{ABC} |101⟩| + |⟨000| ρ^BS_{ABC} |110⟩|
 = ⟨0| ρ_A |0⟩ × |⟨00| ρ_BC |11⟩| + |⟨0| ρ_A |1⟩| × |⟨00| ρ_BC |01⟩| + |⟨0| ρ_A |1⟩| × |⟨00| ρ_BC |10⟩|.

We then use the spectral decomposition of any state ρ = ∑_i p_i |ψ_i⟩⟨ψ_i| along with the Cauchy-Schwarz inequality |x⃗ · y⃗| ≤ |x⃗| · |y⃗| to write

|⟨m| ρ |n⟩| = | ∑_i √(p_i) ⟨m|ψ_i⟩ √(p_i) ⟨ψ_i|n⟩ | ≤ √( ∑_i p_i |⟨m|ψ_i⟩|² ) √( ∑_j p_j |⟨n|ψ_j⟩|² ) = √( ⟨m| ρ |m⟩ ⟨n| ρ |n⟩ ).

With this, the terms on the right-hand side of Eq. (<ref>) can be bounded according to

|⟨000| ρ^BS_{ABC} |011⟩| + |⟨000| ρ^BS_{ABC} |101⟩| + |⟨000| ρ^BS_{ABC} |110⟩|
 ≤ ⟨0| ρ_A |0⟩ × √( ⟨00| ρ_BC |00⟩ ⟨11| ρ_BC |11⟩ )
 + √( ⟨0| ρ_A |0⟩ ⟨1| ρ_A |1⟩ ) √( ⟨00| ρ_BC |00⟩ ⟨01| ρ_BC |01⟩ )
 + √( ⟨0| ρ_A |0⟩ ⟨1| ρ_A |1⟩ ) √( ⟨00| ρ_BC |00⟩ ⟨10| ρ_BC |10⟩ ).

Now, a simple comparison with the right-hand side of inequality (<ref>) for ρ^BS_{ABC} = ρ_A ⊗ ρ_BC shows that each of the terms on the right-hand side of (<ref>) is matched by an equal or larger term on the right-hand side of (<ref>), thus showing that the inequality holds.

Since the Gaussian three-mode state ρ^G_{ABC} that we consider is fully symmetric with respect to the exchange of any two modes, the witness inequality from (<ref>) takes the more compact form

√3 |⟨000| ρ^G_{ABC} |011⟩| ≤ √( ⟨000| ρ^G_{ABC} |000⟩ ⟨011| ρ^G_{ABC} |011⟩ ) + √3 ⟨001| ρ^G_{ABC} |001⟩.

§.§.§ Reconstruction of density-matrix elements from the Wigner function

To use the witness from (<ref>) and (<ref>), we need to calculate four density-matrix elements of the Gaussian state ρ^G_{ABC} from its CM and vector of first moments, with the latter trivially being zero. We calculate these elements from its Wigner function W(𝐱,𝐩)[ρ^G_{ABC}], which can be obtained directly by substituting the CM (<ref>) into (<ref>) with 𝐝 = 0. With the Wigner function at hand, we then obtain the density-matrix elements ⟨i_A j_B k_C| ρ^G_{ABC} |i′_A j′_B k′_C⟩ from the relation

⟨i_A j_B k_C| ρ^G_{ABC} |i′_A j′_B k′_C⟩ = Tr( ρ^G_{ABC} |i_A⟩⟨i′_A| ⊗ |j_B⟩⟨j′_B| ⊗ |k_C⟩⟨k′_C| )
 = (2π)^N ∫ d^N𝐱 d^N𝐩 W(𝐱,𝐩)[ρ^G_{ABC}] × W(𝐱,𝐩)[ |i_A⟩⟨i′_A| ⊗ |j_B⟩⟨j′_B| ⊗ |k_C⟩⟨k′_C| ],

where W[M] := W(𝐱,𝐩)[M] is the Wigner function, Eq. (<ref>), of the matrix element in the argument in square brackets. Here, the states |i⟩, |j⟩, |k⟩ and |i′⟩, |j′⟩, |k′⟩ are single-mode Fock states with i, i′, j, j′, k, k′ ∈ {0,1}, and for the evaluation of the Wigner function W[ |i_A⟩⟨i′_A| ⊗ |j_B⟩⟨j′_B| ⊗ |k_C⟩⟨k′_C| ] we require the relation

⟨n|x⟩ = (−1)^n e^{x²/2} / √( n! 2^n √π ) ( d^n/dx^n e^{−x²} )

for the Fock-state wave functions. The calculation of the density-matrix elements then amounts to a tedious but straightforward evaluation of Gaussian integrals (nine for each matrix element, three each for the variables 𝐱, 𝐲, and 𝐩) and algebraic simplification of the results. As a result we obtain the expressions

⟨000| ρ^G_{ABC} |000⟩ = 216 [1 + 2 cosh(2r) + sinh(2r)] / ( √(5 + 4 cosh(2r)) [4 + 2 cosh(2r) + sinh(2r)] [13 + 20 cosh(2r) + 3 cosh(4r) + 6 sinh(2r)] ),

⟨001| ρ^G_{ABC} |001⟩ = 9√3 e^r (e^{2r} − 1)² √( 1/(1 + 2e^{2r}) + 8/(1 + 8e^{2r} + 3e^{4r}) ) [67 + 68 cosh(2r) + 9 cosh(4r)] [4 + 2 cosh(2r) + sinh(2r)] / ( (2 + e^{2r})^{3/2} (1 + 2e^{2r}) (17 + cosh(2r)[16 + 3 cosh(2r)])^{5/2} ),

⟨011| ρ^G_{ABC} |011⟩ = 108 e^{5r} (e^{2r} − 1)² (99 + 1458 e^{2r} + 8539 e^{4r} + 24384 e^{6r} + 38274 e^{8r} + 41116 e^{10r} + 38274 e^{12r} + 24384 e^{14r} + 8539 e^{16r} + 1458 e^{18r} + 99 e^{20r}) / ( (3 + 8e^{2r} + e^{4r})³ (2 + 5e^{2r} + 2e^{4r})^{5/2} (1 + 8e^{2r} + 3e^{4r})³ ),

⟨000| ρ^G_{ABC} |011⟩ = 648 sinh(2r) [19 + 16 cosh(2r) + cosh(4r)] / ( [5 + 4 cosh(2r)]^{3/2} [37 + 32 cosh(2r) + 3 cosh(4r)]² ).

Inserting these values into the witness inequality (<ref>) and numerically evaluating it, we find that the inequality is violated for all values of r in the range 0 < r < r_0 with r_0 ≈ 0.284839.
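The threshold r_0 can be reproduced by evaluating the compact witness with the closed-form matrix elements above. The sketch below transcribes those printed expressions into Python (the grouping of numerators and denominators follows our reading of the formulas; it reproduces ⟨000|ρ|000⟩ = 1 at r = 0 and the expected small-r behaviour) and locates the sign change of the witness gap.

```python
import numpy as np
from scipy.optimize import brentq

def rho_elements(r):
    """Closed-form Fock matrix elements of rho^G_ABC, transcribed from above."""
    c2, c4, s2, e2 = np.cosh(2*r), np.cosh(4*r), np.sinh(2*r), np.exp(2*r)
    p000 = 216*(1 + 2*c2 + s2) / (np.sqrt(5 + 4*c2)*(4 + 2*c2 + s2)
            * (13 + 20*c2 + 3*c4 + 6*s2))
    p001 = (9*np.sqrt(3)*np.exp(r)*(e2 - 1)**2
            * np.sqrt(1/(1 + 2*e2) + 8/(1 + 8*e2 + 3*e2**2))
            * (67 + 68*c2 + 9*c4)*(4 + 2*c2 + s2)
            / ((2 + e2)**1.5*(1 + 2*e2)*(17 + c2*(16 + 3*c2))**2.5))
    coeffs = [99, 1458, 8539, 24384, 38274, 41116, 38274, 24384, 8539, 1458, 99]
    poly = sum(a*e2**k for k, a in enumerate(coeffs))
    p011 = (108*np.exp(5*r)*(e2 - 1)**2*poly
            / ((3 + 8*e2 + e2**2)**3*(2 + 5*e2 + 2*e2**2)**2.5*(1 + 8*e2 + 3*e2**2)**3))
    c000_011 = 648*s2*(19 + 16*c2 + c4) / ((5 + 4*c2)**1.5*(37 + 32*c2 + 3*c4)**2)
    return p000, p001, p011, c000_011

def witness_gap(r):
    """RHS - LHS of the compact witness; a negative value certifies GME."""
    p000, p001, p011, c = rho_elements(r)
    return np.sqrt(p000*p011) + np.sqrt(3)*p001 - np.sqrt(3)*abs(c)

r0 = brentq(witness_gap, 1e-3, 0.5)
print(r0)  # expected ~ 0.284839
```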
http://arxiv.org/abs/2312.16570v1
{ "authors": [ "Klára Baksová", "Olga Leskovjanová", "Ladislav Mišta Jr.", "Elizabeth Agudelo", "Nicolai Friis" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231227133535", "title": "Multi-copy activation of genuine multipartite entanglement in continuous-variable systems" }
The NUS-HLT System for ICASSP2024 ICMC-ASR Grand Challenge
==========================================================

This paper summarizes our team's efforts in both tracks of the ICMC-ASR Challenge for in-car multi-channel automatic speech recognition. Our submitted systems for the ICMC-ASR Challenge include multi-channel front-end enhancement and diarization, training data augmentation, and speech recognition modeling with multi-channel branches. Tested on the official Eval_1 and Eval_2 sets, our best system achieves a relative 34.3% improvement in CER and a 56.5% improvement in cpCER, compared to the official baseline system.

§ TRACK 1: AUTOMATIC SPEECH RECOGNITION

Fig. <ref> illustrates our Automatic Speech Recognition (ASR) system training pipeline, including the utilization of several front-end enhancement modules, data simulation techniques, HuBERT representation model pre-training, and the final ASR model finetuning.

§.§ Front-end Enhancement Processing

To overcome the noisy recording environment of the speech data, we first implement several Speech Enhancement (SE) or Speech Separation (SS) models, including DCCRN <cit.>, BSRNN <cit.>, GSS <cit.> and IVA <cit.>. To train an enhancement model, we take non-overlapping far-field training data and AISHELL-1[AISHELL-1: https://www.openslr.org/33/] data as the clean data, which is then mixed with the official noise data provided for noisy data simulation. As some far-field data contains loud in-car music, we filter out such far-field data with a pre-trained audio classification model <cit.>. The training loss is SI-SDR. We trained both DCCRN and BSRNN enhancement models and found that BSRNN gave better results (higher SI-SDR). In addition, to separate the different speaker sources, we apply two statistics-based speech separation methods to address the speaker overlap problem, namely GSS and IVA.
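For reference, SI-SDR (the enhancement training loss mentioned above) is the standard scale-invariant signal-to-distortion ratio. A minimal Python sketch is given below (our own illustration, not the team's training code); negating it yields a loss to minimize.

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant SDR in dB (zero-mean variant)."""
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    # project the estimate onto the target to obtain the scaled reference
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    s_target = alpha * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

# toy check: a noisy copy of a sine gives a finite SI-SDR
t = np.sin(np.linspace(0, 100, 16000))
print(si_sdr(t + 0.1 * np.random.randn(16000), t))
```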
These enhanced and separated data from the far-field recordings are further used for back-end ASR data augmentation (DA). Additionally, we employ Room Impulse Responses (RIRs) to simulate far-field speech from the close-talk data with the pyroomacoustics toolkit[Pyroomacoustics: https://github.com/LCAV/pyroomacoustics], where the room parameters are estimated from the given picture of the car. Furthermore, we add real recorded noise to the RIR-convolved speech to simulate more authentic far-field speech, with an SNR between 0 and 10 dB. Finally, speed perturbation with a speed factor of 0.9∼1.1 and SpecAugment are applied on-the-fly when training the ASR model.
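A minimal sketch of this RIR-plus-noise simulation is shown below (Python with pyroomacoustics; the room geometry, source/microphone positions and absorption are our placeholder assumptions, since the paper estimates them from the car picture).

```python
import numpy as np
import pyroomacoustics as pra

def simulate_far_field(clean, noise, fs=16000, snr_db=5.0):
    """Convolve close-talk speech with a simulated in-car RIR, then add
    recorded noise at a target SNR."""
    room = pra.ShoeBox([2.5, 1.6, 1.2], fs=fs,
                       materials=pra.Material(0.3), max_order=10)
    room.add_source([0.5, 0.8, 1.0], signal=clean)   # speaker position (assumed)
    room.add_microphone([2.0, 0.8, 1.0])             # far-field mic (assumed)
    room.simulate()
    reverbed = room.mic_array.signals[0, :len(clean)]

    noise = noise[:len(reverbed)]
    # scale the noise so that 10*log10(P_speech / P_noise) == snr_db
    p_s, p_n = np.mean(reverbed**2), np.mean(noise**2) + 1e-12
    noise = noise * np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return reverbed + noise

mix = simulate_far_field(np.random.randn(16000), np.random.randn(16000),
                         snr_db=np.random.uniform(0, 10))
```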
§.§ Back-end ASR Training and Inference

Our ASR model is a joint architecture consisting of a 24-layer HuBERT encoder <cit.>, a 17-layer E-Branchformer encoder and a 6-layer Transformer decoder. The HuBERT-Large model was pre-trained by Tencent[HuBERT: https://github.com/TencentGameMate/chinese_speech_pretrain] on 10k hours of WenetSpeech Chinese data. During decoding, GSS and IVA enhancement are first applied to the test far-field audios, which are then recognized with the ASR model. The final result for each sentence is selected from either the GSS- or the IVA-enhanced input, based on the higher score of the ASR system. Fig. <ref> (a) shows our inference pipeline for track 1.

§ TRACK 2: AUTOMATIC SPEECH DIARIZATION AND RECOGNITION

In contrast to track 1, track 2 does not provide any annotations, which means that the performance of the Speaker Diarization (SD) system is crucial to the final ASR results in this track. The speaker diarization system consists of two parts: a clustering-based method and a modified TS-VAD. We follow the WeSpeaker <cit.> pipeline to perform spectral clustering based on the pretrained CAM++[CAM++: https://www.modelscope.cn/models/damo/speech_campplus_sv_zh-cn_16k-common/summary] model for speaker embedding extraction, which is trained on around 200k Chinese speakers. We then adopt a modified TS-VAD framework. Specifically, we replaced the Bidirectional Long Short-Term Memory (BLSTM) used in the original TS-VAD <cit.> with a four-layer Transformer encoder, and the speaker embedding for each speaker is extracted with the pretrained CAM++ <cit.> model instead of an i-vector. We utilize another CAM++ encoder to obtain the speech features, replacing the original MFCC features. For the multi-channel input version, we use a cross-attention module to leverage the information from the four-channel audio.

Our modified TS-VAD cannot perfectly segment far-field recordings, which leads to numerous insertion errors in the final speech recognition. Motivated by this observation, we combine GSS and the VoiceFixer toolkit[VoiceFixer: https://github.com/haoheliu/voicefixer] to further reduce the non-speech background noise. By employing a VAD operation on the processed speech, we can better distinguish the boundaries between speech and non-speech, and update the original output RTTM^1 from the modified TS-VAD to the refined RTTM^2. Thus, we can run the GSS method again based on the refined RTTM^2 to obtain better speech for ASR. Fig. <ref> (b) shows the inference pipeline for track 2, while the ASR model is exactly the same ASR system as in Section <ref>.

§ EXPERIMENT AND RESULTS

§.§ Experiment Setup

The 95-hour Train set of the ICMC-ASR challenge is utilized for jointly training the HuBERT-E-Branchformer ASR systems, where the Dev set acts as the cross-validation set. The Eval_1 and Eval_2 sets are provided without transcriptions and are used for the final performance evaluation. The statistics of these datasets are shown in Table <ref>. ESPnet[ESPnet: https://github.com/espnet/espnet] is used to train the joint ASR model. The learning rate is set to 5e-5, and batch_bins is set to 35M. The warmuplr scheduler is applied with 40k warmup steps, and the training target is hybrid CTC-attention, where the CTC weighting factor is set to 0.3. For decoding, the beam size of the beam search is set to 10 and the CTC weight is 0.5. Model averaging is used, and the best 5 models with the lowest validation loss are selected.

§.§ Experimental Results

The clustering-based method is used to obtain initial, rough diarization results. Based on these, we extract the non-overlapping speech and a speaker embedding for each speaker accordingly. These extracted speaker embeddings are fed into the modified TS-VAD for further training. Comparing the performance of V1 and V2 in Table <ref>, we conclude that taking 4-channel audio as input performs better due to the more comprehensive information. The V2 and V3 results demonstrate that a second iteration provides better diarization refinement. We average the best three models from the V3 experimental configuration and obtain V4 with the best performance.

Table <ref> shows the overall ASR results of our systems. A1 stands for the baseline system[Baseline System: https://github.com/MrSupW/ICMC-ASR_Baseline], and A2 is the proposed joint architecture. For track 1, our proposed ASR model achieves 32.6% and 34.3% relative performance improvements on the Dev and Eval_1 sets, respectively, compared with the baseline system. For track 2, we choose V4 in Table <ref> as the SD model, and A2 obtains a 56.5% relative cpCER performance improvement compared with A1.
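As a side note on the recipe above, the checkpoint averaging used in the experiment setup (best 5 models by validation loss) amounts to a simple parameter-wise mean. A possible sketch (ours; it assumes each checkpoint file stores a flat PyTorch state_dict) is:

```python
import torch

def average_checkpoints(paths):
    """Average the parameters of several checkpoints (e.g., the best 5
    models ranked by validation loss)."""
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}
```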
http://arxiv.org/abs/2312.16002v1
{ "authors": [ "Meng Ge", "Yizhou Peng", "Yidi Jiang", "Jingru Lin", "Junyi Ao", "Mehmet Sinan Yildirim", "Shuai Wang", "Haizhou Li", "Mengling Feng" ], "categories": [ "eess.AS", "cs.AI" ], "primary_category": "eess.AS", "published": "20231226111122", "title": "The NUS-HLT System for ICASSP2024 ICMC-ASR Grand Challenge" }
§ FOREWORD

This short book is the result of various master and summer school courses I have taught. The objective is to introduce the readers to mathematical control theory, both in finite and infinite dimension.

In the finite-dimensional context, we consider controlled ordinary differential equations (ODEs); in this context, existence and uniqueness issues are easily resolved thanks to the Picard-Lindelöf (Cauchy-Lipschitz) theorem. In infinite dimension, in view of dealing with controlled partial differential equations (PDEs), the concept of well-posed system is much more difficult and requires the development of a number of functional analysis tools, in particular semigroup theory – and this, just for the setting in which the control system is written and makes sense. This is why I have split the book into two parts, the first being devoted to finite-dimensional control systems, and the second to infinite-dimensional ones.

In spite of this splitting, it may be nice to learn the basics of control theory for finite-dimensional linear autonomous control systems (e.g., the Kalman condition) and then to see in the second part how some results are extended to infinite dimension, where matrices are replaced by operators, and exponentials of matrices are replaced by semigroups. For instance, the reader will see how the Gramian controllability condition is expressed in infinite dimension, and leads to the celebrated Hilbert Uniqueness Method (HUM). Except in the very last section, in the second part I have only considered linear autonomous control systems (the theory is already quite complicated), providing anyway several references to other textbooks for the several techniques existing to treat some particular classes of nonlinear PDEs.

In contrast, in the first part on finite-dimensional control theory, there are far fewer difficulties in treating general nonlinear control systems, and I give here some general results on controllability, optimal control and stabilization. Of course, whether in finite or infinite dimension, there exist much finer results and methods in the literature, established however for specific classes of control systems. Here, my objective is to provide the reader with an introduction to control theory and to the main tools allowing to treat general control systems. I hope this will serve as motivation to go deeper into the theory or numerical aspects that are not covered here.

Paris, March 2023
Emmanuel Trélat

PART: Control in finite dimension

CHAPTER: CONTROLLABILITY

Let n and m be two positive integers. In this chapter we consider a control system in ℝ^n

ẋ(t) = f(t, x(t), u(t)),

where f : ℝ × ℝ^n × ℝ^m → ℝ^n is of class C^1 with respect to (x,u) and locally integrable with respect to t, and the controls are measurable essentially bounded functions of time taking their values in some measurable subset Ω of ℝ^m (set of control constraints).

First of all, given an arbitrary initial point x_0 ∈ ℝ^n and an arbitrary control u, we claim that there exists a unique solution x(·) of (<ref>) such that x(0) = x_0, maximally defined on some open interval of ℝ containing 0. We use here a generalization of the usual Picard-Lindelöf theorem (sometimes called the Carathéodory theorem), where the dynamics can be discontinuous in time (because of the control). For a general version of this existence and uniqueness theorem, we refer to <cit.> and <cit.>. We stress that the differential equation (<ref>) holds for almost every t in the maximal interval. Given a time T > 0 and an initial point x_0, we say that a control u ∈ L^∞([0,T], ℝ^m) is admissible if the corresponding trajectory x(·), such that x(0) = x_0, is well defined on [0,T].

We say that the control system is linear if f(t,x,u) = A(t)x + B(t)u + r(t), with A(t) an n×n matrix, B(t) an n×m matrix (with real coefficients), r(t) ∈ ℝ^n, and in that case we will assume that t ↦ A(t), t ↦ B(t) and t ↦ r(t) are of class L^∞ on every compact interval (actually, L^1 would be enough). The linear control system is said to be autonomous if A(t) ≡ A and B(t) ≡ B; otherwise it is said to be instationary or time-varying. Note that, for linear control systems, there is no blow-up in finite time (i.e., admissibility holds true on any interval).

Let x_0 ∈ ℝ^n and let T > 0 be arbitrary. A control u ∈ L^∞([0,T], Ω) is said to be admissible on [0,T] if the trajectory x_u(·), solution of (<ref>), corresponding to the control u, and such that x_u(0) = x_0, is well defined on [0,T]. The end-point mapping E_{x_0,T} is then defined by E_{x_0,T}(u) = x_u(T). The set of admissible controls on [0,T] is denoted by 𝒰_{x_0,T,Ω}. It is the domain of definition of E_{x_0,T} (indeed one has to be careful with blow-up phenomena), when considering controls taking their values in Ω. The control system (<ref>) is said to be (globally) controllable from x_0 in time T if E_{x_0,T}(𝒰_{x_0,T,Ω}) = ℝ^n, i.e., if E_{x_0,T} is surjective. Accordingly, defining the accessible set from x_0 in time T by Acc_Ω(x_0,T) = E_{x_0,T}(𝒰_{x_0,T,Ω}), the control system (<ref>) is (globally) controllable from x_0 in time T if Acc_Ω(x_0,T) = ℝ^n.

Since such a global surjectivity property is certainly a very strong property which may not hold in general, it is relevant to define local controllability. Let x_1 = E_{x_0,T}(u̅) for some u̅ ∈ 𝒰_{x_0,T,Ω}. The control system (<ref>) is said to be locally controllable from x_0 in time T around x_1 if x_1 belongs to the interior of Acc_Ω(x_0,T), i.e., if E_{x_0,T} is locally surjective around x_1. Other variants of controllability can be defined. A clear picture will come from the geometric representation of the accessible set.

In this chapter we will provide several tools in order to analyze the controllability properties of a control system, first for linear (autonomous, and then instationary) systems, and then for nonlinear systems.

§ CONTROLLABILITY OF LINEAR SYSTEMS

Throughout this section, we consider the linear control system ẋ(t) = A(t)x(t) + B(t)u(t) + r(t), with u ∈ L^∞([0,+∞), Ω). Since there is no finite-time blow-up for linear systems, we have 𝒰_{x_0,T,Ω} = L^∞([0,T], Ω) for every T > 0.

§.§ Controllability of autonomous linear systems

In this section, we assume that A(t) ≡ A and B(t) ≡ B, where A is an n×n matrix and B is an n×m matrix.

§.§.§ Without control constraints: Kalman condition

In this section, we assume that Ω = ℝ^m (no control constraint). The celebrated Kalman theorem provides a necessary and sufficient condition for autonomous linear control systems without control constraint.

We assume that Ω = ℝ^m (no control constraint). The control system ẋ(t) = Ax(t) + Bu(t) + r(t) is controllable (from any initial point, in arbitrary time T > 0) if and only if the Kalman matrix

K(A,B) = ( B, AB, …, A^{n-1}B )

(which is of size n × nm) is of maximal rank n.

Proof. Given any x_0 ∈ ℝ^n, T > 0 and u ∈ L^∞([0,T], ℝ^m), the Duhamel formula gives

E_{x_0,T}(u) = x_u(T) = e^{TA} x_0 + ∫_0^T e^{(T-t)A} r(t) dt + L_T u,

where L_T : L^∞([0,T], ℝ^m) → ℝ^n is the linear continuous operator defined by L_T u = ∫_0^T e^{(T-t)A} B u(t) dt. Clearly, the system is controllable in time T if and only if L_T is surjective. Then to prove the theorem it suffices to prove the following lemma.

The Kalman matrix K(A,B) is of rank n if and only if L_T is surjective.

Proof. We argue by contraposition. If L_T is not surjective, then there exists ψ ∈ ℝ^n ∖ {0} which is orthogonal to the range of L_T, that is,

ψ^⊤ ∫_0^T e^{(T-t)A} B u(t) dt = 0   ∀u ∈ L^∞([0,T], ℝ^m).

This implies that ψ^⊤ e^{(T-t)A} B = 0 for every t ∈ [0,T]. Taking t = T yields ψ^⊤ B = 0. Then, differentiating with respect to t and taking t = T yields ψ^⊤ AB = 0. By immediate iteration we get that ψ^⊤ A^k B = 0 for every k ∈ ℕ. In particular ψ^⊤ K(A,B) = 0 and thus the rank of K(A,B) is less than n.

Conversely, if the rank of K(A,B) is less than n, then there exists ψ ∈ ℝ^n ∖ {0} such that ψ^⊤ K(A,B) = 0, and therefore ψ^⊤ A^k B = 0 for every k ∈ {0, 1, …, n−1}. From the Hamilton-Cayley theorem, there exist real numbers a_0, a_1, …, a_{n-1} such that A^n = ∑_{k=0}^{n-1} a_k A^k. Therefore we easily get that ψ^⊤ A^n B = 0. Then, using the fact that A^{n+1} = ∑_{k=1}^{n} a_{k-1} A^k, we get as well that ψ^⊤ A^{n+1} B = 0. By immediate recurrence, we infer that ψ^⊤ A^k B = 0 for every k ∈ ℕ, and therefore, using the series expansion of the exponential, we get that ψ^⊤ e^{(T-t)A} B = 0 for every t ∈ [0,T]. We conclude that ψ^⊤ L_T u = 0 for every u ∈ L^∞([0,T], ℝ^m) and thus that L_T is not surjective.

Theorem <ref> is proved.

Note that the Kalman condition is purely algebraic and is easily checkable. The Kalman condition depends neither on T nor on x_0. Therefore, if an autonomous linear control system is controllable from x_0 in time T > 0, starting at x_0, then it is controllable from any other x_0′ in any time T′ > 0, in particular in arbitrarily small time. This is due to the fact that there are no control constraints. When there are control constraints one cannot hope to have such a property.

Consider an RLC circuit with a resistor with resistance R, a coil with inductance L and a capacitor with capacitance C, connected in series. We control the input voltage u(t) of the electrical circuit. Denoting by i(t) the intensity, by additivity of voltages we have

R i(t) + L di/dt(t) + (1/C) ∫^t i(s) ds = u(t).

Setting x_1(t) = ∫^t i(s) ds, x_2(t) = ẋ_1(t) = i(t) and x(t) = [ x_1(t) ; x_2(t) ], we find the control system ẋ(t) = Ax(t) + Bu(t) with

A = [ 0  1 ; −1/LC  −R/L ],   B = [ 0 ; 1 ].

The Kalman condition is then easily checked. This simple but illuminating example shows the importance of the RLC device in electricity. Note that the RLC circuit is paradigmatic for the three main operations in mathematical analysis: the multiplication operator, the derivation operator and the integration operator.

[Kalman condition computations]
* Let m > 0 and d, k ≥ 0. Prove that the system consisting of the controlled spring mẍ + dẋ + kx = u is controllable.
* Let k_1, k_2 ≥ 0. Prove that the system of coupled springs (two-car train) given by ẍ_1 = −k_1 x_1 + k_2(x_2 − x_1), ẍ_2 = −k_2(x_2 − x_1) + u, is controllable if and only if k_2 > 0.
* Let (b_1, b_2) ∈ ℝ² ∖ {(0,0)}. Prove that the control system ẋ_1 = x_2 + b_1 u, ẋ_2 = −x_1 + b_2 u, is controllable.
* Prove that the control system ẋ_1 = 2x_1 + (α−3)x_2 + u_1 + u_2, ẋ_2 = 2x_2 + α(α−1)u_1, is controllable if and only if α(α−1) ≠ 0.
* Let N, m ∈ ℕ ∖ {0}, let A = (a_{ij})_{1≤i,j≤N} be an N×N real-valued matrix and B = (b_{ij})_{1≤i≤N, 1≤j≤m} be an N×m real-valued matrix, such that the pair (A,B) satisfies the Kalman condition. Let d ∈ ℕ ∖ {0}. Prove that the control system in (ℝ^d)^N given by

v̇_i(t) = ∑_{j=1}^{N} a_{ij} v_j(t) + ∑_{j=1}^{m} b_{ij} u_j(t),   i = 1, …, N,

where v_i(t) ∈ ℝ^d and u_j(t) ∈ ℝ^d, is controllable.

Many other examples are given in <cit.>.

The following assertions are equivalent:
(1) The pair (A,B) satisfies Kalman's condition rank(K(A,B)) = n.
(2) ∀λ ∈ ℂ, rank(λI − A, B) = n.
(3) ∀λ ∈ Spec(A), rank(λI − A, B) = n.
(4) For every eigenvector z of A^⊤, B^⊤ z ≠ 0.
(5) ∃ c > 0 such that ∀λ ∈ ℂ, ∀z ∈ ℂ^n: ‖(λI − A^⊤)z‖² + ‖B^⊤ z‖² ≥ c‖z‖².

Indeed, (2) ⇔ (3), (2) ⇔ (5), and not (4) ⇒ not (1), are easy. We also easily get (3) ⇔ (4) by contradiction. The implication not (1) ⇒ not (4) is proved as follows. We set N = {z ∈ ℂ^n | z^⊤ A^k B = 0 ∀k ∈ ℕ}. It is easy to establish that A^⊤ N ⊂ N. Then not (1) ⇒ N ≠ {0}, and to conclude it suffices to note that A^⊤ must have an eigenvector in N.

The condition (5) of the Hautus test is particularly interesting because it is amenable to generalizations in the infinite-dimensional setting.

Let us finally comment on how to generalize the Kalman condition in infinite dimension. This will be done properly in Lemma <ref> in Section <ref>, but we can already anticipate and provide the reader with a flavor of the new difficulties arising in infinite dimension.

Let us replace ℝ^n with a Banach space X and ℝ^m with a Banach space U. The matrix A becomes an operator A : D(A) → X (for instance, a Laplacian) and B ∈ L(U,X) is a linear bounded operator (for instance, modelling an internal control for the heat equation). The Duhamel formula (<ref>) remains valid provided e^{tA} is replaced with the semigroup generated by A (we stress that this is not an exponential if A is an unbounded operator). At this stage, the reader is advised to admit this notion and continue reading. The essential elements of semigroup theory are recalled in Chapter <ref>.

The argument of the proof of Theorem <ref> remains essentially the same, except of course the application of the Hamilton-Cayley theorem, and the Kalman matrix is replaced with a matrix of operators with an infinite number of columns (k does not stop at n−1 but goes to +∞), thus giving the statement of Lemma <ref>. There is however a serious and deep difference: at the beginning of the proof of Lemma <ref> we have used the fact that, when the vector space Ran(L_T) is a proper subset of ℝ^n, there exists a nontrivial vector ψ vanishing on it: this is a separation argument. In infinite dimension this argument may dramatically fail, because a proper subspace of X might be dense in X: in such a case, separation is not possible. Then, to make the argument valid, one has to consider the closure of Ran(L_T), as done in Lemma <ref>, leading to a result of approximate controllability.

Actually, as we will see in Part <ref> of this book, in infinite dimension we must distinguish (at least) between approximate and exact controllability. This distinction does not exist in finite dimension. At this stage, let us just say that, for linear autonomous control systems, exact controllability means Ran(E_{x_0,T}) = X, while approximate controllability means that Ran(E_{x_0,T}) is dense in X. For example, a heat equation settled on a domain with internal control exerted on a proper subset of this domain is approximately controllable in X = L² (see Part <ref>) but can never be exactly controllable in this state space because of the smoothing property of the heat semigroup.

§.§.§ With control constraints

An easy adaptation of the proof of Theorem <ref> yields the following result.

We assume that r = 0, that 0 belongs to the interior of Ω, and that the Kalman condition holds true. Let x_0 ∈ ℝ^n be arbitrary. For every T > 0, the accessible set Acc_Ω(x_0,T) contains an open neighborhood of the point e^{TA} x_0. In other words, the control system is locally controllable in time T from x_0 around e^{TA} x_0. More precisely, for every T > 0 there exists a neighborhood V of e^{TA} x_0 such that, for every x_1 ∈ V, there exists u ∈ L^∞([0,T], Ω) (which is close to 0 in the L^∞ topology) such that E_{x_0,T}(u) = x_1. Conversely, this openness property implies the Kalman condition.

Many variants of controllability properties can be obtained under various additional assumptions. For instance, we have the following easy result.

We assume that r = 0, that 0 belongs to the interior of Ω, that the Kalman condition holds true, and that all eigenvalues of A have negative real part. Let x_0 ∈ ℝ^n be arbitrary. There exist a time T > 0 and a control u ∈ L^∞([0,T], Ω) such that the solution of ẋ(t) = Ax(t) + Bu(t), x(0) = x_0, satisfies x(T) = 0.

The time T in the above result may be large. The strategy of proof consists of taking u = 0 and letting the trajectory converge asymptotically to 0; then, as soon as it is sufficiently close to 0, we apply the controllability result with controls having a small enough norm.

§.§.§ Similar systems

Let us investigate the effect of a change of basis in linear autonomous control systems.

The linear control systems ẋ_1 = A_1 x_1 + B_1 u_1 and ẋ_2 = A_2 x_2 + B_2 u_2 are said to be similar whenever there exists P ∈ GL_n(ℝ) such that A_2 = P A_1 P^{-1} and B_2 = P B_1. We then have x_2 = P x_1 and u_2 = u_1. We also say that the pairs (A_1, B_1) and (A_2, B_2) are similar.

The Kalman property is intrinsic, that is,

( B_2, A_2 B_2, …, A_2^{n-1} B_2 ) = P ( B_1, A_1 B_1, …, A_1^{n-1} B_1 ).

In particular, the rank of the Kalman matrix is invariant under a similar transform.

Let A be a matrix of size n×n, and let B be a matrix of size n×m. Then the pair (A,B) is similar to the pair (A′,B′), with

A′ = [ A_1′  A_3′ ; 0  A_2′ ]   and   B′ = [ B_1′ ; 0 ],

where A_1′ is of size r×r, B_1′ is of size r×m, and r = rank K(A,B) = rank K(A_1′, B_1′).

In other words, this result says the following. Denoting by y = [ y_1 ; y_2 ] the new coordinates, with y_1 of dimension r and y_2 of dimension n−r, the control system in the new coordinates is written as

ẏ_1 = A_1′ y_1 + B_1′ u + A_3′ y_2,
ẏ_2 = A_2′ y_2.

Since the pair (A_1′, B_1′) satisfies the Kalman condition, it follows that the part of the system in y_1 is controllable: it is called the controllable part of the system. The part in y_2 is uncontrolled and is called the uncontrollable part of the system.

Proof. We assume that the rank of K(A,B) is less than n (otherwise there is nothing to prove). The subspace

F = Ran K(A,B) = Ran B + Ran AB + ⋯ + Ran A^{n-1}B

is of dimension r and is invariant under A (this can be seen by using the Hamilton-Cayley theorem). Let G be a subspace of ℝ^n such that ℝ^n = F ⊕ G, let (f_1, …, f_r) be a basis of F and let (f_{r+1}, …, f_n) be a basis of G. Let P be the change-of-basis matrix from the basis (f_1, …, f_n) to the canonical basis of ℝ^n. Since F is invariant under A, we get

A′ = P A P^{-1} = [ A_1′  A_3′ ; 0  A_2′ ],

and since Ran B ⊂ F, we must have B′ = PB = [ B_1′ ; 0 ]. Finally, it is clear that the rank of K(A_1′, B_1′) is equal to the rank of K(A,B).

Let A be a matrix of size n×n and let B be a matrix of size n×1 (note that m = 1 here) such that (A,B) satisfies the Kalman condition. Then the pair (A,B) is similar to the pair (Ã, B̃), with

Ã = [ 0  1  ⋯  0 ; ⋮  ⋱  ⋱  ⋮ ; 0  ⋯  0  1 ; −a_n  −a_{n-1}  ⋯  −a_1 ]   and   B̃ = [ 0 ; ⋮ ; 0 ; 1 ],

where the coefficients a_i are those of the characteristic polynomial of A, that is, χ_A(X) = X^n + a_1 X^{n-1} + ⋯ + a_{n-1} X + a_n.

Note that the matrix Ã is the companion matrix of the characteristic polynomial χ_A. Theorem <ref> means that, in the new coordinates, the control system is equivalent to the scalar differential equation of order n with scalar control

x^{(n)}(t) + a_1 x^{(n-1)}(t) + ⋯ + a_n x(t) = u(t).

Proof. First, let us note that, if there exists a basis (f_1, …, f_n) in which the pair (A,B) takes the form (Ã, B̃), then we must have f_n = B (up to scaling) and

A f_n = f_{n-1} − a_1 f_n, …, A f_2 = f_1 − a_{n-1} f_n, A f_1 = −a_n f_n.

Let us then define the vectors f_1, …, f_n by

f_n = B, f_{n-1} = A f_n + a_1 f_n, …, f_1 = A f_2 + a_{n-1} f_n.

The n-tuple (f_1, …, f_n) is a basis of ℝ^n, since

Span{f_n} = Span{B},
Span{f_n, f_{n-1}} = Span{B, AB},
⋮
Span{f_n, …, f_1} = Span{B, …, A^{n-1}B} = ℝ^n.

It remains to check that A f_1 = −a_n f_n. We have

A f_1 = A² f_2 + a_{n-1} A f_n = A²(A f_3 + a_{n-2} f_n) + a_{n-1} A f_n = … = A^n f_n + a_1 A^{n-1} f_n + ⋯ + a_{n-1} A f_n = −a_n f_n,

since by the Hamilton-Cayley theorem we have A^n = −a_1 A^{n-1} − ⋯ − a_n I. In the basis (f_1, …, f_n), the pair (A,B) takes the form (Ã, B̃).

This theorem can be generalized to the case m > 1, but the normal form is not that simple.

§.§ Controllability of time-varying linear systems

In what follows, we denote by R(t,s) the state-transition matrix of the linear system ẋ(t) = A(t)x(t), that is, the unique solution of

∂_t R(t,s) = A(t) R(t,s),   R(s,s) = I_n,

for t, s ∈ ℝ. Note that, in the autonomous case A(t) ≡ A, we have R(t,s) = e^{(t-s)A}. But in general the state-transition matrix cannot be computed explicitly. Recall that R(t,s) R(s,τ) = R(t,τ) for all t, s, τ ∈ ℝ; in particular, R(t,s) = R(s,t)^{-1}.

§.§.§ Case without control constraints

We assume that Ω = ℝ^m (no constraints on the control).

The control system ẋ(t) = A(t)x(t) + B(t)u(t) + r(t) is controllable in time T (from any initial point x_0) if and only if the Gramian matrix

G_T = ∫_0^T R(T,t) B(t) B(t)^⊤ R(T,t)^⊤ dt

is invertible.

The invertibility condition of G_T depends on T but not on the initial point. Therefore, if a linear instationary control system is controllable from x_0 in time T, then it is controllable from any other initial point (but with the same time T). It may fail to be controllable in time T′ < T (take for instance B(t) = 0 for 0 ≤ t ≤ T′). Anyway, controllability in time T implies controllability in any time T′ ≥ T: indeed, at time T the range of the end-point mapping is equal to the whole ℝ^n and it cannot decrease in larger time.

Note that G_T = G_T^⊤ and that

ψ^⊤ G_T ψ = ⟨G_T ψ, ψ⟩ = ∫_0^T ‖B(t)^⊤ R(T,t)^⊤ ψ‖² dt ≥ 0   ∀ψ ∈ ℝ^n,

i.e., G_T is a symmetric nonnegative matrix of size n×n. Theorem <ref> states that the system is controllable in time T if and only if G_T is positive definite. By a diagonalization argument, this is equivalent to the existence of C_T > 0 (the lowest eigenvalue of G_T) such that

∫_0^T ‖B(t)^⊤ R(T,t)^⊤ ψ‖² dt ≥ C_T ‖ψ‖²   ∀ψ ∈ ℝ^n.

This is an observability inequality. The system is controllable in time T if and only if the above observability inequality holds.

The important concept of observability is not developed in this book. We recall it briefly for a linear control system ẋ(t) = A(t)x(t) + B(t)u(t) + r(t), and we refer to <cit.> for more details.
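To close this chapter, the two controllability tests discussed above, the Kalman rank condition and the positivity of the Gramian, are straightforward to check numerically. The following Python sketch (ours; it treats the autonomous case, where R(T,t) = e^{(T-t)A}, and uses R = L = C = 1 in the RLC example, values which are our own choice) illustrates both.

```python
import numpy as np
from scipy.linalg import expm

def kalman_matrix(A, B):
    """K(A,B) = (B, AB, ..., A^(n-1)B), of size n x nm."""
    n = A.shape[0]
    blocks, M = [], B.copy()
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

def is_controllable(A, B):
    """Kalman rank condition for the autonomous system x' = Ax + Bu."""
    return np.linalg.matrix_rank(kalman_matrix(A, B)) == A.shape[0]

def gramian(A, B, T, steps=2000):
    """G_T = int_0^T e^((T-t)A) B B^T e^((T-t)A^T) dt, autonomous case,
    approximated by the trapezoidal rule."""
    ts = np.linspace(0.0, T, steps)
    vals = []
    for t in ts:
        M = expm((T - t) * A) @ B
        vals.append(M @ M.T)
    return np.trapz(np.array(vals), ts, axis=0)

# RLC circuit from the example above, with R = L = C = 1:
A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # [[0, 1], [-1/LC, -R/L]]
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))                       # True
print(np.linalg.eigvalsh(gramian(A, B, T=1.0)))    # all positive -> controllable
```

Positive definiteness of G_T is exactly the observability inequality stated above, with C_T its lowest eigenvalue.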
For a general version of this existence and uniqueness theorem, we refer to <cit.> and <cit.>.We stress that the differential equation (<ref>) holds for almost every t in the maximal interval. Given a time T>0 and an initial point x_0, we say that a control u∈ L^∞([0,T],^m) is admissible if the corresponding trajectory x(·), such that x(0)=x_0, is well defined on [0,T].We say that the control system is linear if f(t,x,u) = A(t)x+B(t)u+r(t), with A(t) a n× n matrix, B(t) a n× m matrix (with real coefficients), r(t)∈^n, and in that case we will assume that t↦ A(t), t↦ B(t) and t↦ r(t) are of class L^∞ on every compact interval (actually, L^1 would be enough). The linear control system is said to be autonomous if A(t)≡ A and B(t)≡ B, otherwise it is said to be instationary or time-varying. Note that, for linear control systems, there is no blow-up in finite time (i.e., admissibility holds true on any interval). Let x_0∈^n and let T>0 arbitrary. A control u∈ L^∞([0,T],Ω) is said to beadmissible on [0,T] if the trajectory x_u(·), solution of (<ref>), corresponding to the control u, and such that x_u(0)=x_0, is well defined on [0,T]. The end-point mapping E_x_0,T is then defined by E_x_0,T(u) = x_u(T). The set of admissible controls on [0,T] is denoted by 𝒰_x_0,T,Ω.It is the domain of definition of E_x_0,T (indeed one has to be careful with blow-up phenomena), when considering controls taking their values in Ω. The control system (<ref>) is said to be (globally) controllable from x_0 in time T if E_x_0,T(𝒰_x_0,T,Ω)=^n, i.e., if E_x_0,T is surjective.Accordingly, defining the accessible set from x_0 in time T by Acc_Ω(x_0,T) = E_x_0,T(𝒰_x_0,T,Ω), the control system (<ref>) is (globally) controllable from x_0 in time T if Acc_Ω(x_0,T) = ^n.Since such a global surjectivity property is certainly a very strong property which may not hold in general, it is relevant to define local controllability. Let x_1=E_x_0,T(u̅) for some u̅∈𝒰_x_0,T,Ω. The control system (<ref>) is said to be locally controllable from x_0 in time T around x_1 if x_1 belongs to the interior of Acc_Ω(x_0,T), i.e., if E_x_0,T is locally surjective around x_1. Other variants of controllability can be defined. A clear picture will come from the geometric representation of the accessible set.In this chapter we will provide several tools in order to analyze the controllability properties of a control system, first for linear (autonomous, and then instationary) systems, and then for nonlinear systems.§ CONTROLLABILITY OF LINEAR SYSTEMSThroughout this section, we consider the linear control system ẋ(t)=A(t)x(t)+B(t)u(t)+r(t), with u∈ L^∞([0,+∞),Ω). Since there is no finite-time blow-up for linear systems, we have 𝒰_x_0,T,Ω = L^∞([0,T],Ω) for every T>0. §.§ Controllability of autonomous linear systemsIn this section, we assume that A(t)≡ A and B(t)≡ B, where A is a n× n matrix and B is a n× m matrix.§.§.§ Without control constraints: Kalman conditionIn this section, we assume that Ω=^m (no control constraint). The celebrated Kalman theorem provides a necessary and sufficient condition for autonomous linear control systems without control constraint. We assume that Ω=^m (no control constraint). 
The control system ẋ(t)=Ax(t)+Bu(t)+r(t) is controllable (from any initial point, in arbitrary time T>0) if and only if the Kalman matrixK(A,B) = ( B , AB , … , A^n-1B )(which is of size n× nm) is of maximal rank n.Given any x_0∈^n, T>0 and u∈ L^∞([0,T],^m), the Duhamel formula givesE_x_0,T(u) = x_u(T) = e^TAx_0+∫_0^T e^(T-t)Ar(t) dt + L_Tuwhere L_T:L^∞([0,T],^m)→^n is the linear continuous operator defined by L_Tu = ∫_0^T e^(T-t)ABu(t) dt. Clearly, the system is controllable in time T if and only if L_T is surjective. Then to prove the theorem it suffices to prove the following lemma. The Kalman matrix K(A,B) is of rank n if and only if L_T is surjective.We argue by contraposition. If L_T is not surjective, then there exists ψ∈^n∖{0} which is orthogonal to the range of L_T, that is,ψ^⊤∫_0^T e^(T-t)ABu(t)dt=0∀ u∈ L^∞([0,T],^m).This implies that ψ^⊤ e^(T-t)AB=0, for every t∈[0,T]. Taking t=T yields ψ^⊤ B=0. Then, derivating first with respect to t, and taking t=T then yields ψ^⊤ AB=0. By immediate iteration we get that ψ^⊤ A^kB=0, for every k∈. In particular ψ^⊤ K(A,B)=0 and thus the rank of K(A,B) is less than n.Conversely, if the rank of K(A,B) is less than n, then there exists ψ∈^n∖{0} such that ψ^⊤ K(A,B)=0, and therefore ψ^⊤ A^kB=0, for every k∈{0,1,…,n-1}. From the Hamilton-Cayley theorem, there exist real numbers a_0,a_1,…,a_n-1 such that A^n=∑_k=0^n-1 a_k A^k. Therefore we get easily that ψ^⊤ A^nB=0. Then, using the fact that A^n+1=∑_k=1^n a_k A^k, we get as well that ψ^⊤ A^n+1B=0. By immediate recurrence, we infer that ψ^⊤ A^kB=0, for every k∈, and therefore, using the series expansion of the exponential, we get that ψ^⊤ e^(T-t)AB=0, for every t∈[0,T]. We conclude that ψ^⊤ L_T u=0 for every u∈ L^∞([0,T],^m) and thus that L_T is not surjective.Theorem <ref> is proved.Note that the Kalman condition is purely algebraic and is easily checkable.The Kalman condition does neither depend on T nor on x_0. Therefore, if an autonomous linear control system is controllable from x_0 in time T>0, starting at x_0, then it is controllable from any other x_0' in any time T'>0, in particular, in arbitrarily small time. This is due to the fact that there are no control constraints. When there are some control constraints one cannot hope to have such a property.Consider an RLC circuit with a resistor with resistance R, a coil with inductance L and a capacitor with capacitance C, connected in series. We control the input voltage u(t) of the electrical circuit. Denoting by i(t) the intensity, by additivity of voltages we haveRi(t) + Ldi/dt(t) + 1/C∫^t i(s)ds = u(t).Setting x_1(t)=∫^t i(s)ds, x_2(t)=ẋ_1(t)=i(t) and x(t)=[ x_1(t); x_2(t) ], we find the control system ẋ(t)=Ax(t)+Bu(t) with A=[ 0 1; -1/LC-R/L ],B=[ 0; 1 ] .The Kalman condition is then easily checked. This simple but illuminating example shows the importance of the RLC device in electricity. Note that the RLC circuit is paradigmatic for the three main operations in mathematical analysis: mutliplication operator, derivation operator and integration operator. [Kalman condition computations]* Let m>0 and d,k≥ 0. Prove that the system consisting of the controlled spring: mẍ+dẋ+kx=u,is controllable. * Let k_1,k_2≥ 0. Prove that the system of coupled springs (two-car train) given byẍ_1=-k_1x_1+k_2(x_2-x_1),ẍ_2=-k_2(x_2-x_1)+u,is controllable if and only if k_2>0. * Let (b_1,b_2)∈^2∖{(0,0)}. Prove that the control systemẋ_1 = x_2+b_1 u, ẋ_2 = -x_1+b_2 u,is controllable. 
* Prove that the control systemẋ_1=2x_1+(α-3)x_2+u_1+u_2, ẋ_2=2x_2+α(α-1)u_1,is controllable if and only if α(α-1)≠ 0. * Let N,m∈∖{0}, let A=(a_ij)_1≤ i,j≤ N be a N× N real-valued matrix and B=(b_ij)_1≤ i≤ N, 1≤ j≤ m be a N× m real-valued matrix, such that the pair (A,B) satisfies the Kalman condition. Let d∈∖{0}. Prove that the control system in (^d)^N given byv̇_i(t) = ∑_j=1^N a_ij v_j(t) + ∑_j=1^m b_ij u_j(t),i=1… N,where v_i(t)∈^d and u_j(t)∈^d, is controllable.Many other examples are given in <cit.>. The following assertions are equivalent: (1) The pair (A,B) satisfies Kalman's condition rank(K(A,B))=n.(2) ∀λ∈rank(λ I-A,B)=n.(3) ∀λ∈Spec(A)rank(λ I-A,B)=n.(4) ∀ z eigenvector of A^⊤, B^⊤ z≠ 0.(5) ∃ c>0 | ∀λ∈∀ z∈^n‖(λ I-A^⊤)z‖^2+‖ B^⊤ z‖^2≥ c‖ z‖^2.Indeed, (2)⇔ (3), (2)⇔ (5), and not (4) ⇒ not (1), are easy. We also easily get (3)⇔ (4) by contradiction. The implication not (1) ⇒ not (4) is proved as follows. We set N={z∈^n | z^⊤ A^kB=0∀ k∈}. It is easy to establish that A^⊤ N⊂ N. Then, not (1) ⇒ N≠{0}, and then to conclude it suffices to note that A^⊤ must have an eigenvector in N.The condition (5) of the Hautus test is particularly interesting because it is amenable to generalizations in the infinite-dimensional setting. Let us finally comment on how to generalize the Kalman condition in infinite dimension.This will be done properly in Lemma <ref> in Section <ref>, but we can already anticipate and provide the reader with a flavor of the new difficulties arising in infinite dimension.Let us replace ^n with a Banach space X and ^m with a Banach space U. The matrix A becomes an operator A:D(A)→ X (for instance, a Laplacian) and B∈ L(U,X) is a linear bounded operator (for instance, modelling an internal control for the heat equation). The Duhamel formula (<ref>) remains valid provided e^tA is replaced with the semigroup generated by A (we stress that this is not an exponential if A is an unbounded operator). At this stage, the reader is advised to admit this notion and continue reading. The essential elements of semigroup theory are recalled in Chapter <ref>.The argument of the proof of Theorem <ref> remains essentially the same, except of course the application of the Hamilton-Cayley theorem, and the Kalman matrix is replaced with a matrix of operators with an infinite number of columns (k does not stop at n-1 but goes to +∞), thus giving the statement of Lemma <ref>. There is however a serious and deep difference: at the beginning of the proof of Lemma <ref> we have used the fact that, when the vector space Ran(L_T) is a proper subset of ^n, there exists a nontrivial vector ψ vanishing it: this is a separation argument. In infinite dimension this argument may dramatically fail, because a proper subset of X might be dense in X: in such a case, separation is not possible. Then, to make the argument valid, one has to consider the closure Ran(L_T) of Ran(L_T), as done in Lemma <ref>, leading to a result of approximate controllability.Actually, as we will see in Part <ref> of that book, in infinite dimension we must distinguish (at least) between approximate and exact controllability. This distinction does not exist in finite dimension. At this stage, let us just say that, for linear autonomous control systems, exact controllability means Ran(E_T,x_0)=X while approximate controllability means Ran(E_T,x_0)=X. 
To illustrate this distinction, a heat equation settled on a domain, with internal control exerted on a proper subset of this domain, is approximately controllable in X=L^2 (see Part <ref>) but can never be exactly controllable in this state space, because of the smoothing property of the heat semigroup.

§.§.§ With control constraints

An easy adaptation of the proof of Theorem <ref> yields the following result.

We assume that r=0, that 0 belongs to the interior Ω̊ of Ω, and that the Kalman condition holds true. Let x_0∈^n be arbitrary. For every T>0, the accessible set Acc_Ω(x_0,T) contains an open neighborhood of the point e^TAx_0. In other words, the control system is locally controllable in time T from x_0 around e^TAx_0. More precisely, for every T>0 there exists a neighborhood V of e^TAx_0 such that, for every x_1∈ V, there exists u∈ L^∞([0,T],Ω) (which is close to 0 in L^∞ topology) such that E_x_0,T(u)=x_1. Conversely, this openness property implies the Kalman condition.

Many variants of controllability properties can be obtained under various additional assumptions. For instance we have the following easy result.

We assume that r=0, that 0∈Ω̊, that the Kalman condition holds true, and that all eigenvalues of A have negative real part. Let x_0∈^n be arbitrary. There exists a time T>0 and a control u∈ L^∞([0,T],Ω) such that the solution of ẋ(t)=Ax(t)+Bu(t), x(0)=x_0, satisfies x(T)=0.

The time T in the above result may be large. The strategy of proof consists of taking u=0 and letting the trajectory converge asymptotically to 0; then, as soon as it is sufficiently close to 0, we apply the controllability result with controls of small enough norm.

§.§.§ Similar systems

Let us investigate the effect of a change of basis in linear autonomous control systems.

The linear control systems ẋ_1=A_1x_1+B_1u_1 and ẋ_2=A_2x_2+B_2u_2 are said to be similar whenever there exists P∈ GL_n(ℝ) such that A_2=PA_1P^-1 and B_2=PB_1. We have then x_2=Px_1 and u_2=u_1. We also say that the pairs (A_1,B_1) and (A_2,B_2) are similar.

The Kalman property is intrinsic, that is,

(B_2, A_2B_2, …, A_2^n-1B_2) = P (B_1, A_1B_1, …, A_1^n-1B_1).

In particular, the rank of the Kalman matrix is invariant under a similarity transformation.

Let A be a matrix of size n× n, and let B be a matrix of size n× m. Then the pair (A,B) is similar to the pair (A',B'), with

A'=[ A_1' A_3'; 0 A_2' ] and B'=[ B_1'; 0 ]

where A_1' is of size r× r, B_1' is of size r× m, and r = rank K(A,B) = rank K(A_1',B_1').

In other words, this result says the following. Denoting by y=[ y_1; y_2 ] the new coordinates, with y_1 of dimension r and y_2 of dimension n-r, the control system in the new coordinates is written as

ẏ_1 = A_1' y_1 + B_1'u + A_3'y_2, ẏ_2 = A_2'y_2.

Since the pair (A_1',B_1') satisfies the Kalman condition, the part of the system in y_1 is controllable: it is called the controllable part of the system. The part in y_2 is uncontrolled and is called the uncontrollable part of the system.

We assume that the rank of K(A,B) is less than n (otherwise there is nothing to prove). The subspace

F = Ran K(A,B) = Ran B + Ran AB + ⋯ + Ran A^n-1B

is of dimension r, and is invariant under A (this can be seen by using the Hamilton-Cayley theorem). Let G be a subspace of ^n such that ^n=F⊕ G, let (f_1,…,f_r) be a basis of F and let (f_r+1,…,f_n) be a basis of G. Let P be the change-of-basis matrix from the basis (f_1,…,f_n) to the canonical basis of ^n. Since F is invariant under A, we get

A'=PAP^-1=[ A_1' A_3'; 0 A_2' ]

and since Ran B⊂ F, we must have B'=PB=[ B_1'; 0 ].
Finally, it is clear that the rank of K(A_1',B_1') is equal to the rank of K(A,B).

Let A be a matrix of size n× n and let B be a matrix of size n× 1 (note that m=1 here) such that (A,B) satisfies the Kalman condition. Then the pair (A,B) is similar to the pair (Ã,B̃), with

Ã=[ 0 1 ⋯ 0; ⋮ ⋱ ⋱ ⋮; 0 ⋯ 0 1; -a_n -a_n-1 ⋯ -a_1 ] and B̃ = [ 0; ⋮; 0; 1 ]

where the coefficients a_i are those of the characteristic polynomial of A, that is, χ_A(X) = X^n + a_1 X^n-1 +⋯ + a_n-1X + a_n.

Note that the matrix Ã is the companion matrix of the characteristic polynomial χ_A. Theorem <ref> means that, in the new coordinates, the control system is equivalent to the scalar differential equation of order n with scalar control x^(n)(t)+a_1x^(n-1)(t)+⋯+a_nx(t)=u(t).

First, let us note that, if there exists a basis (f_1,…,f_n) in which the pair (A,B) takes the form (Ã,B̃), then we must have f_n=B (up to scaling) and

Af_n=f_n-1-a_1f_n, …, Af_2=f_1-a_n-1f_n, Af_1=-a_nf_n.

Let us then define the vectors f_1,…,f_n by

f_n=B, f_n-1=Af_n+a_1f_n, …, f_1=Af_2+a_n-1f_n.

The n-tuple (f_1,…,f_n) is a basis of ^n, since

Span{f_n} = Span{B},
Span{f_n,f_n-1} = Span{B,AB},
⋮
Span{f_n,…,f_1} = Span{B,…,A^n-1B} = ^n.

It remains to check that Af_1=-a_nf_n. We have

Af_1 = A^2f_2+a_n-1Af_n = A^2(Af_3+a_n-2f_n)+a_n-1Af_n = … = A^nf_n + a_1A^n-1f_n + ⋯ + a_n-1Af_n = -a_nf_n

since by the Hamilton-Cayley theorem we have A^n=-a_1A^n-1-⋯-a_nI. In the basis (f_1,…,f_n), the pair (A,B) takes the form (Ã,B̃).

This theorem can be generalized to the case m>1 but the normal form is not that simple.

§.§ Controllability of time-varying linear systems

In what follows, we denote by R(t,s) the state-transition matrix of the linear system ẋ(t)=A(t)x(t), that is, the unique solution of

∂_t R(t,s)=A(t)R(t,s), R(s,s)=I_n,

for t,s∈ℝ. Note that, in the autonomous case A(t)≡ A, we have R(t,s)=e^(t-s)A. But in general the state-transition matrix cannot be computed explicitly. Recall that R(t,s)R(s,τ)=R(t,τ) for all t,s,τ∈ℝ; in particular, R(t,s)=R(s,t)^-1.

§.§.§ Case without control constraints

We assume that Ω=^m (no constraints on the control).

The control system ẋ(t)=A(t)x(t)+B(t)u(t)+r(t) is controllable in time T (from any initial point x_0) if and only if the Gramian matrix

G_T = ∫_0^T R(T,t)B(t) B(t)^⊤ R(T,t)^⊤ dt

is invertible.

The invertibility condition of G_T depends on T but not on the initial point. Therefore, if a linear instationary control system is controllable from x_0 in time T then it is controllable from any other initial point (but with the same time T). It may fail to be controllable in time T'<T (take for instance B(t)=0 for 0≤ t≤ T'). Anyway, controllability in time T implies controllability in any time T'≥ T: indeed, at time T the range of the end-point mapping is equal to the whole ^n, and it cannot decrease for larger times.

Note that G_T=G_T^⊤ and that

ψ^⊤ G_Tψ = ⟨ G_Tψ,ψ⟩ = ∫_0^T ‖ B(t)^⊤ R(T,t)^⊤ψ‖^2 dt ≥ 0 ∀ψ∈^n,

i.e., G_T is a symmetric nonnegative matrix of size n× n. Theorem <ref> states that the system is controllable in time T if and only if G_T is positive definite. By a diagonalization argument, this is equivalent to the existence of C_T>0 (the lowest eigenvalue of G_T) such that

∫_0^T ‖ B(t)^⊤ R(T,t)^⊤ψ‖^2 dt ≥ C_T ‖ψ‖^2 ∀ψ∈^n.

This is an observability inequality. The system is controllable in time T if and only if the above observability inequality holds.

The important concept of observability is not developed in this book. We recall it briefly for a linear control system ẋ(t)=A(t)x(t)+B(t)u(t)+r(t) and we refer to <cit.> for more details.
Assume that, at any time t, we cannot observe the whole state x(t) but only a part of it, y(t)=C(t)x(t), where C(t) is an m× n matrix. The control system, with its output y(·), is said to be observable in time T if a given output y(·) observed on [0,T] can be generated by only one initial condition x_0. While controllability corresponds to a surjectivity property, observability thus corresponds to an injectivity property. Actually, the system ẋ(t)=A(t)x(t)+B(t)u(t)+r(t) with output y(t)=C(t)x(t) is observable in time T if and only if the control system ẋ(t)=A(t)^⊤ x(t) + C(t)^⊤ u(t) is controllable in time T: this is the so-called controllability–observability duality (see Lemma <ref> in Part <ref>, Section <ref>, for a general mathematical statement establishing this duality in a general context). Hence, observability in time T is characterized by the observability inequality (<ref>) with B(t)^⊤ replaced by C(t).

Let x_0∈^n be arbitrary. Any solution of the control system, associated with some control u and starting at x_0, satisfies at time T

x_u(T)=x^*+∫_0^T R(T,t)B(t)u(t) dt with x^*=R(T,0)x_0+∫_0^T R(T,t)r(t) dt.

Let us assume that G_T is invertible and let us prove that the control system is controllable in time T. Let x_1∈^n be any target point. We seek an appropriate control u in the form u(t)=B(t)^⊤ R(T,t)^⊤ψ, with ψ∈^n to be chosen such that x_u(T)=x_1. With this control, we have x_u(T)=x^*+G_Tψ, and since G_T is invertible it suffices to take ψ=G_T^-1(x_1-x^*).

Conversely, if G_T is not invertible, then by Remark <ref> there exists ψ∈^n∖{0} such that ψ^⊤ G_Tψ=∫_0^T‖ B(t)^⊤ R(T,t)^⊤ψ‖^2 dt=0, hence ψ^⊤ R(T,t)B(t)=0 for almost every t∈[0,T]. It follows that ψ^⊤∫_0^T R(T,t)B(t)u(t)dt=0 for every u∈ L^∞([0,T],^m), and thus ψ^⊤ (x_u(T)-x^*)=0, which means that, as u varies, x_u(T) remains in a proper affine subspace of ^n (namely, x^*+ψ^⊥). Hence the system is not controllable in time T.

This theorem can be proved in an easier and more natural way with the Pontryagin maximum principle (PMP), within an optimal control viewpoint: anticipating a bit, p(t)=R(T,t)^⊤ψ is the adjoint vector, solution of ṗ(t)=-A(t)^⊤ p(t) such that p(T)=ψ, obtained by applying the PMP with the cost being the square of the L^2 norm of the control; actually, the control used in the above proof is optimal for the L^2 norm. The above proof also leads in the infinite-dimensional setting to the HUM method (see Part <ref>).

If the system is autonomous (A(t)≡ A, B(t)≡ B) then R(t,s)= e^(t-s)A and thus

G_T=∫_0^T e^(T-t)ABB^⊤ e^(T-t)A^⊤ dt=∫_0^T e^tABB^⊤ e^tA^⊤ dt.

In that case, since the controllability (Kalman) condition does not depend on the time, it follows that G_T_1 is invertible if and only if G_T_2 is invertible, which is not evident from the above integral form (this fact is not true in general in the instationary case).

In the autonomous case, setting p(t)=e^(T-t)A^⊤ψ (so that ‖ψ‖=‖ p(T)‖), the observability inequality (<ref>) can be written as

∫_0^T ‖ B^⊤ p(t)‖^2 dt ≥ C_T ‖ p(T)‖^2 ∀ψ∈^n.

Note that, setting λ(t)=p(T-t), we have λ̇(t)=A^⊤λ(t), λ(0)=ψ, and (<ref>) can also be written as ∫_0^T ‖ B^⊤λ(t)‖^2 dt ≥ C_T ‖λ(0)‖^2. This observability inequality is appropriate to be generalized in the infinite-dimensional setting, replacing e^tA with a semigroup, and will be of instrumental importance in the derivation of the so-called HUM method (see Part <ref>).

Let us now provide a final theorem which generalizes the Kalman condition in the instationary case. We assume that Ω=^m (no constraint on the control).
Consider the control system ẋ(t)=A(t)x(t)+B(t)u(t)+r(t) where t↦ A(t) and t↦ B(t) are of class C^∞. We define the sequence of matrices

B_0(t)=B(t), B_k+1(t)=A(t)B_k(t)-dB_k/dt(t), k∈ℕ.

* If there exists t∈[0,T] such that

Span {B_k(t)v | v∈^m, k∈ℕ}=^n

then the system is controllable in time T.
* If t↦ A(t) and t↦ B(t) are moreover analytic (i.e., expandable in a convergent power series at any t), then the system is controllable in time T if and only if (<ref>) is satisfied for every t∈[0,T].

The proof of Theorem <ref> readily follows from the Hamiltonian characterization of singular trajectories (see <cit.>, see also the proof of the weak Pontryagin Maximum Principle in Section <ref>).

Thanks to Theorem <ref>, it is easy to prove that the control system ẋ(t)=A(t)x(t)+B(t)u(t), with

A(t)=[ t 1 0; 0 t^3 0; 0 0 t^2 ], B(t)=[ 0; 1; 1 ],

is controllable in any time T>0, while the control system

ẋ(t) = -y(t)+u(t)cos t, ẏ(t) = x(t)+u(t)sin t,

is never controllable (Theorem <ref> can also be applied).

§.§.§ Case with control constraints

When there are some control constraints, we can easily adapt Theorems <ref> and <ref>, as in Proposition <ref>, to obtain local controllability results.

We assume that r=0 and that 0∈Ω̊. Let x_0∈^n and T>0 be arbitrary.
* The control system ẋ(t)=A(t)x(t)+B(t)u(t) is locally controllable in time T around the point R(T,0)x_0 if and only if the Gramian matrix G_T is invertible.
* Assume that t↦ A(t) and t↦ B(t) are C^∞. If (<ref>) is satisfied then the control system ẋ(t)=A(t)x(t)+B(t)u(t) is locally controllable in time T around R(T,0)x_0; the converse is true if t↦ A(t) and t↦ B(t) are analytic.

§.§ Geometry of accessible sets

Consider the control system ẋ(t)=A(t)x(t)+B(t)u(t)+r(t) in ^n with controls u taking their values in a compact subset Ω of ^m.

For every x_0∈^n and every t≥ 0, the accessible set Acc_Ω(x_0,t) is compact, convex and depends continuously on t for the Hausdorff topology.[Denoting by d the Euclidean distance of ^n, given any two compact subsets K_1 and K_2 of ^n, the Hausdorff distance d_H between K_1 and K_2 is defined by d_H(K_1,K_2)=sup( sup_y∈ K_2 d(y,K_1), sup_y∈ K_1 d(y,K_2) ).]

Note that the convexity of the accessible set holds true even though Ω is not assumed to be convex. This property is not obvious and follows from a Lyapunov lemma (itself based on the Krein-Milman theorem in infinite dimension; see <cit.>). Actually this argument leads to Acc_Ω(x_0,t)=Acc_Conv(Ω)(x_0,t)=Acc_∂Ω(x_0,t), where Conv(Ω) is the closed convex hull of Ω and ∂Ω is the boundary of Ω. This illustrates the so-called bang-bang principle (see Section <ref>). In infinite dimension those questions are much more difficult (see <cit.>).

We first assume that Ω is convex. In this case, we have

Acc_Ω(x_0,t) = R(t,0)x_0+∫_0^t R(t,s)r(s)ds + L_t ( L^∞([0,T],Ω) )

where the linear continuous operator L_t:L^∞([0,T],^m)→^n is defined by L_t u = ∫_0^t R(t,s) B(s) u(s) ds. The convexity of Acc_Ω(x_0,t) follows by linearity from the convexity of the set L^∞([0,T],Ω).

Let us now prove the compactness of Acc_Ω(x_0,t). Let (x_n^1)_n∈ℕ be a sequence of points of Acc_Ω(x_0,t). For every n∈ℕ, let u_n∈ L^∞([0,T],Ω) be a control steering the system from x_0 to x_n^1 in time t, and let x_n(·) be the corresponding trajectory. We have

x_n^1=x_n(t)=R(t,0)x_0+∫_0^t R(t,s)(B(s)u_n(s)+r(s))ds.

Since Ω is compact, the sequence (u_n)_n∈ℕ is bounded in L^2([0,T],^m). Since this space is reflexive (see <cit.>), by weak compactness we infer that a subsequence of (u_n)_n∈ℕ converges weakly to some u∈ L^2([0,T],^m).
Since Ω is assumed to be convex, we have moreover that u∈ L^2([0,T],Ω) (note that one has also u∈ L^∞([0,T],Ω) because Ω is compact). Besides, using (<ref>) and the control system, we easily see that the sequence (x_n(·))_n∈ℕ is bounded in H^1([0,t],^n). Since this Sobolev space is reflexive and is compactly imbedded in C^0([0,t],^n), we deduce that a subsequence of (x_n(·))_n∈ℕ converges uniformly to some x(·) on [0,t]. Passing to the limit in (<ref>), we get

x(t)=R(t,0)x_0+∫_0^t R(t,s)(B(s)u(s)+r(s)) ds

and in particular (a subsequence of) x_n^1=x_n(t) converges to x(t)∈Acc_Ω(x_0,t). The compactness property is proved.

Let us prove the continuity in time of Acc_Ω(x_0,t) in Hausdorff topology, i.e., that for any ε>0 there exists δ>0 such that, for all t_1,t_2∈ℝ satisfying 0≤ t_1<t_2 and t_2-t_1≤δ, we have:
(1) d(y,Acc_Ω(x_0,t_1))≤ε for every y∈Acc_Ω(x_0,t_2);
(2) d(y,Acc_Ω(x_0,t_2))≤ε for every y∈Acc_Ω(x_0,t_1).

Let us prove (1) ((2) being similar). Let y∈Acc_Ω(x_0,t_2). It suffices to prove that there exists z∈Acc_Ω(x_0,t_1) such that d(y,z)≤ε. By definition of Acc_Ω(x_0,t_2), there exists u∈ L^∞([0,T],Ω) such that the corresponding trajectory, starting at x_0, satisfies x(t_2)=y. Then z=x(t_1) is suitable. Indeed,

x(t_2)-x(t_1) = R(t_2,0)x_0+∫_0^t_2 R(t_2,s)(B(s)u(s)+r(s))ds - R(t_1,0)x_0 - ∫_0^t_1 R(t_1,s)(B(s)u(s)+r(s))ds
= ∫_t_1^t_2 R(t_2,s)(B(s)u(s)+r(s))ds + ( R(t_2,0)-R(t_1,0) ) ( x_0+∫_0^t_1 R(0,s)(B(s)u(s)+r(s))ds )

where we have used that R(t_i,s)=R(t_i,0)R(0,s). If t_2-t_1 is small then the first term of the above sum is small by continuity, and the second term is small by continuity of t↦ R(t,0). The result follows.

In the general case where Ω is only compact (but not necessarily convex), the proof is more difficult and uses the Lyapunov lemma in measure theory (see, e.g., <cit.>) and more generally the Aumann theorem (see, e.g., <cit.>), from which, recalling that L_T u = ∫_0^T R(T,t) B(t) u(t) dt, it follows that

{ L_Tu | u∈ L^∞([0,T],Ω) } = { L_Tu | u∈ L^∞([0,T],∂Ω) } = { L_Tu | u∈ L^∞([0,T],Conv(Ω)) }

and moreover that these sets are compact and convex. The result follows.

Theorem <ref> allows one to define the concept of minimal time: when there are some compact control constraints, due to the compactness of the accessible set, one cannot steer the control system from x_0 to another point x_1 in arbitrarily small time; a minimal positive time is required. Another question of interest is to know whether the control system is controllable in time not fixed, that is: when is the union of all sets Acc_Ω(x_0,t), over t≥ 0, equal to the whole ^n? This question is difficult.

§ CONTROLLABILITY OF NONLINEAR SYSTEMS

§.§ Local controllability results

Preliminaries: end-point mapping. It is easy[This follows from usual finite-time blow-up arguments on ordinary differential equations, and from the usual Picard-Lindelöf (Cauchy-Lipschitz) theorem with parameters, the parameter being here a control in a Banach set (see for instance <cit.>).] to establish that the set 𝒰_x_0,T,^m, endowed with the standard topology of L^∞([0,T],^m), is open, and that the end-point mapping E_x_0,T (see Definition <ref>) is of class C^1 on 𝒰_x_0,T,^m (it is C^p whenever f is C^p). Note that, for every t≥ 0, the accessible set is Acc_Ω(x_0,t) = E_x_0,t(𝒰_x_0,t,Ω). In what follows we often denote by x_u(·) a trajectory solution of (<ref>) corresponding to the control u.

Let x_0∈^n and let u∈𝒰_x_0,T,^m.
The (Fréchet) differential[We recall that, given two Banach spaces X and Y, a mapping F:X→ Y is said to be Fréchet differentiable at x∈ X if there exists a linear continuous mapping dF(x):X→ Y such that F(x+h)=F(x)+dF(x).h+o(h) for every h∈ X. Here, the notation dF(x).h means that the linear operator dF(x) is applied to h.] dE_x_0,T(u):L^∞([0,T],^m)→^n is given by

dE_x_0,T(u).δ u = δ x(T) = ∫_0^T R(T,t)B(t)δ u(t) dt

where δ x(·) is the solution of the so-called linearized system along (x_u(·),u(·)),

δẋ(t) = A(t)δ x(t)+B(t)δ u(t), δ x(0)=0,

with

A(t)=∂ f/∂ x(t,x_u(t),u(t)), B(t)=∂ f/∂ u(t,x_u(t),u(t))

(which are respectively of size n× n and n× m), and R(·,·) is the state-transition matrix of the linearized system, defined by (<ref>).

We have E_x_0,T(u+δ u)=E_x_0,T(u)+dE_x_0,T(u).δ u+o(δ u) by definition of the Fréchet differential. In this first-order Taylor expansion, E_x_0,T(u)=x_u(T) and E_x_0,T(u+δ u)=x_u+δ u(T). We want to compute dE_x_0,T(u).δ u, which is equal, at the first order, to x_u+δ u(T)-x_u(T). In what follows, we set δ x(t)=x_u+δ u(t)-x_u(t). We have

δẋ(t) = f(t,x_u+δ u(t),u(t)+δ u(t)) - f(t,x_u(t),u(t))
= f(t,x_u(t)+δ x(t),u(t)+δ u(t)) - f(t,x_u(t),u(t))
= ∂ f/∂ x(t,x_u(t),u(t)).δ x(t) + ∂ f/∂ u(t,x_u(t),u(t)).δ u(t) + o(δ x(t),δ u(t))

so that, at the first order, we identify the linearized system. By integration (note that the remainder terms can be rigorously handled by standard Gronwall arguments, not detailed here), we get δ x(T)=∫_0^T R(T,t)B(t)δ u(t)dt, as expected. Note that this term defines a linear continuous operator and thus is indeed the Fréchet differential of the end-point mapping.

This theorem says that the differential of the end-point mapping at u is the end-point mapping of the linearized system along (x_u(·),u(·)). This is similar to the well-known result in dynamical systems theory stating that the differential of the flow is the flow of the linearized system. This remark has interesting consequences in terms of local controllability properties.

Local controllability results along a trajectory. Let x_0∈^n and let u̅∈𝒰_x_0,T,^m be arbitrary. According to Remark <ref>, if the linearized system along (x_u̅(·),u̅(·)) is controllable in time T, then the end-point mapping of the linearized system is surjective, meaning that the linear continuous mapping dE_x_0,T(u̅):L^∞([0,T],^m)→^n is surjective. It follows from an implicit function argument (surjective mapping theorem) that the end-point mapping itself, E_x_0,T:𝒰_x_0,T,^m→^n, is locally surjective and locally open at u̅.

The above argument works because we have considered controls taking their values in the whole ^m. The argument still works whenever one considers a set Ω of control constraints, provided that we have room to consider local variations of u̅: this is true as soon as u̅ is in the interior of L^∞([0,T],Ω) for the topology of L^∞([0,T],^m) (note that this condition is stronger than requiring that u̅ takes its values in the interior of Ω).

We have thus obtained the following result (see Definition <ref>).

Let x_0∈^n and let u̅∈𝒰_x_0,T,Ω. We denote by x̅(·) = x_u̅(·) the trajectory solution of (<ref>), corresponding to the control u̅(·), such that x̅(0)=x_0, and we set x̅_1=x̅(T)=E_x_0,T(u̅). We assume that the function u̅(·) is in the interior of L^∞([0,T],Ω) for the topology of L^∞([0,T],^m). If the linearized system along (x̅(·),u̅(·)) is controllable in time T, then the nonlinear control system (<ref>) is locally controllable from x_0 in time T around x̅_1.
More precisely, there exist an open neighborhood V of x̅_1 in ^n and an open neighborhood U of u̅(·) in L^∞([0,T],^m), satisfying U⊂𝒰_x_0,T,Ω⊂ L^∞([0,T],Ω), such that, for every x_1∈ V, there exists a control u∈ U such that x_u(0)=x_0 and x_1=x_u(T)=E_x_0,T(u).

The controllability in time T of the linearized system δẋ(t)=A(t)δ x(t)+B(t)δ u(t) along (x̅(·),u̅(·)) can be characterized thanks to Theorems <ref> and <ref>. We thus have explicit sufficient conditions for local controllability. Note that the conditions are not necessary (consider, for instance, in dimension one, ẋ(t)=u(t)^3 along u̅=0).

Consider the Reeds-Shepp control system

ẋ(t) = v(t)cosθ(t), ẏ(t) = v(t)sinθ(t), θ̇(t) = u(t),

where the controls u and v are subject to the constraints | u|≤ 1 and | v|≤ 1. We call segment any connected piece of trajectory along which u=Cst=0 and v=Cst=v̅ with v̅≠ 0. We call arc of a circle any connected piece of trajectory along which u=Cst=u̅ and v=Cst=v̅ with u̅≠ 0 and v̅≠ 0.
* Prove that the control system is locally controllable along any segment such that |v̅|<1 (in time equal to that of the segment).
* Prove that the control system is locally controllable along any arc of a circle such that |u̅|<1 and |v̅|<1 (in time equal to that of the arc of a circle).
* Deduce that the system is globally controllable: for all (x_0,y_0,θ_0)∈^3 and (x_1,y_1,θ_1)∈^3, there exist T>0 and controls u and v satisfying the constraints and generating a trajectory steering the system from (x_0,y_0,θ_0) to (x_1,y_1,θ_1) in time T. Describe such a strategy in a very simple way.

Let us next provide two important applications of Theorem <ref> (which are particular cases): local controllability around an equilibrium point, and the return method.

Local controllability around an equilibrium point. Assume that the general control system (<ref>) is autonomous, i.e., that f does not depend on t. Assume that (x̅,u̅)∈^n×^m is an equilibrium point of f, i.e., f(x̅,u̅)=0. In that case, the constant trajectory defined by x̅(t)=x̅ and u̅(t)=u̅ is a solution of (<ref>). The linearized system along this (constant) trajectory is given by

δẋ(t)=Aδ x(t)+Bδ u(t)

with A=∂ f/∂ x(x̅,u̅) and B=∂ f/∂ u(x̅,u̅). It follows from Theorem <ref> that, if this linear autonomous control system is controllable (in time T), then the nonlinear control system is locally controllable in time T around the point x̅, i.e., x̅ can be steered in time T to any point in some neighborhood. By reversing time (which is possible because we are here in finite dimension), the converse transfer can be realized as well. We have thus obtained the following result.

With the above notations, assume that rank K(A,B)=n and that u̅∈Ω̊ (the interior of Ω). Then, for every T>0, the control system ẋ(t)=f(x(t),u(t)) is locally controllable in time T around the point x̅ in the following sense: for every T>0 there exist an open neighborhood V of x̅ in ^n and an open neighborhood W of u̅ in ^m, satisfying W⊂Ω and L^∞([0,T],W)⊂𝒰_x_0,T,Ω⊂ L^∞([0,T],Ω), such that, for all x_0,x_1∈ V, there exists a control u∈ L^∞([0,T],W) such that x_u(0)=x_0 and x_u(T)=x_1.

Consider the control system

ẋ(t) = x(t)(1-x(t))(x(t)-θ(t)), θ̇(t) = u(t),

where the control u is subject to the constraint | u|≤ 1. The state x(t) represents the density of the population of an ecological system and the state θ(t), of which we control the derivative, is called the Allee parameter of the system. Let θ̅∈(0,1) and T>0. Prove that the control system is locally controllable in time T around the equilibrium point (x,θ,u)=(θ̅,θ̅,0).
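As a complement, the corollary above lends itself to a quick numerical sanity check (not a proof, and not a substitute for the exercise). The minimal sketch below, in Python, linearizes the population model at the equilibrium by finite differences and tests the Kalman rank condition; the finite-difference step and the value of θ̅ are illustrative choices.

```python
# Numerical sanity check of the corollary on the Allee-parameter model:
# linearize at the equilibrium (x, theta) = (theta_bar, theta_bar), u = 0,
# and test the Kalman rank condition of the linearized system.
import numpy as np

def f(state, u):
    x, theta = state
    return np.array([x * (1.0 - x) * (x - theta), u])

def jacobians(f, x_eq, u_eq, eps=1e-6):
    """Finite-difference A = df/dx and B = df/du at an equilibrium."""
    n = len(x_eq)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_eq + dx, u_eq) - f(x_eq - dx, u_eq)) / (2 * eps)
    B = ((f(x_eq, u_eq + eps) - f(x_eq, u_eq - eps)) / (2 * eps)).reshape(n, 1)
    return A, B

theta_bar = 0.3                      # illustrative value in (0, 1)
x_eq = np.array([theta_bar, theta_bar])
A, B = jacobians(f, x_eq, 0.0)
K = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(K))      # 2: the linearized system is controllable
```

Consistently with the exercise, the rank is full precisely because θ̅(1-θ̅)≠ 0 for θ̅∈(0,1).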
Let us consider the famous example of the inverted pendulum (see Figure <ref>), of mass m, attached to a carriage of mass M whose acceleration u(t) is controlled, with the constraint | u(t)|≤ 1. The control system is

ξ̈ = ( mlθ̇^2 sinθ - mg cosθ sinθ + u ) / ( M+m sin^2θ ),
θ̈ = ( -mlθ̇^2 sinθ cosθ + (M+m)g sinθ - u cosθ ) / ( l(M+m sin^2θ) ),

i.e., this is a system in dimension 4, of state x=(ξ,ξ̇,θ,θ̇)^⊤ (see <cit.> for the derivation of those equations). The linearized control system at any equilibrium point (ξ̅,0,0,0)^⊤ is given by the pair of matrices

A=[ 0 1 0 0; 0 0 -mg/M 0; 0 0 0 1; 0 0 (M+m)g/lM 0 ] and B=[ 0; 1/M; 0; -1/lM ].

The Kalman condition is easily verified. Corollary <ref> implies that, for any T>0, this control system is locally controllable in time T around the equilibrium.

Consider the control system

ẋ_1(t) = x_2(t) + u_1(t), ẋ_2(t) = x_1(t)x_3(t) + u_2(t), ẋ_3(t) = -x_1(t)x_2(t),

where the state is x(t)=(x_1(t),x_2(t),x_3(t))^⊤∈^3 and the control is u(t)=(u_1(t),u_2(t))∈^2. This system stands for the real-valued Maxwell-Bloch equations, which model the interaction between light and matter and describe the dynamics of a real-valued two-state quantum system interacting with the electromagnetic mode of an optical resonator.

(1) First, we see that all equilibrium points (x̅,u̅) of the system are given by the parametrized families ℱ_1 = {x̅=(0,b,c), u̅=(-b,0) | b,c∈ℝ} and ℱ_2 = {x̅=(a,0,c), u̅=(0,-ac) | a,c∈ℝ}.

(2) Computing the linearized system at a point of ℱ_1 (resp., of ℱ_2), we check that the Kalman condition holds if and only if b≠ 0 (resp., a≠ 0); this is thus a necessary and sufficient condition for the controllability of this linearized system.

(3) It follows from Corollary <ref> that the Maxwell-Bloch system is locally controllable around any equilibrium point that is not of the form x̅=(0,0,c), u̅=(0,0).

Many applications of Corollary <ref> can be found in the literature. It can be noted that the proof, based on the implicit function theorem, is robust and withstands some generalizations in infinite dimension (possibly replacing the implicit function theorem with more appropriate results, such as the Kakutani fixed point theorem). The interested reader will easily find many examples and applications. We do not have room here to list or provide some of them.

The return method. In Corollary <ref>, the sufficient condition is that the linearized system at the equilibrium point be controllable. Assume now that we are in a situation where the linearized system at the equilibrium point is not controllable, and that we would nevertheless like to prove, using alternative conditions, that the nonlinear control system is locally controllable. The idea of the so-called return method[The return method was invented by J.-M. Coron, first with the aim of stabilizing control systems with smooth instationary feedbacks. It was then applied to many problems of control of PDEs in order to establish controllability properties. We refer the reader to <cit.>.] is to assume that there exists a nontrivial loop trajectory of the control system, going from x_0 to x_0 in time T, along which the linearized control system is controllable. Then, Theorem <ref> implies that the control system is locally controllable around x_0. Note that the method is not restricted to equilibrium points. We have the following corollary.

Let x_0∈^n. Assume that there exists a trajectory x̅(·) of the control system (<ref>), corresponding to a control u̅(·) on [0,T], such that x̅(0)=x̅(T)=x_0. Assume that the function u̅(·) is in the interior of L^∞([0,T],Ω) for the topology of L^∞([0,T],^m).
If the linearized system along (x̅(·),u̅(·)) is controllable in time T, then the nonlinear control system (<ref>) is locally controllable in time T around the point x_0: there exist an open neighborhood V of x_0 in ^n and an open neighborhood U of u̅(·) in L^∞([0,T],^m), satisfying U⊂𝒰_x_0,T,Ω⊂ L^∞([0,T],Ω), such that, for every x_1∈ V, there exists a control u∈ U such that x_u(0)=x_0 and x_1=x_u(T)=E_x_0,T(u).

Consider the Dubins car model

ẋ_1(t) = cosθ(t), ẋ_2(t) = sinθ(t), θ̇(t) = u(t),

with initial point x_1(0)=x_2(0)=0 and θ(0)=0 [2π] (i.e., modulo 2π). The control is subject to the constraint | u|≤ M for some fixed M>0. Let us prove that this control system is locally controllable at (0,0,0 [2π]) in any time T>2π/M. Consider the reference trajectory given by

x̅_1(t) = (T/2π) sin(2π t/T), x̅_2(t) = (T/2π)( 1-cos(2π t/T) ), θ̅(t)=2π t/T, u̅(t)=2π/T

(the inequality T>2π/M implies that |u̅|<M). The linearized system along this trajectory is represented by the matrices

A(t) = [ 0 0 -sin(2π t/T); 0 0 cos(2π t/T); 0 0 0 ], B(t) = [ 0; 0; 1 ]

and it is easy to check (using Theorem <ref>) that this system is controllable in any time T>0. The local controllability property in time T>2π/M follows. Actually, it can be proved that the Dubins system is not locally controllable at (0,0,0 [2π]) in time T∈[0,2π/M].

In the previous paragraphs, we have derived, thanks to simple implicit function arguments, local controllability results. The local feature of these results is not a surprise: from a general point of view, it is expected that showing the global surjectivity of the nonlinear mapping E_x_0,T is difficult. In view of this, we next address two further issues.

§.§ Geometry of the accessible set

In view of better understanding why it is hopeless, in general, to get global controllability, it is useful to have a clear geometric picture of the accessible set. We have the following result, similar to Theorem <ref>.

Let x_0∈^n and let T>0. We assume that:
* Ω is compact;
* there exists b>0 such that, for every admissible control u∈𝒰_x_0,T,Ω, one has ‖ x_u(t)‖≤ b for every t∈[0,T];
* there exists c>0 such that ‖ f(t,x,u)‖≤ c for every t∈[0,T], for every x∈^n such that ‖ x‖≤ b and for every u∈Ω;
* the set of velocities V(t,x)={f(t,x,u) | u∈Ω} is convex, for all (t,x).

Then the set Acc_Ω(x_0,t) is compact and varies continuously in time on [0,T] (for the Hausdorff topology).

The second assumption (uniform boundedness of trajectories) is made to avoid blow-up of trajectories. It is satisfied for instance if the dynamics f is sublinear at infinity. The third assumption is made for technical reasons in the proof, because at the beginning we assumed that f is locally integrable, only, with respect to t. The assumption of convexity of V(t,x) is satisfied for instance for control-affine systems (that is, whenever f is affine in u) if Ω is moreover convex.

First of all, note that V(t,x) is compact, for all (t,x), because Ω is compact. Let us prove that Acc_Ω(x_0,t) is compact for every t∈[0,T]. It suffices to prove that every sequence (x_n)_n∈ℕ of points of Acc_Ω(x_0,t) has a converging subsequence. For every integer n, let u_n∈𝒰_x_0,t,Ω be a control steering the system from x_0 to x_n in time t and let x_n(·) be the corresponding trajectory. We have

x_n=x_n(t)=x_0+∫_0^t f(s,x_n(s),u_n(s)) ds.
Setting g_n(s)=f(s,x_n(s),u_n(s)) for s∈[0,t], using the assumptions, the sequence of functions (g_n(·))_n∈ℕ is bounded in L^∞([0,t],^n), and therefore, up to some subsequence, it converges to some function g(·) for the weak star topology of L^∞([0,t],^n) (see <cit.>). For every τ∈[0,t], we set x(τ)=x_0+∫_0^τ g(s)ds. Clearly, x(·) is absolutely continuous on [0,t], and lim_n→+∞ x_n(s)=x(s) for every s∈[0,t], that is, the sequence (x_n(·))_n∈ℕ converges pointwise to x(·). The objective is now to prove that x(·) is a trajectory associated with a control u taking its values in Ω, that is, to prove that g(s)=f(s,x(s),u(s)) for almost every s∈[0,t].

To this aim, for every integer n and almost every s∈[0,t], we set h_n(s)=f(s,x(s),u_n(s)), and we define the set

V = { h(·)∈ L^2([0,t],^n) | h(s)∈ V(s,x(s)) for a.e. s∈[0,t] }.

Note that h_n∈ V for every integer n. For all (t,x), the set V(t,x) is compact and convex, and, using the fact that from any sequence converging strongly in L^2 we can extract a subsequence converging almost everywhere, we infer that V is convex and closed in L^2([0,t],^n) for the strong topology. Therefore V is closed as well in L^2([0,t],^n) for the weak topology (see <cit.>). But, like (g_n)_n∈ℕ, the sequence (h_n)_n∈ℕ is bounded in L^2, hence up to some subsequence it converges to some function h for the weak topology, and h must belong to V since V is weakly closed.

Finally, let us prove that g=h almost everywhere. We have

∫_0^t φ(s)g_n(s)ds = ∫_0^t φ(s)h_n(s) ds + ∫_0^t φ(s)( g_n(s)-h_n(s)) ds

for every φ∈ L^2([0,t],ℝ). By assumption, f is globally Lipschitz in x on [0,T]×B̅(0,b)×Ω, and hence, by the mean value inequality, there exists C>0 such that ‖ g_n(s)-h_n(s)‖≤ C‖ x_n(s)-x(s)‖ for almost every s∈[0,t]. The sequence (x_n(·))_n∈ℕ converges pointwise to x(·), hence, using the dominated convergence theorem, we infer that ∫_0^t φ(s)( g_n(s)-h_n(s)) ds → 0 as n→+∞. Passing to the limit in (<ref>), we obtain ∫_0^t φ(s)g(s)ds = ∫_0^t φ(s)h(s) ds for every φ∈ L^2([0,t],ℝ) and therefore g=h almost everywhere on [0,t].

In particular, we have g∈ V, and hence for almost every s∈[0,t] there exists u(s)∈Ω such that g(s)=f(s,x(s),u(s)). Applying a measurable selection lemma in measure theory (note that g∈ L^∞([0,t],^n)), u(·) can be chosen to be measurable on [0,t] (see <cit.>).

In conclusion, the trajectory x(·) is associated on [0,t] with the control u taking its values in Ω, and x(t) is the limit of the points x_n. This shows the compactness of Acc_Ω(x_0,t).

It remains to establish the continuity of the accessible set with respect to time. As in Theorem <ref>, let t_1 and t_2 be two real numbers such that 0<t_1<t_2≤ T and let x_2∈Acc_Ω(x_0,t_2). By definition, there exists a control u taking its values in Ω, generating a trajectory x(·), such that x_2=x(t_2)=x_0+∫_0^t_2 f(t,x(t),u(t)) dt. The point x_1=x(t_1)=x_0+∫_0^t_1 f(t,x(t),u(t)) dt belongs to Acc_Ω(x_0,t_1), and using the assumptions on f, we have ‖ x_2-x_1‖≤ Cst | t_2-t_1|. We conclude easily.

§.§ Global controllability results

One can find global controllability results in the existing literature, established for particular classes of control systems. Let us provide here controllability results for the important class of control-affine systems. We say that a control system is control-affine whenever the dynamics f is affine in u; in other words, the control system is

ẋ(t) = f_0(x(t))+∑_i=1^m u_i(t) f_i(x(t))

where the mappings f_i:^n→^n, i=0,…,m, are smooth. The term f_0(x) is called a drift.
Here, there is a crucial insight coming from differential geometry. We consider the mappings f_i as vector fields on ^n. Such vector fields generate flows and integral curves, and at this point geometric considerations come into the picture. There are many existing results in the literature providing local or global controllability results under conditions on the Lie brackets of the vector fields.

We recall that the Lie bracket of two vector fields X and Y is defined either by [X,Y](x)=dY(x).X(x)-dX(x).Y(x), or, recalling that a vector field is a first-order derivation on C^∞(^n,ℝ) defined by (Xf)(x)=df(x).X(x) for every f∈ C^∞(^n,ℝ) (Lie derivative), by [X,Y]=XY-YX (it is straightforward to check that this is indeed a first-order derivation). We also mention that, denoting by exp(tX) and exp(tY) the flows generated by the vector fields X and Y, the flows commute, i.e., exp(t_1X)∘exp(t_2Y)=exp(t_2Y)∘exp(t_1X) for all times t_1 and t_2, if and only if [X,Y]=0. If the Lie bracket is nonzero then the flows do not commute, but we have the asymptotic expansion

exp(-tY)∘exp(-tX)∘exp(tY)∘exp(tX)(x) = x+t^2/2 [X,Y](x)+o(t^2)

as t→ 0. The left-hand side of that equality is the point obtained by starting at x, following the vector field X during a time t, then the vector field Y during a time t, then -X during a time t, and then -Y during a time t. What it says is that this loop is not closed! The lack of commutation is measured through the Lie bracket [X,Y]. For more details on Lie brackets, we refer the reader to any textbook of differential geometry. Without going further, we mention that the Campbell-Hausdorff formula gives a precise series expansion of Z, defined by exp(X)∘exp(Y)=exp(Z), in terms of iterated Lie brackets of X and Y. The first terms are Z=X+Y+1/2[X,Y]+⋯.

Finally, we recall that the Lie algebra generated by a set of vector fields is the set of all possible iterated Lie brackets of these vector fields.

For control-affine systems without drift, we have the following well-known Chow-Rashevski theorem (also called Hörmander condition, or Lie Algebra Rank Condition), whose early versions can be found in <cit.>.

Consider a control-affine system without drift in ^n. Assume that Ω=^m (no constraint on the control) and that the Lie algebra generated by the vector fields f_1,…,f_m is equal to ^n (at any point). Then the system is globally controllable, in any time T.

We sketch the proof in the case n=3 and m=2, assuming that rank(f_1,f_2,[f_1,f_2])=3 at any point. Let λ∈ℝ. We define the mapping

φ_λ(t_1,t_2,t_3) = exp(λ f_1)exp(t_3 f_2)exp(-λ f_1)exp(t_2 f_2)exp(t_1 f_1)(x_0).

We have φ_λ(0)=x_0. Let us prove that, for λ≠ 0 small enough, φ_λ is a local diffeomorphism at 0. From the Campbell-Hausdorff formula, we infer that φ_λ(t_1,t_2,t_3)=exp(t_1f_1+(t_2+t_3)f_2+λ t_3[f_1,f_2]+⋯), hence ∂φ_λ/∂ t_1(0)=f_1(x_0), ∂φ_λ/∂ t_2(0)=f_2(x_0) and ∂φ_λ/∂ t_3(0)=f_2(x_0)+λ[f_1,f_2](x_0)+o(λ). By assumption, it follows that dφ_λ(0) is an isomorphism for λ≠ 0 small enough, and therefore φ_λ is a local diffeomorphism at 0. We conclude by an easy connectedness argument.

Here we touch on geometric control theory. The theorem above is one of the many existing results that can be obtained with Lie bracket considerations. We refer the reader to the textbook <cit.> for many results of a geometric nature. In particular this reference contains some material in order to treat the case of control-affine systems with drift (see also <cit.>).
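As a complement, Lie brackets and the rank condition above are easy to compute symbolically. The following minimal sketch (in Python, using sympy) does so for the standard unicycle ẋ = u_1 cosθ, ẏ = u_1 sinθ, θ̇ = u_2 — an illustrative example, not one treated in the text.

```python
# Symbolic verification of the Chow-Rashevski rank condition for the unicycle
# (illustrative sketch with sympy).
import sympy as sp

x, y, theta = sp.symbols('x y theta')
q = sp.Matrix([x, y, theta])
f1 = sp.Matrix([sp.cos(theta), sp.sin(theta), 0])   # "drive" vector field
f2 = sp.Matrix([0, 0, 1])                           # "turn" vector field

def lie_bracket(X, Y, q):
    """[X, Y](q) = dY(q).X(q) - dX(q).Y(q)."""
    return Y.jacobian(q) * X - X.jacobian(q) * Y

f12 = lie_bracket(f1, f2, q)
span = sp.Matrix.hstack(f1, f2, f12)
print(sp.simplify(f12.T))   # Matrix([[sin(theta), -cos(theta), 0]])
print(span.rank())          # 3 at every point: the system is controllable
```

The bracket supplies exactly the "sideways" direction that neither driving nor turning produces alone, which is the geometric content of the parallel-parking manoeuvre.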
Note that, in the presence of a drift f_0, an easy sufficient condition ensuring global controllability is that the Lie algebra generated by the controlled vector fields f_1,…,f_m be equal to ^n (at any point); indeed, the rough idea is that, taking large controls, in some sense the drift term can be compensated, and then one can apply Theorem <ref>.

The Heisenberg system in ^3,

ẋ(t) = u_1(t), ẏ(t)=u_2(t), ż(t)=u_1(t)y(t)-u_2(t)x(t),

is represented by the two vector fields f_1 = ∂_x + y∂_z and f_2 = ∂_y - x∂_z. We have [f_1,f_2]=-2∂_z and thus the Lie algebra condition is satisfied. Therefore this system is controllable.

In practice, Lie brackets are often realized thanks to oscillating functions, like the sine function taken with a sufficiently large frequency (see <cit.>). One can find in the literature many results concerning the motion planning problem, which consists of designing simple enough controls realizing controllability. We refer to <cit.> for a survey on such techniques.

CHAPTER: OPTIMAL CONTROL

In Chapter <ref>, we have provided controllability properties for general classes of control systems. Considering the problem of reaching some final configuration for the control system (<ref>), from some initial configuration, with an admissible control, it happens that, in general, there exists an infinite number of controls doing the job (think of all the possibilities of realizing a parallel parking, for instance). Among this infinite number of controls, we would now like to select (at least) one achieving the desired transfer and, moreover, minimizing some cost criterion (for instance, one may want to realize the parallel parking in minimal time, or with minimal fuel consumption). This is then an optimal control problem.

The main objective of this chapter is to formulate the Pontryagin maximum principle, which is the cornerstone of optimal control theory. It provides first-order necessary conditions for optimality, which allow one to compute, or at least parametrize, the optimal trajectories.

Let us give the general framework that will be used throughout the chapter. Let n and m be two positive integers. We consider a control system in ^n

ẋ(t) = f(t,x(t),u(t))

where f:ℝ×^n×^m→^n is of class C^1, and the controls are measurable essentially bounded functions of time taking their values in some measurable subset Ω of ^m (set of control constraints).

Let f^0:ℝ×^n×^m→ℝ and g:ℝ×^n→ℝ be functions of class C^1. For every x_0∈^n, every t_f≥ 0 and every admissible control u∈𝒰_x_0,t_f,Ω (see Section <ref>), the cost of the trajectory x(·), solution of (<ref>), corresponding to the control u and such that x(0)=x_0, is defined by

C_x_0,t_f(u) = ∫_0^t_f f^0(t,x(t),u(t)) dt+g(t_f,x(t_f)).

Many variants of a cost can be given; anyway, the one above is already quite general and covers a very large class of problems. If needed, one could easily add some term penalizing the initial point. Note also that the term g(t_f,x(t_f)) could as well be written in integral form and thus be incorporated in the function f^0; however we prefer to keep this formulation, which we find convenient in many situations.

Let M_0 and M_1 be two measurable subsets of ^n.
We consider the optimal control problem, denoted in short (OCP) in what follows, of determining a trajectory x(·), defined on [0,t_f] (where the final time t_f can be fixed or not), corresponding to an admissible control u∈𝒰_x(0),t_f,Ω, solution of (<ref>), such that x(0)∈ M_0 and x(t_f)∈ M_1, and minimizing the cost (<ref>) over all possible trajectories steering the control system from M_0 to M_1 in time t_f.

This is a general nonlinear optimal control problem, but without any state constraints. We could restrict the set of trajectories by imposing some pointwise constraints on x(t) (a region of the state space may be forbidden). Such constraints are however not easily tractable in the Pontryagin maximum principle and make the analysis much more difficult (see Section <ref>).

§ EXISTENCE OF AN OPTIMAL CONTROL

Although it is not very useful, let us state a general result ensuring the existence of an optimal solution of (OCP). In the theorem below, note that we can assume that f^0 and g are only continuous. Here, there is no additional difficulty in adding some state constraints.

We consider (OCP) and we assume that:
* Ω is compact, and M_0 and M_1 are compact;
* M_1 is reachable from M_0, that is, there exists a trajectory (corresponding to an admissible control) steering the system from M_0 to M_1;
* there exist b>0 and c>0 such that ‖ f(t,x,u)‖+| f^0(t,x,u)|≤ c for every t∈[0,b], every x∈^n satisfying ‖ x‖≤ b, and every u∈Ω; moreover, for every trajectory x(·) defined on [0,t_f] and steering the system from M_0 to M_1, one has t_f≤ b and ‖ x(t)‖≤ b for every t∈[0,t_f] (no blow-up);
* the epigraph of extended velocities

Ṽ(t,x)={ [ f(t,x,u); f^0(t,x,u)+γ ] | u∈Ω, γ≥ 0 }

is convex, for all (t,x).

We assume moreover in (OCP) that the trajectories are subject to the state constraints c_i(t,x(t))≤ 0, where the c_i, i∈{1,…,r}, are continuous functions defined on ℝ×^n. Then (OCP) has at least one solution.

If the final time is fixed in (OCP), then we assume that M_1 is reachable from M_0 exactly in time t_f. Note that it is easy to generalize this result to more general situations; for instance, the sets M_0, M_1 and Ω could depend on t (see <cit.>, and more generally see <cit.> for many variants of existence results).

Such existence results are however often difficult to apply in practice because of the strong assumption (<ref>) (not satisfied in general as soon as f is "too much" nonlinear). In practice, we often apply the Pontryagin maximum principle (which we will see next) without being sure a priori that there exists an optimal solution. If we can solve the resulting necessary conditions, then this often gives a way of justifying that indeed, a posteriori, there exists an optimal solution.

The proof is similar to the one of Theorem <ref>. Let δ be the infimum of costs C(u) over the set of admissible controls u∈ L^∞(0,t(u);Ω) generating trajectories such that x(0)∈ M_0, x(t(u))∈ M_1 and satisfying the state constraints c_i(x(·))≤ 0, i=1,…,r. Let us consider a minimizing sequence of trajectories x_n(·) associated with controls u_n, that is, a sequence of trajectories satisfying all constraints and such that C(u_n)→δ as n→+∞. For every integer n, we set

F̃_n(t)=[ f(t,x_n(t),u_n(t)); f^0(t,x_n(t),u_n(t)) ] = [ F_n(t); F^0_n(t) ]

for almost every t∈[0,t(u_n)]. From the assumptions, the sequence of functions (F̃_n(·))_n∈ℕ (extended by 0 on (t(u_n),b]) is bounded in L^∞(0,b;^n+1), and hence, up to some subsequence, it converges to some function F̃(·)=( F(·), F^0(·))^⊤ for the weak star topology of L^∞(0,b;^n+1) (see <cit.>).
Also, up to some subsequence, the sequence (t(u_n))_n∈ℕ converges to some T≥ 0, and we have F̃(t)=0 for t∈(T,b]. Finally, by compactness of M_0, up to some subsequence, the sequence (x_n(0))_n∈ℕ converges to some x_0∈ M_0. For every t∈[0,T], we set x(t)=x_0+∫_0^t F(s) ds; then x(·) is absolutely continuous on [0,T]. Moreover, for every t∈[0,T], we have lim_n→+∞ x_n(t)=x(t), that is, the sequence (x_n(·))_n∈ℕ converges pointwise to x(·). As in the proof of Theorem <ref>, the objective is then to prove that the trajectory x(·) is associated with a control u taking its values in Ω, and that this control is moreover optimal.

We set h̃_n(t)=( f(t,x(t),u_n(t)), f^0(t,x(t),u_n(t)) )^⊤ for every integer n and for almost every t∈[0,t(u_n)]. If T>t(u_n), then we extend h̃_n on [0,T] by h̃_n(t)=( f(t,x(t),v), f^0(t,x(t),v) )^⊤ for some arbitrary v∈Ω. Besides, we define (note that Ω is compact)

β = max{ | f^0(t,x,u)| | 0≤ t≤ b, ‖ x‖≤ b, u∈Ω }.

For every (t,x)∈^1+n, we then slightly modify the definition of Ṽ(t,x) to make it compact (keeping it convex), by setting

Ṽ_β(t,x)={ [ f(t,x,u); f^0(t,x,u)+γ ] | u∈Ω, γ≥ 0, | f^0(t,x,u)+γ|≤β }.

We define

Ṽ = { h̃(·)∈ L^2([0,T],^n+1) | h̃(t)∈Ṽ_β(t,x(t)) for a.e. t∈[0,T] }.

By construction, we have h̃_n∈Ṽ for every integer n. At this step, we need a lemma:

The set Ṽ is convex and strongly closed in L^2([0,T],^n+1).

Let us prove that Ṽ is convex. Let r̃_1,r̃_2∈Ṽ, and let λ∈[0,1]. By definition, for almost every t∈[0,T], we have r̃_1(t)∈Ṽ_β(t,x(t)) and r̃_2(t)∈Ṽ_β(t,x(t)). Since Ṽ_β(t,x(t)) is convex, it follows that λr̃_1(t)+(1-λ)r̃_2(t)∈Ṽ_β(t,x(t)). Hence λr̃_1+(1-λ)r̃_2∈Ṽ.

Let us prove that Ṽ is strongly closed in L^2([0,T],^n+1). Let (r̃_n)_n∈ℕ be a sequence of Ṽ converging to r̃ for the strong topology of L^2([0,T],^n+1). Let us prove that r̃∈Ṽ. Up to some subsequence, (r̃_n)_n∈ℕ converges almost everywhere to r̃; but, by definition, for almost every t∈[0,T] we have r̃_n(t)∈Ṽ_β(t,x(t)), and Ṽ_β(t,x(t)) is compact, hence r̃(t)∈Ṽ_β(t,x(t)) for almost every t∈[0,T]. Lemma <ref> is proved.

We now continue the proof of Theorem <ref>. By Lemma <ref>, the set Ṽ is convex and strongly closed, and thus also weakly closed in L^2([0,T],^n+1) (see <cit.>). The sequence (h̃_n)_n∈ℕ being bounded in L^2([0,T],^n+1), up to some subsequence it converges weakly to some h̃∈Ṽ, since this set is weakly closed.

Let us prove that F̃=h̃ almost everywhere. We have

∫_0^T φ(t)F̃_n(t)dt = ∫_0^T φ(t)h̃_n(t)dt + ∫_0^T φ(t)(F̃_n(t)-h̃_n(t)) dt

for every φ∈ L^2(0,T). By assumption, the mappings f and f^0 are globally Lipschitz in x on [0,T]×B̅(0,b)×Ω, hence there exists C>0 such that ‖F̃_n(t)-h̃_n(t)‖≤ C‖ x_n(t)-x(t)‖ for almost every t∈[0,T]. Since the sequence (x_n(·))_n∈ℕ converges pointwise to x(·), by the dominated convergence theorem we infer that ∫_0^T φ(t)(F̃_n(t)-h̃_n(t)) dt → 0 as n→+∞. Passing to the limit in (<ref>), it follows that ∫_0^T φ(t)F̃(t)dt = ∫_0^T φ(t)h̃(t)dt for every φ∈ L^2(0,T), and therefore F̃=h̃ almost everywhere on [0,T].

In particular, F̃∈Ṽ, and hence for almost every t∈[0,T] there exist u(t)∈Ω and γ(t)≥ 0 such that

F̃(t)=( f(t,x(t),u(t)), f^0(t,x(t),u(t))+γ(t) )^⊤.

Applying a measurable selection lemma (noting that F̃∈ L^∞([0,T],^n+1)), the functions u(·) and γ(·) can moreover be chosen to be measurable on [0,T] (see <cit.>).

It remains to prove that the control u is optimal for (OCP). First of all, since x_n(t(u_n))∈ M_1, by compactness of M_1 and using the convergence properties established above, we get that x(T)∈ M_1. Similarly, we get that c_i(x(·))≤ 0, i=1,…,r.
Besides, by definition, C(u_n) converges to δ and, using the convergence properties established above, C(u_n) converges as well to ∫_0^T ( f^0(t,x(t),u(t))+γ(t) ) dt+g(T,x(T)). Since γ takes nonnegative values, this implies that

∫_0^T f^0(t,x(t),u(t)) dt +g(T,x(T)) ≤ ∫_0^T ( f^0(t,x(t),u(t))+γ(t) ) dt +g(T,x(T)) = δ ≤ C(v),

for every admissible control v generating a trajectory steering the system from M_0 to M_1 and satisfying all constraints. Hence u is optimal and γ=0. Theorem <ref> is proved.

§ PONTRYAGIN MAXIMUM PRINCIPLE (PMP)

§.§ General statement

The Pontryagin maximum principle (in short, PMP) states first-order necessary conditions for optimality.

If (x(·),u(·)) is an optimal solution of (OCP) on [0,t_f], then there exist an absolutely continuous function p(·):[0,t_f]→^n, called the adjoint vector, and a real number p^0≤ 0, with (p(·),p^0)≠(0,0), such that[With rigorous notations, we should write (<ref>) in the form ẋ=∂ H/∂ p^⊤ and ṗ=-∂ H/∂ x^⊤, or equivalently, ẋ=∇_p H and ṗ=-∇_x H (as these are vectors in ^n). But we keep the writing (<ref>), used in classical mechanics for Hamiltonian systems. In coordinates, this means that ẋ_i=∂ H/∂ p_i and ṗ_i=-∂ H/∂ x_i for every i∈{1,…,n}.]

ẋ(t)=∂ H/∂ p(t,x(t),p(t),p^0,u(t)), ṗ(t)=-∂ H/∂ x(t,x(t),p(t),p^0,u(t)),

for almost every t∈[0,t_f], where the function H:ℝ×^n×^n×ℝ×^m→ℝ, called the Hamiltonian of (OCP), is defined by

H(t,x,p,p^0,u)=⟨ p,f(t,x,u)⟩+p^0 f^0(t,x,u)

and we have the maximization condition

H(t,x(t),p(t),p^0,u(t))=max_v∈Ω H(t,x(t),p(t),p^0,v)

for almost every t∈[0,t_f].

If the final time t_f is not fixed in (OCP), then we have moreover

max_v∈Ω H(t_f,x(t_f),p(t_f),p^0,v) = -p^0 ∂ g/∂ t(t_f,x(t_f)).

Moreover, the adjoint vector can be chosen such that we have the so-called transversality conditions (if they make sense)

p(0) ⊥ T_x(0)M_0, p(t_f)-p^0∇_x g(t_f,x(t_f)) ⊥ T_x(t_f)M_1,

where the notation T_xM stands for the usual tangent space to M at the point x (these conditions can be written as soon as the tangent space is well defined).

If (p(·),p^0) is a given adjoint vector satisfying the various conclusions stated in Theorem <ref>, then, for every λ>0, (λ p(·),λ p^0) is also an adjoint vector satisfying the statements. Note that we cannot take λ<0, since this would lead to a change of sign in the Hamiltonian and thus would impact the maximization condition (<ref>). Actually, the historical choice made by Pontryagin is to take p^0≤ 0 in the statement: this leads to the maximum principle (the choice p^0≥ 0 is valid as well but in that case leads to a minimum principle).

A quadruple (x(·),p(·),p^0,u(·)) satisfying (<ref>) and (<ref>) is called an extremal. The PMP says that every optimal trajectory x(·), associated with a control u(·), is the projection onto ^n of an extremal (x(·),p(·),p^0,u(·)).
* If p^0<0, the extremal is said to be normal. In that case, it is usual (but not mandatory) to normalize the adjoint vector so that p^0=-1.
* If p^0=0, the extremal is said to be abnormal.

The historical proof of the PMP stated in Theorem <ref> can be found in <cit.>. As in <cit.>, it is based on the use of needle-like variations combined with a Brouwer fixed point argument. It is interesting to note that there are other proofs, based on the Ekeland variational principle (see <cit.>) or on the Hahn-Banach theorem (see <cit.>). A concise sketch of proof, based on an implicit function argument (and using needle-like variations), can be found in <cit.>. As discussed in <cit.>, all these different approaches of proof have their specificities.
One approach or another may be preferred when trying to derive a PMP in a given context (for instance, the Ekeland approach is well adapted to derive versions of the PMP with state constraints, or in infinite dimension).

In Section <ref>, we give a proof of the PMP in the simplified context where Ω=^m (no control constraint), or at least under the assumption that the optimal control u belongs to the interior of L^∞([0,t_f],Ω). In this case, (<ref>) implies that

∂ H/∂ u(t,x(t),p(t),p^0,u(t))=0

almost everywhere on [0,t_f], and the corresponding statement is sometimes called the "weak PMP" (see <cit.>). By the way, it is interesting to note that, in this context, a control u that is the projection of an abnormal extremal (p^0=0) must satisfy p(t_f)^⊤ dE_x_0,t_f(u)=0 (see Section <ref>), i.e., it is a singularity of the end-point mapping; in other words, the linearized control system along (x_u(·),u(·)) is not controllable in time t_f.

In the general case where the optimal control u may saturate the constraints, the proof is more difficult and requires more technical developments, such as needle-like variations or a version of the implicit function theorem under constraints.

In the conditions of Theorem <ref>, we have moreover

d/dt max_v∈Ω H(t,x(t),p(t),p^0,v) = ∂ H/∂ t(t,x(t),p(t),p^0,u(t))

for almost every t∈[0,t_f] (this can be proved by the Danskin theorem, using the fact that u takes its values in a compact subset of Ω). In particular, if (OCP) is autonomous, that is, if f and f^0 do not depend on t, then H does not depend on t as well, and it follows from (<ref>) that

max_v∈Ω H(x(t),p(t),p^0,v)=Cst ∀ t∈[0,t_f].

Note that this equality is valid for every (not only for almost every) time t∈[0,t_f], because the function t↦max_v∈Ω H(x(t),p(t),p^0,v) is Lipschitz.

Note also that, in (<ref>), the maximum over Ω exists even when Ω is not compact. This is part of the result, and this is due to the fact that we have assumed that there exists an optimal solution.

If g does not depend on t, then (<ref>) says that, roughly, if t_f is free then the (maximized) Hamiltonian vanishes at t_f. Note that if (OCP) is moreover autonomous then this implies that H=0 along every extremal.

If M_1={x∈^n | F_1(x)=⋯=F_p(x)=0}, where the functions F_i are of class C^1 on ^n, then (<ref>) implies that

∃λ_1,…,λ_p∈ℝ such that p(t_f)=∑_i=1^p λ_i∇ F_i(x(t_f))+ p^0∇_x g(t_f,x(t_f)).

The minimal time problem corresponds to choosing either f^0=1 and g=0, or f^0=0 and g(t,x)=t. In both cases it can be checked that the implied transversality conditions coincide.

Note that if M_0 (or M_1) is the singleton {x_0}, which means that the initial point is fixed in (OCP), then the corresponding transversality condition is empty (since the tangent space is then reduced to the singleton {0}). At the other extreme, if for instance M_1=^n, which means that the final point is free in (OCP), then the corresponding transversality condition yields that p(t_f)=p^0∇_x g(t_f,x(t_f)) (since the tangent space is then equal to ^n). In particular, in that case p^0 cannot be equal to 0, for otherwise we would get p^0=0 and p(t_f)=0, which contradicts the fact that (p(·),p^0) is nontrivial.

§.§ Proof in a simplified context

In this section, we consider a simplified version of (OCP) and we derive a weaker version of the PMP, called the "weak PMP". The simplified framework is the following:
* M_0={x_0} and M_1={x_1}, where x_0 and x_1 are two given points of ^n.
In other words, we consider a "point to point" control problem.
* g=0 in the definition of the cost (<ref>).
* The final time t_f=T is fixed.[We usually denote the final time by T when it is fixed in (OCP), and by t_f when it is free.]

These first three simplifications are minor: it is easy to reduce a given optimal control problem to that case (see <cit.>). In contrast, the following one is major:
* Ω=^m, i.e., there are no control constraints; or, if there are some control constraints, we assume that the optimal control u is in the interior of L^∞([0,T],Ω) for the topology of L^∞([0,T],^m).

The latter assumption is the most important simplification. We will shortly comment further on the difficulties coming from control constraints.

First of all, we note that (OCP) is equivalent to the optimization problem

min_E_x_0,T(v)=x_1 C_x_0,T(v)

where E_x_0,T is the end-point mapping (see Definition <ref>) and C_x_0,T is the cost defined by (<ref>). In this form, this is a nonlinear optimization problem with n equality constraints, in the infinite-dimensional space of controls v∈𝒰_x_0,T,^m⊂ L^∞([0,T],^m).

Let u∈𝒰_x_0,T,^m be an optimal control (here, we assume its existence, without making any further assumption on the dynamics). In order to derive first-order necessary conditions for optimality, we apply the well-known Lagrange multipliers rule, which we recover as follows. Let us consider Figure <ref>, in which we draw the range of the mapping F defined by F(v) = ( E_x_0,T(v), C_x_0,T(v) ), with E_x_0,T(v)∈^n on the horizontal axis and C_x_0,T(v)∈ℝ on the vertical axis. The range of F is thus seen as a subset of ^n×ℝ, whose shape is not important. We are interested in controls steering the system from x_0 to x_1; on the figure this corresponds to a point that is in the range of F, projecting onto x_1. Now the optimal control u corresponds on Figure <ref> to the point F(u), which projects onto x_1 and is at the boundary of the range of F. In other words, the necessary condition for optimality is:

u optimal ⇒ F(u)∈∂ F(L^∞([0,T],Ω)).

Indeed, if F(u) were not at the boundary of F(L^∞([0,T],Ω)), then this would imply that one can find another control steering the system from x_0 to x_1 with a lower cost, which would contradict the optimality of u.

At this step, we use the important simplification Ω=^m. Since F(u)∈∂ F(L^∞([0,T],^m)), it follows from an implicit function argument (more precisely, the surjective mapping theorem) that

dF(u): L^∞([0,T],^m)→^n×ℝ is not surjective.

Indeed, otherwise, the surjective mapping theorem would imply that F is locally surjective: in other words, there would exist a neighborhood of F(u) in ^n×ℝ contained in F(L^∞([0,T],^m)), which would contradict the fact that F(u)∈∂ F(L^∞([0,T],^m)). Therefore, Ran(dF(u)) is a proper subspace of ^n×ℝ.

Note that, when there are some control constraints, the above argument works as well provided u belongs to the interior of L^∞([0,T],Ω) for the topology of L^∞([0,T],^m). The argument is however no longer valid whenever the control saturates the constraint, that is, whenever for instance the trajectory contains some sub-arc along which u(t)∈∂Ω. At least, to make it work, we would need to use an implicit function theorem allowing one to take into account some constraints. Here, actually, is the main technical difficulty that one has to deal with in order to derive the strong version of the PMP.
A usual proof consists of developing needle-like variations (see <cit.>); except for this (important) technical point, the structure of the proof remains the same, and in particular an implicit function argument can still be used (see the sketch of proof in <cit.>).

Now, since Ran(dF(u)) is a proper subspace of ^n×ℝ, there must exist ψ̃=(ψ,ψ^0)∈^n×ℝ∖{(0,0)} such that ψ̃⊥ Ran(dF(u)), i.e., ψ̃^⊤ dF(u)=0 (here, for convenience, dF(u) is identified with a matrix with n+1 rows). In other words, we have obtained the usual Lagrange multipliers relation

ψ^⊤ dE_x_0,T(u)+ψ^0 dC_x_0,T(u)=0.

Let us now exploit (<ref>) (or, more exactly, the equation ψ̃^⊤ dF(u)=0). We define a new coordinate x^0 and consider the differential equation ẋ^0(t)=f^0(t,x(t),u(t)), with the initial condition x^0(0)=0. Therefore we have x^0(T)=C_x_0,T(u). We define the augmented state x̃=(x,x^0)∈^n+1 and the augmented dynamics f̃(t,x̃,v) = [ f(t,x,v); f^0(t,x,v) ]. We consider the augmented control system in ^n+1

ẋ̃̇(t)=f̃(t,x̃(t),v(t)).

Note that our problem is then equivalent to the optimal control problem of steering the system (<ref>) from x̃_0=(x_0,0) to x̃_1=(x_1,x^0(T)) by minimizing x^0(T). Since F(v) = (E_x_0,T(v),C_x_0,T(v)), it follows that F is the end-point mapping for the augmented control system (<ref>). Note that the range of F (drawn on Figure <ref>) is the accessible set Acc(x̃_0,T) of the augmented control system.

Using Proposition <ref>, the (Fréchet) differential dF(u):L^∞([0,T],^m)→^n×ℝ is given by

dF(u).δ u = ∫_0^T R̃(T,t)B̃(t)δ u(t) dt ∀δ u∈ L^∞([0,T],^m)

where the (augmented) state transition matrix R̃(·,·) is defined as the solution of the Cauchy problem ∂_tR̃(t,s)=Ã(t)R̃(t,s), R̃(s,s)=I_n+1, with

Ã(t)=∂f̃/∂x̃(t,x̃(t),u(t)), B̃(t)=∂f̃/∂ u(t,x̃(t),u(t)).

Since ψ̃^⊤ dF(u).δ u=0 for every δ u∈ L^∞([0,T],^m), it follows that

B̃(t)^⊤R̃(T,t)^⊤ψ̃= 0

for almost every t∈[0,T]. We set p̃(t)=R̃(T,t)^⊤ψ̃. By differentiating with respect to t the relation R̃(T,t)R̃(t,T)=I_n+1, it is easy to establish that d/dtR̃(T,t) = -R̃(T,t)Ã(t). We infer that p̃(·) is the unique solution of the Cauchy problem

ṗ̃̇(t) = - Ã(t)^⊤p̃(t), p̃(T)=ψ̃.

We are almost done. Let us now come back to the initial coordinates in ^n×ℝ. We set p̃(t)=[ p(t); p^0(t) ]. Since f̃ does not depend on the (slack) variable x^0, we have ∂f̃/∂ x^0=0 and therefore, using (<ref>),

[ ṗ(t); ṗ^0(t) ] = - [ ∂ f/∂ x(t,x(t),u(t))^⊤ ∂ f^0/∂ x(t,x(t),u(t))^⊤; 0 0 ][ p(t); p^0(t) ],

with p(T)=ψ and p^0(T)=ψ^0. In particular, we have ṗ^0(t)=0, and thus p^0=ψ^0 is a constant. Defining the Hamiltonian by (<ref>), the latter equations give (<ref>), and from (<ref>) we infer (<ref>). This is the “weak PMP".

In the above proof, we have constructed the adjoint vector so that (p(T),p^0)=(ψ,ψ^0) is a Lagrange multiplier. It is defined up to scaling (see Remark <ref> and the subsequent comments).

§.§ Generalizations and additional comments

The PMP admits many possible generalizations.

More general transversality conditions. In Theorem <ref>, we have given transversality conditions for "decoupled" terminal conditions x(0)∈ M_0 and x(t_f)∈ M_1. Assume that, instead, we have the coupled terminal conditions (x(0),x(t_f))∈ M, where M is a subset of ^n×^n.
In this case, using a simple "copy-paste" of the dynamics, it is then easy to prove (see <cit.>) that the transversality conditions become (if they make sense)

( -p(0), p(t_f)-p^0∇_x g(t_f,x(t_f)) ) ⊥ T_(x(0),x(t_f))M.

An important case is the one of periodic terminal conditions x(0)=x(t_f): then M={(x,x) | x∈^n}, and, if moreover g=0, then p(0)=p(t_f).

Another useful generalization is when M is a general closed subset of ^n×^n, but is not necessarily a manifold, at least locally around (x(0),x(t_f)). In this case, one can still write transversality conditions, by using notions of nonsmooth analysis (see <cit.>), and there holds

( p(0),-p(t_f)+p^0∇_x g(t_f,x(t_f)) ) ∈ N_M(x(0),x(t_f))

where N_M(x,y) is the limiting normal cone to M at (x,y). This generalized condition can be useful to provide sign conditions on the adjoint vector, whenever the subset M is not smooth.

Infinite time horizon. The statement of the PMP remains the same when t_f=+∞, under the assumption that the limit of the optimal trajectory x(t) exists when t→+∞; in particular, the result then asserts that the limit of p(t) exists (see <cit.>).

State constraints, hybrid optimal control problems. Among the most well-known and useful generalizations, one can think of the PMP for optimal control problems with state constraints (see <cit.>), for nonsmooth problems (see <cit.>), for hybrid problems (see <cit.>), or for problems settled on time scales (see <cit.>). There exist several possible proofs of the PMP (see <cit.>), based either on an implicit function argument (as we did here), or on a (Brouwer) fixed point argument (as in the classical book <cit.>), or on a Hahn-Banach separation argument (as in <cit.>), or on Ekeland's principle (see <cit.>). Each of them may or may not be adapted to a given generalization.

Let us note that, when dealing with state constraints, in full generality the adjoint vector becomes a measure. The generic situation that one has in mind is the case where this measure has only a finite number of atoms: in this favorable case the adjoint vector is piecewise absolutely continuous, with possible jumps when touching the state constraint. Unfortunately the structure of the measure might be much more complicated, but such a discussion is outside of the scope of the present manuscript. We refer the reader to <cit.>. Although it is then not exact, it can be noted that state constraints may be tackled with usual penalization considerations, so as to deal instead with an optimal control problem without state constraint. In some cases where getting the true optimal trajectory is not the main objective, this may be useful.

PMP in infinite dimension. We refer to Section <ref> in Part <ref> and to <cit.> for a generalization of the PMP in infinite dimension. To comment briefly on this extension, we notice that the argument to prove the weak PMP remains valid when replacing ^n with a Banach space X, with the exception of one notable difficulty: in the argument by contraposition, we have to ensure that Ran(dF(u)) is contained in a closed proper subspace of X×ℝ, i.e., that its codimension is ≥ 1. Here is the main difference with finite dimension. Indeed, the fact that Ran(dF(u)) is a proper subspace of X×ℝ is not enough to ensure a separation argument: it could happen that Ran(dF(u))⊊ X×ℝ is dense in X×ℝ. Such a situation corresponds to approximate controllability, as we will see in Part <ref>, and in this case the PMP fails to be true (see Example <ref> in Section <ref>).
In a few words, a classical sufficient assumption under which the PMP is still valid in infinite dimension is that there is only a finite number of scalar conditions on the final state (finite codimension condition on M_1). Apart from this additional assumption on M_1, the statement in infinite dimension is exactly the same as in Theorem <ref>.

Further comments: second-order conditions. Let us insist on the fact that the PMP is a first-order necessary condition for optimality.[This is an elaborate version of the first-order necessary condition ∇ f(x)=0 when minimizing a C^1 function over ^n!] As already stressed, the PMP states that every optimal trajectory x(·), associated with a control u(·), is the projection onto ^n of an extremal (x(·),p(·),p^0,u(·)). However, conversely, an extremal (i.e., a solution of the equations of the PMP) is not necessarily optimal. The study of the optimality status of extremals can be done with the theory of conjugate points. More precisely, as in classical optimization where extremal points are characterized by a first-order necessary condition (vanishing of some appropriate derivative), there exists in optimal control a theory of second-order conditions for optimality, which consists of investigating a quadratic form that is the intrinsic second-order derivative of the end-point mapping: if this quadratic form is positive definite then the extremal under consideration is locally optimal (for some appropriate topology); if it is indefinite then the extremal is not optimal; conversely, if the extremal is optimal then this quadratic form is nonnegative. Times at which the index of this quadratic form changes are called conjugate times. The optimality status of an extremal is then characterized by its first conjugate time. We refer to <cit.> (see references therein) for a survey on theory and algorithms to compute conjugate times. Much could be written on conjugate time theory (which has nice extensions in the bang-bang case), but this is beyond the scope of the present book.

Numerical computation. The application of the PMP leads to a shooting problem, that is, a boundary value problem consisting of computing extremals satisfying certain terminal conditions. It can be solved numerically by implementing a Newton method combined with an ODE integrator: this approach is called the shooting method. We do not have enough room here to describe numerical methods in optimal control. We refer to <cit.> for a thorough description of so-called direct methods, and to <cit.> for a survey on indirect methods and the way to implement them in practice (see also <cit.> and references cited therein).
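To fix ideas, here is a minimal numerical sketch of the shooting method, in Python, on a hypothetical test problem (the problem data, the use of scipy, and the tolerances are illustrative assumptions, not taken from the text):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Assumed test problem: x1' = x2, x2' = u, minimize (1/2) * int_0^T u^2 dt,
# from x(0) = (0, 0) to x(T) = (1, 0). In the normal case (p0 = -1) the PMP
# gives u = p2, with adjoint equations p1' = 0, p2' = -p1.
T = 1.0
x_target = np.array([1.0, 0.0])

def extremal(t, z):
    x1, x2, p1, p2 = z
    return [x2, p2, 0.0, -p1]        # state and adjoint dynamics, u = p2

def shoot(p0):
    z0 = np.array([0.0, 0.0, p0[0], p0[1]])
    zT = solve_ivp(extremal, [0.0, T], z0, rtol=1e-10, atol=1e-12).y[:, -1]
    return zT[:2] - x_target         # mismatch on the final state

p0 = fsolve(shoot, np.zeros(2))      # Newton-type iterations on p(0)
print(p0, shoot(p0))                 # p(0) = (12, 6), residual ~ 0

The unknown of the shooting problem is the finite-dimensional vector p(0), exactly as described above: the iterations of fsolve play the role of the Newton method, and solve_ivp that of the ODE integrator.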
§ PARTICULAR CASES AND EXAMPLES

In this section, we first specify the PMP for two important particular classes of examples: the minimal time problem for linear control systems, which yields the bang-bang principle; and linear control systems with a quadratic cost, leading to the well-known “Linear Quadratic" (LQ) theory. Finally, we provide several examples of application of the PMP to nonlinear optimal control problems.

§.§ Minimal time problem for linear control systems

Let us assume that the control system is linear, of the form

ẋ(t)=A(t)x(t)+B(t)u(t)+r(t)

with the notations and regularity assumptions made in the introduction of Chapter <ref>. Let x_0∈^n be an arbitrary initial point, and let Ω be a compact subset of ^m. For any target point x_1∈^n, we investigate the problem of steering the system from x_0 to x_1 in minimal time, under the control constraint u(t)∈Ω. It can be noticed that, if x_1 is accessible from x_0, then there exists a minimal time trajectory steering the system from x_0 to x_1, in a minimal time denoted by t_f. Indeed, by Theorem <ref>, Acc_Ω(x_0,t) is a compact convex set depending continuously on t, thus

t_f = min{ t≥ 0 | x_1∈Acc_Ω(x_0,t) }.

With the notations introduced at the beginning of Chapter <ref>, we have f(t,x,u)=A(t)x+B(t)u+r(t), f^0(t,x,u)=1 and g=0 (note that we could as well take f^0=0 and g(t,x)=t). The Hamiltonian of the optimal control problem is then

H(t,x,p,p^0,u) = p^⊤ A(t)x+ p^⊤ B(t)u+ p^⊤ r(t) +p^0.

Let (x(·),u(·)) be an optimal trajectory on [0,t_f]. According to the PMP, there exist p^0≤ 0 and an absolutely continuous mapping p(·):[0,t_f]→^n, with (p(·),p^0)≠ (0,0), such that ṗ(t)=-A(t)^⊤ p(t) for almost every t∈[0,t_f], and the maximization condition yields

⟨ B(t)^⊤ p(t),u(t)⟩=max_v∈Ω⟨ B(t)^⊤ p(t),v ⟩

for almost every t∈[0,t_f]. Since the function v↦⟨ B(t)^⊤ p(t),v⟩=p(t)^⊤ B(t)v is linear, we expect the maximum over Ω to be reached at the boundary of Ω (unless B(t)^⊤ p(t)=0). This is the content of the bang-bang principle. Let us consider particular but important cases.

Case m=1 (scalar control). Let us assume that m=1, and that Ω=[-a,a] with a>0. This means that the control must satisfy the constraint | u(t)|≤ a. In that case, B(t) is a vector of ^n, and φ(t)=p(t)^⊤ B(t) is called the switching function. The maximization condition (<ref>) implies that

u(t)=a sign(φ(t))

as soon as φ(t)≠ 0. Here, we see that the structure of the optimal control u is governed by the switching function. We say that the control is bang-bang if the switching function φ does not vanish identically on any subset of positive measure of [0,t_f]. For instance, this is the case under the assumption: φ(t)=0⇒φ̇(t)≠ 0 (because then the zeros of φ are isolated). In that case, the zeros of the switching function are called the switchings of the optimal control. Following the sign of φ, we see that the optimal control switches between the two values ± a. This is the typical situation of a bang-bang control.

In contrast, if the switching function φ vanishes, for instance, along a time subinterval I of [0,t_f], then the maximization condition (<ref>) does not provide any immediate information in order to compute the optimal control u. But we can then differentiate with respect to time (if this is allowed) the relation p(t)^⊤ B(t)=0, and try to recover some useful information. This is a usual method in order to prove by contradiction, when it is possible, that optimal controls are bang-bang. An important example where this argument is successful is the following result, which we leave as an exercise (see <cit.>).

Let us assume that A(t)≡ A, B(t)≡ B, r(t)≡ 0 (autonomous control system), and that the pair (A,B) satisfies the Kalman condition. Then any extremal control is bang-bang, and

* has at most n-1 switchings on [0,+∞) if all eigenvalues of A are real;

* has an infinite number of switchings on [0,+∞) if all eigenvalues of A have a nonzero imaginary part. In this case, for every N∈ℕ^*, there exists x_0∈^n for which the corresponding minimal time control, steering x_0 to 0, has more than N switchings.
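The first item of this result can be observed numerically. Here is a minimal sketch (on an assumed double integrator example, not taken from the text) that integrates an extremal and counts the sign changes of the switching function:

import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: x1' = x2, x2' = u, |u| <= 1 (A nilpotent, so all
# eigenvalues are real and extremals have at most n - 1 = 1 switching).
# Adjoint: p1' = 0, p2' = -p1; switching function phi(t) = p2(t).
def extremal(t, z):
    x1, x2, p1, p2 = z
    u = np.sign(p2)                  # maximization condition: u = sign(phi)
    return [x2, u, 0.0, -p1]

z0 = [1.0, 1.0, 1.0, 0.5]            # some initial state and adjoint
sol = solve_ivp(extremal, [0.0, 4.0], z0, max_step=1e-3)
phi = sol.y[3]                       # here phi(t) = 0.5 - t
print(np.count_nonzero(phi[:-1] * phi[1:] < 0))   # prints 1: one switching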
Case m=2 (two scalar controls u_1 and u_2). Let us assume that m=2. In that case, B(t)=(B_1(t),B_2(t)), where B_1(t) and B_2(t) are vectors of ^n. Let us show how to derive explicit expressions of the extremal controls from the maximization condition of the PMP, for two important constraint sets very often considered in practice.

* Assume that Ω=[-1,1]×[-1,1], the unit square of ^2. This means that the controls u_1 and u_2 must satisfy the constraints | u_1(t)|≤ 1 and | u_2(t)|≤ 1. As for the case m=1, we set φ_i(t)=p(t)^⊤ B_i(t), for i=1,2, and the maximization condition (<ref>) implies that u_i(t)=sign(φ_i(t)) as soon as φ_i(t)≠ 0, for i=1,2.

* Assume that Ω=B̅(0,1), the closed unit ball of ^2. This means that the controls u_1 and u_2 must satisfy the constraint u_1(t)^2+u_2(t)^2≤ 1. Setting again φ_i(t)=p(t)^⊤ B_i(t), for i=1,2, the maximization condition (<ref>) can be written as

φ_1(t) u_1(t) + φ_2(t) u_2(t) = max_v_1^2+v_2^2≤ 1⟨[ φ_1(t); φ_2(t) ] , [ v_1; v_2 ]⟩

and it follows from the Cauchy-Schwarz inequality that

u_i(t) = φ_i(t)/√(φ_1(t)^2+φ_2(t)^2)

for i=1,2, as soon as φ_1(t)^2+φ_2(t)^2≠ 0.

In these two cases, the comments made previously remain in force in the degenerate case where the switching functions vanish identically on some subset of positive measure. We do not insist on such difficulties at this step. Note that what is done here with m=2 could be written as well for any value of m.

§.§ Linear quadratic theory

In this section we give an introduction to the well-known LQ (linear quadratic) theory, which has many concrete applications, such as Kalman filtering or regulation problems. We first study and solve the basic LQ problem, and then we provide an important application to the tracking problem. For other applications (among which the Kalman filter), see <cit.>.

§.§.§ The basic LQ problem

We consider the optimal control problem

ẋ(t)=A(t)x(t)+B(t)u(t), x(0)=x_0

min∫_0^T ( x(t)^⊤ W(t)x(t)+u(t)^⊤ U(t)u(t) ) dt + x(T)^⊤ Qx(T)

where x_0∈^n and T>0 are fixed (arbitrarily), W(t) and Q are symmetric nonnegative matrices of size n, and U(t) is a symmetric positive definite matrix of size m. The time dependence of the matrices above is assumed to be L^∞ on [0,T]. The controls are all possible functions of L^2([0,T],^m). We call this problem the basic LQ problem. Note that the final point is left free. The matrices W(t), U(t), and Q are called weight matrices. We assume that there exists α>0 such that

∫_0^T u(t)^⊤ U(t) u(t) dt ≥α∫_0^T u(t)^⊤ u(t) dt ∀ u∈ L^2([0,T],^m).

For instance, this assumption is satisfied if t↦ U(t) is continuous on [0,T]. In practice, the weight matrices are often constant.

There exists a unique optimal solution to (<ref>).

This theorem can be proved using classical functional analysis arguments, as in the proof of Theorem <ref>. The uniqueness comes from the strict convexity of the cost. For a proof, see <cit.>.

Let us apply the PMP to the basic LQ problem. The Hamiltonian is

H(t,x,p,p^0,u) = p^⊤ A(t)x+p^⊤ B(t)u+p^0(x^⊤ W(t)x+u^⊤ U(t)u)

and the adjoint equation is

ṗ(t) = -A(t)^⊤ p(t)-2p^0W(t)x(t).

Since the final point is free, the transversality condition on the final adjoint vector yields p(T)=2p^0Qx(T), and hence necessarily p^0≠ 0 (otherwise we would have (p(T),p^0)=(0,0), which is a contradiction with the PMP). According to Remark <ref>, we choose, here, to normalize the adjoint vector so that p^0=-1/2 (this will be convenient when differentiating the squares...).
Now, since there is no constraint on the control, we have

0 = ∂ H/∂ u(t,x(t),p(t),p^0,u(t)) = B(t)^⊤ p(t) - U(t)u(t)

and hence u(t)=U(t)^-1B(t)^⊤ p(t). Summing up, we have obtained that, for the optimal solution (x(·),u(·)) of the basic LQ problem,

ẋ(t)= A(t) x(t) + B(t) U(t)^-1B(t)^⊤ p(t), x(0)=x_0,
ṗ(t)= -A(t)^⊤ p(t) + W(t) x(t), p(T) = -Q x(T).

At this step, things are already nice, and we could implement a shooting method in order to solve the above problem. But the structure of the problem is so particular that we are actually able, here, to express p(t) as a linear function of x(t). This property is very remarkable but also very specific to this problem. We claim that we can seek p(t) in the form p(t) = E(t)x(t). Replacing in the above equations, we easily obtain a relation of the form R(t)x(t)=0, with

R(t) = Ė(t)-W(t)+A(t)^⊤ E(t)+E(t)A(t)+E(t)B(t)U(t)^-1B(t)^⊤ E(t)

and E(T)x(T)=-Qx(T). Therefore, “simplifying by x", we see that, if we impose R(t)=0 by definition, then we can go back in the reasoning and infer, by Cauchy uniqueness, that p(t) = E(t)x(t). We have obtained the following result.

The optimal solution x(·) of the basic LQ problem is associated with the control

u(t)=U(t)^-1 B(t)^⊤ E(t)x(t)

where E(t)∈ M_n(ℝ) is the unique solution on [0,T] of the Riccati matrix differential equation

Ė(t) =W(t)-A(t)^⊤ E(t)-E(t)A(t)- E(t)B(t)U(t)^-1B(t)^⊤ E(t), E(T) =-Q.

Actually, there is a difficulty in finishing the proof of this theorem: one has to prove that the unique solution E(·) of the Cauchy problem (<ref>), which is a priori defined in a neighborhood of T, is indeed well defined over the whole interval [0,T]. Indeed, such a Riccati differential equation may produce blow-up, and the well-posedness over the whole [0,T] is not obvious. We do not prove that fact here, and we refer the reader, e.g., to <cit.> for a proof (which uses the optimality property of u(·)). It can be noted that E(t) is symmetric (this is easy to see by Cauchy uniqueness).

The result above is interesting because, in that way, the problem is completely solved, without having to compute any adjoint vector, for instance by means of a shooting method. Moreover, the optimal control is expressed in a feedback form, u(t)=K(t)x(t), well adapted to robustness issues. It is because of this property that LQ procedures are so widely used in practical and industrial problems. We are next going to give an application to tracking.
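Before turning to tracking, here is a minimal numerical sketch of the above theorem (on an assumed double integrator with constant weight matrices; the data and the use of scipy are illustrative assumptions): integrate the Riccati equation backward from E(T)=-Q, then close the loop with u = U^-1 B^⊤ E x.

import numpy as np
from scipy.integrate import solve_ivp

# Assumed data: double integrator x1' = x2, x2' = u, horizon T = 5,
# constant weights W = I, U = 0.1 I, Q = I.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
n = 2
T = 5.0
W, U, Q = np.eye(n), 0.1 * np.eye(1), np.eye(n)
Uinv = np.linalg.inv(U)

def riccati(t, e):                   # E' = W - A^T E - E A - E B U^-1 B^T E
    E = e.reshape(n, n)
    dE = W - A.T @ E - E @ A - E @ B @ Uinv @ B.T @ E
    return dE.ravel()

# Backward integration from t = T, with E(T) = -Q
E_sol = solve_ivp(riccati, [T, 0.0], (-Q).ravel(), dense_output=True, rtol=1e-8)

def closed_loop(t, x):               # feedback u = U^-1 B^T E(t) x
    E = E_sol.sol(t).reshape(n, n)
    return A @ x + B @ (Uinv @ B.T @ E @ x)

x_sol = solve_ivp(closed_loop, [0.0, T], [1.0, 0.0], rtol=1e-8)
print(x_sol.y[:, -1])                # driven close to 0, with no shooting

Note that no shooting is needed: the Riccati equation is integrated once, offline, and the control is then available in feedback form.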
§.§.§ Tracking problem

Let us consider the general control system ẋ(t) = f(t,x(t),u(t)), with initial condition x(0)=x_0, with the regularity assumptions made at the beginning of Chapter <ref>. Let t↦ξ(t) be a trajectory in ^n, defined on [0,T], which is arbitrary (in particular, it is not necessarily a solution of the control system). We assume however that ξ(·) is Lipschitz (or, at least, absolutely continuous). The objective is to design a control u generating a trajectory x(·) that tracks the trajectory ξ(·) in the “best possible way" (see Figure <ref>). We proceed as follows. We set e(t)=x(t)-ξ(t), and we will try to design u so that e(·) remains as small as possible. Using a first-order expansion, we have

ė(t) = f(t,ξ(t)+e(t),u(t))-ξ̇(t) = A(t)e(t) + B(t)u(t) + r(t)

with

A(t) = ∂ f/∂ x(t,ξ(t),0), B(t) = ∂ f/∂ u(t,ξ(t),0), r(t) = f(t,ξ(t),0)-ξ̇(t) + o(e(t),u(t)).

It seems reasonable to seek a control u minimizing the cost

C(u) = ∫_0^T( z(t)^⊤ W(t) z(t) + u(t)^⊤ U(t) u(t)) dt + z(T)^⊤ Qz(T)

for the control system

ż(t) = A(t)z(t) + B(t)u(t) + r_1(t), z(0) = x_0-ξ(0),

with r_1(t)=f(t,ξ(t),0)-ξ̇(t), where W(t), U(t) and Q are weight matrices that are chosen by the user. We hope that the resulting control will be such that the term o(z(t),u(t)) is small. In any case, the choice above produces a control which hopefully tracks the trajectory ξ(t) as closely as possible. Note that, when linearizing the system, we have linearized at u=0, considering that u will be small. We could have linearized along a given u̅(t): we would then obtain one of the many possible variants of the method.

Let us now solve the above optimal control problem. In order to absorb the perturbation term r_1, we consider an augmented system, by adding one dimension. We set

z_1=[ z; 1 ], A_1=[ A r_1; 0 0 ], B_1=[ B; 0 ], Q_1=[ Q 0; 0 0 ], W_1=[ W 0; 0 0 ],

and hence we want to minimize the cost

C(u) = ∫_0^T( z_1(t)^⊤ W_1(t) z_1(t) + u(t)^⊤ U(t) u(t) ) dt + z_1(T)^⊤ Q_1z_1(T)

for the control system ż_1(t)=A_1(t)z_1(t)+B_1(t)u(t), with z_1(0) fixed. In this form, this is a basic LQ problem, as studied in the previous section. According to Theorem <ref>, there exists a unique optimal control, given by u(t)=U(t)^-1B_1(t)^⊤ E_1(t)z_1(t), where E_1(t) is the solution of the Cauchy problem Ė_1=W_1-A_1^⊤ E_1-E_1A_1-E_1B_1U^-1B_1^⊤ E_1, E_1(T)=-Q_1. Setting

E_1(t)=[ E(t) h(t); h(t)^⊤ α(t) ]

with E(t) a square matrix of size n, h(t)∈^n and α(t)∈ℝ, we obtain the following result.

The optimal (in the sense above) tracking control is

u(t)=U(t)^-1B(t)^⊤ E(t)(x(t)-ξ(t)) + U(t)^-1B(t)^⊤ h(t)

where

Ė =W-A^⊤ E-EA-EBU^-1B^⊤ E, E(T)=-Q,
ḣ =-A^⊤ h-E(f(t,ξ,0) -ξ̇)-EBU^-1B^⊤ h, h(T)=0.

It is interesting to note that the control is written in a feedback form u(t)=K(t)(x(t)-ξ(t))+H(t).

As said at the beginning of the section, there are many other applications of the LQ theory, and many possible variants. For instance, one can easily adapt the above tracking procedure to the problem of output tracking: in that case we track an observable. It is also very interesting to let the horizon of time T go to +∞. In that case, we can expect to obtain stabilization results. This is indeed the case, for instance, when one considers a linear autonomous control system (regulation over an infinite horizon); the procedure is referred to as LQR in practice and is very much used for stabilization issues.

In practice, we often make the choice of constant diagonal weight matrices W(t)=w_0I_n, U(t)=u_0I_m, and Q=q_0I_n, with w_0≥ 0, u_0>0 and q_0≥ 0. If w_0 is chosen much larger than u_0, then it is expected that ‖ x(t)-ξ(t)‖ will remain small (while paying the price of larger values of u(t)). Conversely, if u_0 is chosen much larger than w_0, then it is expected that u(t) will take small values, while the tracking error ‖ x(t)-ξ(t)‖ may take large values. Similarly, if q_0 is taken very large then it is expected (at least, under appropriate controllability assumptions) that x(T) will be close to ξ(T). A lot of such statements, with numerous possible variants, may be established.
We refer to <cit.> for (many) more precise results.

§.§ Examples of nonlinear optimal control problems

[Zermelo problem] Let us consider a boat moving with constant speed along a river of constant width ℓ, in which there is a current c(y) (assumed to be a function of class C^1). The movement of the center of mass of the boat is governed by the control system

ẋ(t) =vcos u(t)+c(y(t)), x(0)=0,
ẏ(t) =vsin u(t), y(0)=0,

where v>0 is the constant speed, and the control is the angle u(t) of the axis of the boat with respect to the axis (0x) (see Figure <ref>). We investigate three variants of optimal control problems with the objective of reaching the opposite side: the final condition is y(t_f)=ℓ, where the final time t_f is free.

1. Assuming that c(y)≥ v for every y∈[0,ℓ] (strong current), compute the optimal control minimizing the drift x(t_f).

2. Compute the minimal time control.

3. Compute the minimal time control for the problem of reaching a precise point M=(x_1,ℓ) of the opposite side.

1. Reaching the opposite side by minimizing the drift x(t_f). We choose f^0=0 and g(t,x,y)=x. The Hamiltonian is

H = p_x(vcos u+c(y))+p_yvsin u.

The adjoint equations are ṗ_x=0, ṗ_y=-p_xc'(y). In particular, p_x is constant. Since the target is M_1={y=ℓ}, the transversality condition on the adjoint vector yields p_x=p^0. The maximization condition of the Hamiltonian leads to

cos u(t) = p_x/√(p_x^2+p_y(t)^2), sin u(t) = p_y(t)/√(p_x^2+p_y(t)^2),

for almost every t, provided that the function φ(t)=p_x^2+p_y(t)^2 does not vanish on any subset of positive measure. This condition is proved by contradiction: if φ(t)≡ 0 on I, then p_x=0 and p_y(t)=0 on I, but then also p^0=p_x=0, and we get a contradiction (because the adjoint (p_x,p_y,p^0) must be nontrivial). Finally, since t_f is free and the problem is autonomous, we get that H=0 along any extremal, that is, H=v√(p_x^2+p_y^2)+p_xc(y)=0 along any extremal.

We must have p^0≠ 0. Indeed, otherwise, p^0=0 implies that p_x=0, and from H=0 we infer that p_y=0 as well. This is a contradiction. Hence, we can take p^0=-1, and therefore p_x=-1. From H=0, we have √(1+p_y^2)=c(y)/v, and hence cos u=-v/c(y). Since c(y)≥ v, this equation is solvable, and we get

u(t)=Arccos( -v/c(y(t))).

Note that we have thus determined the optimal control in feedback form, which is the best possible one (in practice, such a control can be made fully automatic, provided one can measure the position y at any time).

The assumption c(y)≥ v means that the current is strong enough. Without this assumption, the optimal control problem consisting of minimizing the drift x(t_f) would be ill-posed: there would not exist any optimal solution (at least, in finite time), because if, for some y, we have c(y)<v, then, along this y, the boat can go against the current towards x=-∞. We also realize that, if we had not made this assumption, then the above equation would not be solvable. This remark provides a way of showing, by contradiction, that the optimal control problem has no solution (recall that the PMP says that if a trajectory is optimal then it must satisfy the various conditions stated in the PMP).

The optimal trajectory, minimizing the drift, is represented on Figure <ref>. It is interesting to note that any other trajectory is necessarily to the right of that optimal trajectory. In particular, this gives the reachable set (in any time): the reachable set consists of all points with 0≤ y≤ℓ that lie to the right of the optimal trajectory.
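The minimal-drift feedback can be simulated directly. Here is a minimal sketch (the current profile c(y), the speed and the width are assumed values chosen so that c(y)≥ v, not data from the text):

import numpy as np
from scipy.integrate import solve_ivp

# Assumed data: v = 1, width ell = 1, strong current c(y) = 2 + y(ell - y).
v, ell = 1.0, 1.0
c = lambda y: 2.0 + y * (ell - y)     # satisfies c(y) >= v on [0, ell]

def boat(t, xy):
    x, y = xy
    u = np.arccos(-v / c(y))          # feedback control cos u = -v/c(y)
    return [v * np.cos(u) + c(y), v * np.sin(u)]

def opposite_side(t, xy):             # stop the integration when y = ell
    return xy[1] - ell
opposite_side.terminal = True

sol = solve_ivp(boat, [0.0, 10.0], [0.0, 0.0], events=opposite_side,
                max_step=1e-3)
print("drift x(t_f) =", sol.y[0, -1])  # any other control yields a larger drift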
2. Reaching the opposite side in minimal time. We choose f^0=1 and g=0. Now, the Hamiltonian is

H = p_x(vcos u+c(y))+p_yvsin u+p^0,

the adjoint equations are the same as previously, as well as the extremal controls, provided that φ(t)≠ 0. The transversality condition on the adjoint vector is different: we now obtain p_x=0. It follows that p_y is constant. Besides, we still have H=0 along any extremal. We claim that p_y≠ 0. Indeed, otherwise, H=0 implies that p^0=0, and then (p_x,p_y,p^0)=(0,0,0), which is a contradiction. Hence, we get that cos u(t)=0 and sin u(t)=sign(p_y), and thus u(t)=π/2 for every time t (the sign of u is given by the fact that, at least at the beginning, the boat must leave the initial riverbank with ẏ>0). Actually, the fact that the minimal time control is u=π/2 is obvious by inspecting the equation in y. Indeed, since t_f=∫_0^t_fdt=∫_0^ℓdy/ẏ and ẏ=vsin u≤ v, it easily follows that, in minimal time, we must have ẏ=v, that is, sin u=1. Note that we do not need to assume, here, that the current is strong enough. The calculations above are valid whatever the function c(y) may be.

3. Reaching a precise point of the opposite side in minimal time. This is a variant of the second problem, in which we lose the transversality condition on the adjoint vector, because the target point is fixed. The constant p_x is thus undetermined at this step. We still have H=0 along any extremal. By contradiction, it is still true that the function φ cannot vanish on any subset of positive measure (indeed, otherwise p_x=0 and p_y=0, and from H=0 we get that p^0=0: contradiction). This third variant is interesting because both the normal case p^0≠ 0 and the abnormal case p^0=0 may occur. Let us analyze them.

* Normal case: p^0=-1. In that case, using that H=v√(p_x^2+p_y^2)+p_xc(y)-1=0 along any extremal, we get

cos u(t) = p_xv/(1-p_xc(y(t))).

Note that, for this equation to be solvable, p_x must be such that | p_x v|≤| 1-p_xc(y(t))| for every time t. We have thus obtained all possible optimal controls, parametrized by p_x. The parameter p_x is the “shooting parameter", that is, the degree of freedom that is required in order to tune the final condition x(t_f)=x_1, i.e., in order to reach the target point M. All optimal trajectories are represented on Figure <ref> in the case of a strong current. To go further, we could specify a current function c(y), and either implement a shooting method, or try to make explicit computations if this is possible.

* Abnormal case: p^0=0. Using H=0, we get cos u=-v/c(y). In the case where the current is strong enough, we see that we recover exactly the solution of the first variant, that is, the optimal trajectory with minimal drift. Then, two cases may occur: either the target point M is different from the end-point of the minimal drift trajectory, and then this trajectory is not a solution of our problem and the abnormal case does not occur; or, by chance, the target point M exactly coincides with the end-point of the minimal drift trajectory, and then (under the assumption c(y)≥ v) the abnormal case indeed occurs, and the optimal trajectory coincides with the minimal drift trajectory. This example is interesting because it gives a very simple situation where we may have an abnormal minimizer, and it shows how to interpret it.

[Optimal control of damaging insects by predators.] In order to eradicate as much as possible a population x_0>0 of damaging pests, we introduce in the ecosystem a population y_0>0 of (nondamaging) predator insects killing the pests.

First part.
In a first part, we assume that the predator insects that we introduce are infertile, and thus cannot reproduce themselves. The control consists of the continuous introduction of predator insects. The model is

ẋ(t)= x(t) ( a-b y(t)), x(0)=x_0,
ẏ(t)= -cy(t) + u(t), y(0)=y_0,

where a>0 is the reproduction rate of pests, b>0 is the predation rate, and c>0 is the natural death rate of predators. The control u(t) is the rate of introduction of new predators at time t. It must satisfy the constraint 0≤ u(t)≤ M, where M>0 is fixed. Let T>0 be fixed. We want to minimize, at the final time T, the number of pests, while minimizing as well the number of introduced predators. We choose to minimize the cost

x(T)+∫_0^Tu(t)dt.

Throughout, we denote by p=(p_x,p_y) and by p^0 the adjoint variables.

First, we claim that x(t)>0 and y(t)>0 along [0,T], for every control u. Indeed, since u(t)≥ 0, we have ẏ(t)≥ -cy(t), hence y(t)≥ y_0e^-ct>0. For x(t), we argue by contradiction: if there exists t_1∈ [0,T] such that x(t_1)=0, then x(t)=0 for every time t by Cauchy uniqueness; this contradicts the fact that x(0)=x_0>0.

The Hamiltonian of the optimal control problem is

H=p_xx(a-by)+p_y(-cy+u)+p^0u

and the adjoint equations are ṗ_x=-p_x(a-by), ṗ_y=bp_xx+cp_y. The transversality conditions yield p_x(T)=p^0 and p_y(T)=0. It follows that p^0≠ 0 (otherwise we would have (p(T),p^0)=(0,0), which is a contradiction). In what follows, we set p^0=-1. We have

d/dt( x(t)p_x(t) )=x(t)p_x(t)(a-by(t))-x(t)p_x(t)(a-by(t))=0,

hence x(t)p_x(t)=Cst=-x(T), because p_x(T)=p^0=-1. It follows that ṗ_y=-bx(T)+cp_y, and since p_y(T)=0, we infer, by integration, that

p_y(t) = (b/c) x(T) ( 1-e^c(t-T) ).

The maximization condition of the PMP yields u(t)=0 if p_y(t)-1<0, and u(t)=M if p_y(t)-1>0, unless the function t↦ p_y(t)-1 vanishes identically on some subinterval. But this is not possible, because the function p_y is (strictly) decreasing: indeed ṗ_y(t)=-bx(T)e^c(t-T)<0, since x(T)>0. We conclude that the optimal control is bang-bang. Moreover, at the final time we have p_y(T)-1=-1, hence, by continuity, there exists ε>0 such that p_y(t)-1<0 along [T-ε,T], and hence u(t)=0 along a subinterval containing the final time. We can be more precise: we claim that, actually, the optimal control has at most one switching along [0,T] (and is 0 at the end). Indeed, since the function p_y is decreasing, the function t↦ p_y(t)-1, which is equal to -1 at t=T, vanishes at most once. If there is such a switching, necessarily it occurs at some time t_1∈[0,T] such that p_y(t_1)=1, which yields

t_1 = T+(1/c)ln( 1-c/(bx(T)) ).

Note that this switching can occur only if t_1>0 (besides, we have t_1<T), hence only if x(T)> (c/b)·1/(1-e^-cT). By integrating the equations backwards, we could even express an implicit condition on the initial conditions ensuring that this inequality is true, hence ensuring that there is a switching.

Second part. We now assume that the predators that we introduce are fertile, and reproduce themselves with a rate that is proportional to the number of pests. The control is now the death rate of predators. In order to simplify, we assume that the variables are normalized so that all other rates are equal to 1. The model is

ẋ(t)= x(t) (1-y(t)), x(0)=x_0,
ẏ(t)= -y(t)(u(t)-x(t)), y(0)=y_0,

where the control u(t) satisfies the constraint 0< α≤ u(t)≤β. First, as before, we have x(t)>0 and y(t)>0 for every time, for every control u. All equilibrium points of the system are given by x_e=u_e, y_e=1, for every α≤ u_e≤β.
In the quadrant, we have a whole segment of equilibrium points. Let us investigate the problem of steering the system in minimal time t_f to the equilibrium point x(t_f)=α, y(t_f)=1. The Hamiltonian is

H=p_xx(1-y)-p_yy(u-x)+p^0

and the adjoint equations are ṗ_x=-p_x(1-y)-p_yy, ṗ_y=p_xx+p_y(u-x). The transversality condition on the final time gives H(t_f)=0, and since the system is autonomous, it follows that the Hamiltonian is constant along any extremal, equal to 0. The maximization condition of the PMP is max_α≤ v≤β (-p_y(t)y(t)v), which gives, since y(t)>0,

u(t)=α if p_y(t)>0, u(t)=β if p_y(t)<0,

unless the function t↦ p_y(t) vanishes identically along a subinterval I. If this is the case, then p_y(t)=0 for every t∈ I. Differentiating with respect to time, we get that xp_x=0, and thus p_x=0 along I. Therefore, along I, we have H=p^0, and since H=0 we infer that p^0=0, which raises a contradiction. Therefore, we conclude that the optimal control is bang-bang.

Along an arc where u=α (resp., u=β), we compute d/dt F_α(x(t),y(t))=0, where

F_α(x,y)=x+y-α ln x-ln y

(resp., F_β); that is, F_α(x(t),y(t)) is constant along such a bang arc. It can be noted that, formally, this integral of the motion can be obtained by computing dy/dx=ẏ/ẋ=(-y/(1-y))·((α-x)/x) and by integrating this one-form with separated variables. Considering a second-order expansion of F_α at the point (α,1),

F_α(α+h,1+k)=α-α ln α+1+(1/2)(h^2/α+k^2)+o(h^2+k^2),

we see that F_α has a strict local minimum at the point (α,1). Moreover, the function F_α is (strictly) convex, because its Hessian [ α/x^2 0; 0 1/y^2 ] is symmetric positive definite at any point such that x>0, y>0. It follows that the minimum is global. For any controlled trajectory (with a control u), we have

d/dt F_α(x(t),y(t))=(u(t)-α)(1-y(t)).

Let us prove that there exists ε>0 such that u(t)=β for almost every t∈[t_f-ε,t_f] (in other words, the control u is equal to β at the end). Indeed, at the final time, we have either p_y(t_f)=0 or p_y(t_f)≠0.

If p_y(t_f)=0, then, using the differential equation in p_y, we have ṗ_y(t_f)=p_x(t_f)α. We must have p_x(t_f)≠ 0 (otherwise we would get a contradiction, noticing as previously that H(t_f)=p^0=0). Hence ṗ_y(t_f)≠ 0, and therefore the function p_y has a constant sign along some interval [t_f-ε,t_f[. Hence, along this interval, the control is constant, either equal to α or to β. It cannot be equal to α: otherwise, since the function F_α is constant along this arc, and since this arc must reach the point (α,1), this constant would be equal to the minimum of F_α, which would imply that the arc is constant, equal to the point (α,1); this is a contradiction, because we consider a minimal time trajectory reaching the point (α,1).

If p_y(t_f)≠ 0, then the function p_y has a constant sign along some interval [t_f-ε,t_f[, and hence, along this interval, the control is constant, equal either to α or to β. With a similar reasoning as above, we get u=β.

Let us now provide a control strategy (which can actually be proved to be optimal) in order to steer the system from any initial point (x_0,y_0) to (α,1). First, in a neighborhood of the point (α,1), the level sets of the function F_α look like circles. Farther from that point, the level sets look more and more like right triangles, asymptotic to the coordinate axes. Similarly for the level sets of F_β, with respect to the point (β,1) (see Figure <ref>).
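The conservation of F_α along a bang arc is easy to check numerically. Here is a minimal sketch (with assumed values α=1, β=3 and an assumed starting point):

import numpy as np
from scipy.integrate import solve_ivp

# Assumed data: alpha = 1, beta = 3; check that F_alpha is conserved
# along an arc with constant control u = alpha.
alpha, beta = 1.0, 3.0
F = lambda a, x, y: x + y - a * np.log(x) - np.log(y)

def dynamics(t, xy, u):              # x' = x(1 - y), y' = -y(u - x)
    x, y = xy
    return [x * (1.0 - y), -y * (u - x)]

sol = solve_ivp(dynamics, [0.0, 5.0], [2.0, 0.5], args=(alpha,), rtol=1e-10)
vals = F(alpha, sol.y[0], sol.y[1])
print(vals.max() - vals.min())       # ~ 0: the arc stays on a level set of F_alpha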
Let us start from a point (x_0,y_0) located on the segment joining the two points (α,1) and (β,1), that is, such that α<x_0<β and y_0=1. We start with the control u=α, and we remain along a level set of the function F_α (thus, “centered" at the point (α,1)). At some time, we switch to the control u=β, and we then remain along a level set of F_β (thus, “centered" at the point (β,1)) which passes through the target point (α,1). Now, if we start from any other point (x_0,y_0), we can determine on Figure <ref> a sequence of arcs along the level sets, respectively, of F_α and of F_β, that steers the system from the starting point to the target point.

[The Brachistochrone Problem.] The objective is to determine the optimal shape of a children's slide (between two given altitudes) so that, if a ball slides along it (with zero initial speed), it arrives at the other extremity in minimal time. This problem can be modeled as the following optimal control problem. In the Euclidean plane, the slide is modeled as a continuous curve, starting from the origin, and arriving at a given fixed point (x_1,y_1), with x_1>0. We consider a ball of mass m>0 sliding along the curve. We denote by (x(t),y(t)) its position at time t. The ball is subject to the gravity force mg⃗ and to the reaction force of the children's slide. At time t, we denote by u(t) the (oriented) angle between the unit horizontal vector and the velocity vector (ẋ(t),ẏ(t)) of the ball (which is collinear to the tangent to the curve). See Figure <ref>. Seeking the curve is equivalent to seeking the angle u(t). Therefore, we take u as the control. By projecting the equations given by the fundamental principle of dynamics onto the tangent to the curve, we obtain the following control system:

ẋ(t)= v(t)cos u(t), x(0)=0, x(t_f)=x_1,
ẏ(t)= -v(t)sin u(t), y(0)=0, y(t_f)=y_1,
v̇(t)= gsin u(t), v(0)=0, v(t_f) free.

The control is u(t)∈ℝ, and g>0 is a constant. We want to minimize the final time t_f. First of all, noticing that ẏ=-(1/g)vv̇, we get, by integration, that y(t)=-v(t)^2/(2g), for every control u. This implies that any final point such that y_1>0 is not reachable. Therefore, from now on, we will assume that y_1≤ 0. Because of the relation between y(t) and v(t), we can reduce the optimal control problem (<ref>) to the minimal time control problem for the following system:

ẋ(t)= v(t)cos u(t), x(0)=0, x(t_f)=x_1>0 fixed,
v̇(t)= gsin u(t), v(0)=0, v(t_f) fixed.

Note that, since y(t_f)=y_1 is fixed, it follows that v(t_f)=±√(-2gy_1).

Let us apply the Pontryagin maximum principle to the optimal control problem (<ref>). The Hamiltonian is H=p_xvcos u+p_vgsin u+p^0. The adjoint equations are ṗ_x=0 and ṗ_v=-p_xcos u. In particular, p_x is constant. Since the final time is free and the problem is autonomous, we have H=0 along any extremal. The maximization condition of the Hamiltonian yields

cos u(t) = p_x v(t)/√((p_xv(t))^2+(gp_v(t))^2), sin u(t) = g p_v(t)/√((p_xv(t))^2+(gp_v(t))^2),

provided that φ(t)=(p_xv(t))^2+(gp_v(t))^2≠ 0. We have d/dt(p_xv(t))=p_xgsin u(t) and d/dt(gp_v(t))=-p_xgcos u(t). As a consequence, if φ(t)=0 for every t in some subset I of positive measure, then p_x=0. Therefore φ(t)=(gp_v(t))^2, and thus p_v(t)=0 for every t∈ I. Since H=0, we infer that p^0=0. We have obtained (p_x,p_v(t),p^0)=(0,0,0), which is a contradiction. We conclude that the function φ never vanishes on any subset of [0,t_f] of positive measure. Therefore the above expression of the controls is valid almost everywhere. The maximized Hamiltonian is H=√((p_xv(t))^2+(gp_v(t))^2)+p^0.
Since H=0 along any extremal, we infer, by contradiction, that p^0≠ 0 (indeed, otherwise we would infer that φ≡ 0, which leads to a contradiction). Hence, from now on, we take p^0=-1. Since H=0 along any extremal, we get that (p_xv(t))^2+(gp_v(t))^2=1, and therefore cos u(t)=p_x v(t) and sin u(t)=gp_v(t). If p_x were equal to 0, then we would have cos u(t)=0 and thus ẋ(t)=0, and then we would never reach the point x_1>0. Therefore p_x≠ 0.

Let us now integrate the trajectories. We have v̇=g^2p_v and ṗ_v=-p_x^2v, and thus v̈+g^2p_x^2v=0; since v(0)=0, we get that v(t)=Asin(gp_xt). Since H=0 and v(0)=0, we have g^2p_v(0)^2=1, hence p_v(0)=±1/g, and thus v̇(0)=± g. We infer that A=±1/p_x, and hence that v(t)=±(1/p_x)sin(gp_xt). Now, ẋ=vcos u=p_xv^2 and y=-v^2/(2g), and by integration we get

x(t)= t/(2p_x)-sin(2gp_xt)/(4gp_x^2), y(t)= -sin^2(gp_xt)/(2gp_x^2) = -( 1-cos(2gp_xt) )/(4gp_x^2).

Note that, since ẋ=p_xv^2, we must have p_x>0. Representing in the plane the parametrized curves (x(t),y(t)) (with parameters p_x and t), we get cycloid curves.

Let us now prove that there is a unique optimal trajectory joining (x_1,y_1), and that it has at most one cycloid arch. Let us first compute p_x and t_f in the case where y_1=0. If y_1=0 then sin(gp_xt_f)=0, hence gp_xt_f=π+kπ with k∈ℕ, but since t_f must be minimal, we must have k=0 (and this is what will imply that optimal trajectories have at most one arch). Since 2gp_xt_f=2π, we have x_1=x(t_f)=t_f/(2p_x), hence p_x=t_f/(2x_1), and thus t_f=√(2π x_1/g) and p_x=√(π/(2gx_1)). On Figure <ref>, we have represented all optimal trajectories joining points (x_1,0), with x_1>0. Now, we note that, if a trajectory is optimal on the interval [0,t_f], then, for any t_1∈]0,t_f[, it is optimal as well on the interval [0,t_1] for the problem of reaching the point (x(t_1),y(t_1)) in minimal time (this is the dynamic programming principle). From that remark, we deduce that any optimal trajectory of the problem (<ref>) is the truncation of an optimal trajectory reaching a point of the abscissa axis. In particular, we get the desired uniqueness property, and the fact that such a trajectory has at most one point at which ẏ=0 (hence, at most one arch). See Figure <ref>. Moreover, if ẏ(t)=0 then 2gp_x t=π+kπ with k∈ℕ, and necessarily (by optimality) k=0, hence t=π/(2gp_x). Therefore the set of points where ẏ(t)=0 is the parametrized curve x(p_x)=π/(4gp_x^2), y(p_x)=-1/(2gp_x^2), that is, the graph y=-(2/π)x. Therefore, we have proved that the optimal trajectory (x(t),y(t)) reaching (x_1,y_1) is such that

* y(t) passes through a minimum if y_1>-(2/π)x_1,
* y(t) is decreasing if y_1<-(2/π)x_1.

If we investigate the variant of the optimal control problem (<ref>) in which we minimize the final time t_f with y(t_f) free, then in the reduced problem we have moreover that v(t_f) is free, and hence we gain the transversality condition p_v(t_f)=0, hence v̇(t_f)=0. This gives 2gp_x t_f=π; in other words, we find exactly the final points of the previously computed optimal trajectories, stopping when ẏ=0. This means that, if y(t_f) is free (with x_1 fixed), we minimize the time t_f by choosing, on Figure <ref>, the arc of cycloid starting from the origin and reaching x=x_1 with a horizontal tangent.
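The closed-form expressions above are easy to check numerically. Here is a minimal sketch (g and x_1 are assumed values):

import numpy as np

# Assumed data: g = 9.81, target abscissa x1 = 1, final ordinate y1 = 0.
g, x1 = 9.81, 1.0
tf = np.sqrt(2.0 * np.pi * x1 / g)        # t_f = sqrt(2 pi x1 / g)
px = np.sqrt(np.pi / (2.0 * g * x1))      # p_x = sqrt(pi / (2 g x1))

t = np.linspace(0.0, tf, 1001)
x = t / (2.0 * px) - np.sin(2.0 * g * px * t) / (4.0 * g * px**2)
y = -(1.0 - np.cos(2.0 * g * px * t)) / (4.0 * g * px**2)
print(x[-1], y[-1])   # (x1, 0): one full cycloid arch ending on the axis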
CHAPTER: STABILIZATION

In this chapter, our objective will be to stabilize a possibly unstable equilibrium point by means of a feedback control. Let n and m be two positive integers. We consider an autonomous control system in ^n

ẋ(t) = f(x(t),u(t))

where f:^n×^m→^n is of class C^1 with respect to (x,u), and the controls are measurable essentially bounded functions of time taking their values in some measurable subset Ω of ^m (set of control constraints). Let (x̅,u̅)∈^n×^m be an equilibrium point, that is, f(x̅,u̅)=0, such that u̅ belongs to the interior of Ω. Our objective is to design a feedback control u(x) stabilizing locally the equilibrium (x̅,u̅), that is, such that the closed-loop system ẋ(t)=f(x(t),u(x(t))) is locally asymptotically stable at x̅.

§ STABILIZATION OF AUTONOMOUS LINEAR SYSTEMS

§.§ Reminders on stability notions

Consider the linear system ẋ(t)=Ax(t), with A an n× n matrix. The point 0 is of course an equilibrium point of the system (it is the only one if A is invertible). We have the following well-known result (easy to prove with simple linear algebra considerations).

* If there exists a (complex) eigenvalue λ of A such that Re(λ)>0, then the equilibrium point 0 is unstable, meaning that there exists x_0∈^n such that the solution of ẋ(t)=Ax(t), x(0)=x_0 satisfies ‖ x(t)‖→+∞ as t→+∞.

* If all (complex) eigenvalues of A have negative real part, then 0 is asymptotically stable, meaning that all solutions of ẋ(t)=Ax(t) converge to 0 as t→+∞.

* The equilibrium point 0 is stable (i.e., not unstable)[See also the general definition <ref> further.] if and only if all eigenvalues of A have nonpositive real part, and if an eigenvalue λ is such that Re(λ)=0 then λ is a simple root[Equivalently, ker(A-λ I_n)=ker(A-λ I_n)^2, or, equivalently, the Jordan decomposition of A does not have any strict Jordan block.] of the minimal polynomial of A.

The matrix A is said to be Hurwitz if all its eigenvalues have negative real part. We are next going to see two classical criteria ensuring that a given matrix (with real coefficients) is Hurwitz. These criteria are particularly remarkable because they are purely algebraic (polynomial conditions on the coefficients of the matrix), and they do not require the calculation of the roots of the characteristic polynomial of the matrix (which is impossible to achieve algebraically in general, for degrees larger than 5, as is well known from Galois theory).

Routh criterion. We consider the complex polynomial

P(z)=a_0z^n+a_1z^n-1+⋯+a_n-1z+a_n

with real coefficients a_i, and we are going to formulate some conditions under which all roots of this polynomial have negative real part (in that case we also say that P is Hurwitz). Note that A is Hurwitz if and only if its characteristic polynomial χ_A is Hurwitz. The Routh table is defined as follows:

a_0 a_2 a_4 a_6 ⋯ (completed by 0)
a_1 a_3 a_5 a_7 ⋯ (completed by 0)
b_1 b_2 b_3 b_4 ⋯ where b_1=(a_1a_2-a_0a_3)/a_1, b_2=(a_1a_4-a_0a_5)/a_1, …
c_1 c_2 c_3 c_4 ⋯ where c_1=(b_1a_3-a_1b_2)/b_1, c_2=(b_1a_5-a_1b_3)/b_1, …
⋮

The process goes on as long as the first element of the row is not equal to 0, and stops when we have built n+1 rows. The Routh table is said to be complete if it has n+1 rows whose first coefficient is not equal to 0. We have the two following theorems (stated in <cit.>), which can be proved by means of (nonelementary) complex analysis.

All roots of P have negative real part if and only if the Routh table is complete and the elements in the first column have the same sign.

If the Routh table is complete then P has no purely imaginary root, and the number of roots with positive real part is equal to the number of sign changes in the first column.
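The construction of the table is purely mechanical, as the following minimal sketch shows (a hypothetical helper routine, with an assumed example polynomial):

def routh_table(coeffs):
    # Rows of the Routh table of P(z) = a0 z^n + ... + an (a0 != 0);
    # the construction stops early if a row starts with 0 (incomplete table).
    n = len(coeffs) - 1
    width = n // 2 + 1
    rows = [(coeffs[0::2] + [0.0] * width)[:width],
            (coeffs[1::2] + [0.0] * width)[:width]]
    for _ in range(n - 1):
        prev2, prev1 = rows[-2], rows[-1]
        if prev1[0] == 0:
            break
        rows.append([(prev1[0] * prev2[j + 1] - prev2[0] * prev1[j + 1])
                     / prev1[0] for j in range(width - 1)] + [0.0])
    return rows

# Assumed example: P(z) = z^3 + 2 z^2 + 3 z + 1.
table = routh_table([1.0, 2.0, 3.0, 1.0])
print([row[0] for row in table])   # [1.0, 2.0, 2.5, 1.0]: complete table,
                                   # first column positive => P is Hurwitz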
Hurwitz criterion. We set a_n+1=a_n+2=⋯=a_2n-1=0, and we define

H = [ a_1 a_3 a_5 ⋯ ⋯ a_2n-1; a_0 a_2 a_4 ⋯ ⋯ a_2n-2; 0 a_1 a_3 ⋯ ⋯ a_2n-3; 0 a_0 a_2 ⋯ ⋯ a_2n-4; 0 0 a_1 ⋯ ⋯ a_2n-5; ⋮ ⋮ ⋱ ⋮; 0 0 0 * ⋯ a_n ]

where *=a_0 or a_1 according to the parity of n. Let (H_i)_i∈{1,…,n} be the principal minors of H, defined by

H_1=a_1, H_2=|[ a_1 a_3; a_0 a_2 ]|, H_3=|[ a_1 a_3 a_5; a_0 a_2 a_4; 0 a_1 a_3 ]|, …, H_n=det H.

<cit.> If a_0>0, then P is Hurwitz if and only if H_i>0 for every i∈{1,…,n}.

Assume that a_0>0. If all roots of P have nonpositive real part, then a_k≥ 0 and H_k≥ 0 for every k∈{1,…, n}. If n≤ 3 and a_k≥ 0 and H_k≥ 0 for every k∈{1,2,3}, then all roots of P have nonpositive real part. A necessary condition for stability is that a_k≥ 0 for every k∈{1,…,n}. This condition is however not sufficient (take P(z)=z^4+z^2+1).

§.§ Pole-shifting theorem

The linear autonomous control system ẋ(t)=Ax(t)+Bu(t), with x(t)∈^n, u(t)∈^m, A an n× n matrix and B an n× m matrix, is said to be feedback stabilizable if there exists an m× n matrix K (called gain matrix) such that the closed-loop system with the (linear) feedback u(t)=Kx(t),

ẋ(t)=(A+BK)x(t),

is asymptotically stable, i.e., equivalently, A+BK is Hurwitz. This concept is invariant under similarity transformations A_1=PAP^-1, B_1=PB, K_1=KP^-1.

If (A,B) satisfies the Kalman condition rank K(A,B)=n, then for every real polynomial P of degree n whose leading coefficient is 1, there exists an m× n matrix K such that the characteristic polynomial χ_A+BK of A+BK is equal to P.[Actually, the converse statement is also true.]

If the linear control system ẋ(t)=Ax(t)+Bu(t) is controllable then it is stabilizable.

To prove the corollary, it suffices to take for instance P(X)=(X+1)^n and to apply the pole-shifting theorem. We prove the result first in the case m=1. It follows from Theorem <ref> (Brunovský normal form) that the system is similar to

A=[ 0 1 ⋯ 0; ⋮ ⋱ ⋱ ⋮; 0 ⋯ 0 1; -a_n -a_n-1 ⋯ -a_1 ], B = [ 0; ⋮; 0; 1 ].

Setting K=(k_1 ⋯ k_n) and u=Kx, we have

A+BK=[ 0 1 ⋯ 0; ⋮ ⋱ ⋱ ⋮; 0 ⋯ 0 1; k_1-a_n k_2-a_n-1 ⋯ k_n-a_1 ]

and thus χ_A+BK(X) = X^n+(a_1-k_n)X^n-1+⋯+(a_n-k_1). Therefore, for every polynomial P(X)=X^n+α_1X^n-1+⋯+α_n, it suffices to choose k_1=a_n-α_n, …, k_n=a_1-α_1. Let us now prove that the general case m≥ 1 can be reduced to the case m=1. We have the following lemma.

If (A,B) satisfies the Kalman condition, then there exists y∈^m and an m× n matrix C such that (A+BC,By) satisfies the Kalman condition.

The proof of this lemma is given hereafter. It follows from Lemma <ref> that, for every polynomial P of degree n whose leading coefficient is 1, there exists a 1× n matrix K_1 such that χ_A+BC+ByK_1=P, and therefore, defining the m× n matrix K=C+yK_1, we have χ_A+BK=P, and Theorem <ref> is proved.

Let y∈^m be such that By≠ 0. Let x_1=By.

Claim 1: There exists x_2∈ Ax_1+Ran(B) (and thus there exists y_1∈^m such that x_2=Ax_1+By_1) such that dim(Span(x_1,x_2))=2. Indeed, otherwise, Ax_1+Ran(B)⊂ Span(x_1), hence Ax_1∈ Span(x_1) and Ran(B)⊂ Span(x_1). Therefore Ran(AB) = ARan(B)⊂ Span(Ax_1)⊂ Span(x_1) and, by immediate iteration, Ran(A^kB)⊂ Span(x_1) for every integer k. This implies that

Ran(B,AB,…,A^n-1B)=Ran(B)+ Ran(AB)+⋯+Ran(A^n-1B)⊂ Span(x_1),

which contradicts the Kalman condition.

Claim 2: For every k≤ n, there exists x_k∈ Ax_k-1+Ran(B) (and thus there exists y_k-1∈^m such that x_k=Ax_k-1+By_k-1) such that dim(E_k)=k, where E_k=Span(x_1,…,x_k). Indeed, otherwise, Ax_k-1+Ran(B)⊂ E_k-1, and hence Ax_k-1∈ E_k-1 and Ran(B)⊂ E_k-1. Let us then prove that AE_k-1⊂ E_k-1.
Indeed, note that Ax_1=x_2-By_1∈ E_k-1+Ran(B)⊂ E_k-1, and similarly for Ax_2, etc.: Ax_k-2=x_k-1-By_k-2∈ E_k-1+Ran(B)⊂ E_k-1, and finally Ax_k-1∈ E_k-1. Therefore Ran(AB)=A Ran(B)⊂ A E_k-1⊂ E_k-1, and similarly we have Ran(A^iB)⊂ E_k-1 for every integer i. Hence Ran(B,AB,…,A^n-1B)⊂ E_k-1, which contradicts the Kalman condition.

We have thus built a basis (x_1,…,x_n) of ^n. We define the m× n matrix C by the relations Cx_1=y_1, Cx_2=y_2, …, Cx_n-1=y_n-1, and Cx_n arbitrary. Then (A+BC,x_1) satisfies the Kalman condition since (A+BC)x_1=Ax_1+By_1=x_2, …, (A+BC)x_n-1=Ax_n-1+By_n-1=x_n. Lemma <ref> is proved.

To stabilize a linear control system in practice, one has the following solutions:

* If n is not too large, one can apply the Routh or Hurwitz criteria and thus determine an algebraic necessary and sufficient condition on the coefficients of K ensuring the desired stabilization property. Note that the characteristic polynomial of A+BK can be computed with a symbolic computation software like Maple.

* There exist many numerical routines to compute gain matrices. In the Matlab Control Toolbox, we quote acker.m, based on Ackermann's formula (see <cit.>), which is however limited to m=1 and not very reliable numerically. It is better to use place.m, which is a robust pole-shifting routine (see <cit.>) based on spectral considerations (but in which the desired poles have to be pairwise distinct).

* Another way consists of applying the LQ theory, elements of which have been given in Section <ref>, by taking an infinite horizon of time T=+∞, as quickly mentioned at the end of that section (LQR stabilization).

§ STABILIZATION OF INSTATIONARY LINEAR SYSTEMS

For instationary linear systems ẋ(t)=A(t)x(t)+B(t)u(t), the situation is much more complicated, and there is no simple and definitive theory as in the autonomous case. Let us explain which difficulties appear, by considering the system ẋ(t)=A(t)x(t) without any control. A priori one could expect that, if the matrix A(t) is Hurwitz for every t, then the system is asymptotically stable. This is however wrong. The statement remains wrong even under stronger assumptions on A(t), such as assuming that there exists ε>0 such that, for every time t, every (complex) eigenvalue λ(t) of A(t) satisfies Re(λ(t))≤-ε. Indeed, for example, take

A(t) = [ -1+acos^2t 1-asin t cos t; -1-asin t cos t -1+asin^2t ]

with a∈[1,2) arbitrary. Then x(t) = e^(a-1)t[ cos t , -sin t ]^⊤ is a solution of ẋ(t)=A(t)x(t), and does not converge to 0 whenever a≥ 1. Besides, it can be shown that if a<1 then the system is asymptotically stable.
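This counterexample is easily verified numerically. Here is a minimal sketch (the value of a is an assumed choice in [1,2)):

import numpy as np
from scipy.integrate import solve_ivp

# For this example, the eigenvalues of A(t) do not depend on t and have
# real part (a - 2)/2 < 0; yet for a in [1, 2) solutions blow up.
a = 1.5
def A(t):
    ct, st = np.cos(t), np.sin(t)
    return np.array([[-1.0 + a * ct**2, 1.0 - a * st * ct],
                     [-1.0 - a * st * ct, -1.0 + a * st**2]])

print(np.linalg.eigvals(A(0.0)))       # real parts equal to -0.25
sol = solve_ivp(lambda t, x: A(t) @ x, [0.0, 30.0], [1.0, 0.0], rtol=1e-9)
print(np.linalg.norm(sol.y[:, -1]))    # of order e^{(a-1) 30}: divergence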
Let us explain the reason for this failure. A simple way to understand it is to consider the following (not very restrictive) case. Let us assume that, for every t, A(t) is diagonalizable, and that there exists P(t) invertible such that P(t)^-1A(t)P(t)=D(t), with D(t) diagonal, and P(·) and D(·) of class C^1. Setting x(t)=P(t)y(t), we get immediately that

ẏ(t) = ( D(t) - P(t)^-1Ṗ(t) ) y(t).

If the term P(t)^-1Ṗ(t) were equal to 0 (as is the case in the autonomous setting), then, obviously, asymptotic stability would hold true as soon as the eigenvalues (the diagonal of D(t)) have negative real parts. But, even if D(t) is Hurwitz, the term P(t)^-1Ṗ(t) can destabilize the matrix and cause the failure of asymptotic stability. In other words, what may imply the divergence is the fact that the eigenvectors (making up the columns of P(t)) may evolve quickly in time, so that the norm of Ṗ(t) may be large. To end up, however, with a positive result, it can be noted that, if the matrix A(t) is slowly varying in time, then the norm of the term P(t)^-1Ṗ(t) is small, and if one is able to ensure that this norm is small enough with respect to the diagonal D(t), then one can ensure an asymptotic stability result. This is the theory of slowly time-varying linear systems (see <cit.>).

§ STABILIZATION OF NONLINEAR SYSTEMS

§.§ Local stabilization by linearization

Reminders. Consider the continuous dynamical system ẋ(t)=f(x(t)), where f:^n→^n is of class C^1. We denote by x(·,x_0) the unique solution of this system such that x(0,x_0)=x_0. We assume that x̅ is an equilibrium point, that is, f(x̅)=0. The equilibrium point x̅ is said to be stable if, for every ε>0, there exists δ>0 such that, for every initial point x_0 such that ‖ x_0-x̅‖≤δ, one has ‖ x(t,x_0)-x̅‖≤ε for every t≥ 0. It is said to be locally asymptotically stable (in short, LAS) if it is stable and if moreover x(t,x_0)→x̅ as t→+∞ for every x_0 in some neighborhood of x̅. If the neighborhood is the whole ^n then we speak of global asymptotic stability (in short, GAS). If an asymptotic stability result is established in some neighborhood V of x̅, then we say that x̅ is GAS in V.

Let A be the Jacobian matrix of f at the equilibrium point x̅. If all eigenvalues of A have negative real parts (that is, if A is Hurwitz), then x̅ is LAS. If A has an eigenvalue with a positive real part then x̅ is not LAS.

The above linearization theorem is an easy first result (see Example <ref> further for a proof), which however says nothing, at this stage, about the size of the stability neighborhoods.

Application: local stabilization of nonlinear control systems. Consider the general nonlinear control system (<ref>), ẋ=f(x,u), and an equilibrium point (x̅,u̅)∈^n×Ω, as settled at the beginning of Chapter <ref>. Setting x(t)=x̅+δ x(t) and u(t)=u̅+δ u(t) and keeping the terms of order 1, we obtain (as already discussed for controllability issues) the linearized system at (x̅,u̅),

δẋ(t)=Aδ x(t)+Bδ u(t)

where

A=∂ f/∂ x(x̅,u̅) and B=∂ f/∂ u(x̅,u̅).

If one can stabilize the linearized system, that is, find a matrix K of size m× n such that A+BK is Hurwitz (and take δ u=Kδ x), then Theorem <ref> implies a local stabilization result for the nonlinear control system (<ref>). We thus have the following theorem.

If the pair (A,B) satisfies the Kalman condition, then there exists a matrix K of size m× n such that the feedback u = K(x-x̅)+u̅ asymptotically stabilizes the control system (<ref>) locally around (x̅,u̅): the closed-loop system ẋ(t)=f(x(t),K(x(t)-x̅)+u̅) is LAS at x̅.

Note that the stability neighborhood must be small enough so that the closed-loop control u takes its values in the set Ω.

Consider the inverted pendulum system given in Example <ref>. Applying the Routh criterion (Theorem <ref>) and then Theorem <ref>, we establish that a sufficient condition on K=(k_1,k_2,k_3,k_4) to stabilize the inverted pendulum locally at the unstable equilibrium (ξ̅,0,0,0)^⊤ is

k_1>0, k_4-k_2L>0, k_3-k_1L-(m+M)g>0,
k_2((k_4-k_2L)(k_3-k_1L-(m+M)g)-MLgk_2) > k_1(k_4-k_2L)^2.
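The whole procedure (compute the gain on the linearized system, then close the loop on the nonlinear one) can be sketched numerically as follows. The pendulum model, the chosen poles and the use of scipy are illustrative assumptions; note that scipy's place_poles places the eigenvalues of A - BK, so the sign must be flipped to match the convention u = Kx used here:

import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

# Assumed model: pendulum theta'' = sin(theta) + u, linearized at the
# unstable upright equilibrium (theta, theta') = (0, 0), u = 0.
A = np.array([[0.0, 1.0], [1.0, 0.0]])    # Jacobian at the equilibrium
B = np.array([[0.0], [1.0]])

K = -place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(np.linalg.eigvals(A + B @ K))       # -1, -2: A + BK is Hurwitz

def closed_loop(t, x):
    u = (K @ x).item()                    # linear feedback u = Kx
    return [x[1], np.sin(x[0]) + u]       # nonlinear closed-loop dynamics

sol = solve_ivp(closed_loop, [0.0, 10.0], [0.4, 0.0], rtol=1e-8)
print(sol.y[:, -1])                       # close to (0, 0): local asymptotic stability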
Let us stabilize the system locally at the equilibrium point x̅=(a,0,0), u̅=(0,0), where a≠0 is fixed. By Example <ref>, (2), the linearized system (with c=0 and a≠0) at this point is controllable and thus stabilizable with a linear feedback of matrix K. We seek a particular matrix K stabilizing the system, of the form K = [ k_1 0 0 ; 0 k_2 0 ]. By the Routh criterion, it is easy to see that A+BK is Hurwitz if and only if k_1<0 and k_2<0. We infer from Theorem <ref> that the Maxwell-Bloch system is locally stabilizable around x̅, with feedbacks u_1=k_1(x_1-a), u_2=k_2x_2, where k_1,k_2<0.

§.§ Global stabilization by Lyapunov theory

Reminders: Lyapunov and LaSalle theorems. Consider the continuous dynamical system ẋ(t)=f(x(t)), where f:ℝ^n→ℝ^n is of class C^1. We assume that x̅ is an equilibrium point, that is, f(x̅)=0. Let us recall two important results of Lyapunov theory, providing more knowledge on the stability neighborhoods, with the concept of Lyapunov function.

Let 𝒟 be an open subset of ℝ^n containing the equilibrium point x̅. The function V:𝒟→ℝ is called a Lyapunov function at x̅ on 𝒟 if
* V is of class C^1 on 𝒟;
* V(x̅)=0 and V(x)>0 for every x∈𝒟∖{x̅};
* ⟨∇V(x),f(x)⟩≤0 for every x∈𝒟.
If the inequality is strict on 𝒟∖{x̅} then the Lyapunov function V is said to be strict. Note that, along a given trajectory of the dynamical system, one has

d/dt V(x(t)) = ⟨∇V(x(t)),f(x(t))⟩.

Therefore, if V is a Lyapunov function then the value of V is nonincreasing along any trajectory. A Lyapunov function can be seen as a potential well, ensuring stability.

If there exists a Lyapunov function V at x̅ on 𝒟 then x̅ is stable, and if V is strict then x̅ is LAS. If V is strict and proper[V is said to be proper whenever V^{-1}([0,L]) is a compact subset of 𝒟, for every L∈V(𝒟); in other words, the inverse image of every compact set is compact. When 𝒟=ℝ^n, V is proper if and only if V(x)→+∞ as ‖x‖→+∞.] then x̅ is GAS in 𝒟.

When a Lyapunov function is not strict, one can be more specific and infer that trajectories converge to some invariant subset. The following result is even more general and does not assume the existence of an equilibrium point. Assume that V:𝒟→[0,+∞) is a proper C^1 function such that ⟨∇V(x),f(x)⟩≤0 for every x∈𝒟. Let ℐ be the largest subset of

{x∈𝒟 | ⟨∇V(x),f(x)⟩=0}

that is invariant under the flow (in positive time) of the dynamical system. Then all trajectories starting in 𝒟 converge to ℐ, in the sense that d(x(t),ℐ)→0 (Euclidean distance) as t→+∞.

It is interesting to formulate the LaSalle principle in the particular case where the invariant set ℐ is reduced to a singleton (which must then be an equilibrium point). The statement is then as follows. Assume that V is a proper Lyapunov function at x̅ on 𝒟 and that, if x(·) is a solution of the system such that ⟨∇V(x(t)),f(x(t))⟩=0 for every t≥0, then x(t)=x̅. Then the equilibrium point x̅ is GAS in 𝒟.

Let g:ℝ→ℝ be a function of class C^1 such that g(0)=0 and xg(x)>0 if x≠0, and satisfying ∫_0^{+∞} g=+∞ and ∫_{-∞}^0 g=-∞. By considering the Lyapunov function V(x,y)=(1/2)y^2+∫_0^x g(s)ds, it is easy to prove that the point x=ẋ=0 is GAS for the system ẍ+ẋ+g(x)=0 (which has to be written as a first-order system); a numerical check is sketched below.
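For a concrete check, here is a minimal Python sketch with the hypothetical choice g(x)=x^3 (which satisfies all the assumptions above); it integrates the first-order system and displays the decay of V along a trajectory, in accordance with the LaSalle principle.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical choice g(x) = x**3: g(0)=0, x g(x) > 0 for x != 0, and the
# integrals of g diverge at +/- infinity.
g = lambda x: x ** 3
V = lambda x, y: 0.5 * y ** 2 + 0.25 * x ** 4   # (1/2) y^2 + int_0^x g(s) ds

# First-order form of  x'' + x' + g(x) = 0:   x' = y,  y' = -y - g(x).
rhs = lambda t, z: [z[1], -z[1] - g(z[0])]

sol = solve_ivp(rhs, (0.0, 60.0), [2.0, -1.0], dense_output=True, rtol=1e-9)
for t in (0.0, 5.0, 20.0, 60.0):
    x, y = sol.sol(t)
    print(f"t={t:5.1f}   V={V(x, y):.2e}   |(x,y)|={np.hypot(x, y):.2e}")
# V decreases monotonically (dV/dt = -y^2 <= 0) and the state tends to (0,0).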
Consider the system in ℝ^2

ẋ = αx - y - αx(x^2+y^2), ẏ = x + αy - αy(x^2+y^2),

with α>0 arbitrary. The equilibrium point (0,0) is not stable. With the LaSalle principle, it is easy to prove that the unit circle x^2+y^2=1 is globally attractive in ℝ^2∖{(0,0)}, in the sense that a trajectory, starting from any point different from (0,0), converges to the circle in infinite time. Indeed, note that, setting V(x,y)=(1/2)(x^2+y^2), we have

d/dt V(x(t),y(t)) = α(x(t)^2+y(t)^2)(1-x(t)^2-y(t)^2)

and one can see that d/dt V(x(t),y(t)) is positive inside the unit disk (except at the origin), and negative outside of the unit disk. It is then easy to design (by translation) Lyapunov functions inside the punctured unit disk and outside of the unit disk, and to conclude by the LaSalle principle.

[Lyapunov lemma and applications] Let A be an n×n real matrix whose eigenvalues have negative real parts. Then there exists a symmetric positive definite n×n matrix P such that A^⊤P+PA=-I_n. Indeed, it suffices to take P=∫_0^{+∞} e^{tA^⊤}e^{tA} dt. Clearly, the function V(x)=⟨x,Px⟩ is then a strict Lyapunov function for the system ẋ(t)=Ax(t). We recover the fact that 0 is GAS. Using a first-order expansion and norm estimates, it is then easy to prove Theorem <ref> and even to obtain stability neighborhoods thanks to V.

Application to the stabilization of nonlinear control systems. As Theorem <ref> implied Theorem <ref>, the Lyapunov and LaSalle theorems can be applied to control systems, providing knowledge on the stability neighborhoods. For instance, we get the following statement: consider the nonlinear control system (<ref>) and assume that there exists a function V:𝒟→ℝ^+ of class C^1, taking positive values in 𝒟∖{x̅}, such that for every x∈𝒟 there exists u(x)∈Ω such that ⟨∇V(x),f(x,u(x))⟩<0; then the feedback control u stabilizes the control system, globally in 𝒟. Many other similar statements can easily be derived, based on the Lyapunov or LaSalle theorems. There exists an elaborate theory of control Lyapunov functions (see, e.g., <cit.>). A difficulty in this theory is to ensure a nice regularity of the feedback control. We do not discuss this difficult question further, but we mention that it has given rise to a whole field of intensive research (see <cit.>).

To illustrate the role of Lyapunov functions in stabilization, and as an application of the idea described above, we next provide a spectacular and widely used (yet very simple) strategy to design stabilizing controls thanks to Lyapunov functions. Consider the control-affine system in ℝ^n

ẋ(t)=f(x(t))+∑_{i=1}^m u_i(t) g_i(x(t))

where f and the g_i's are smooth vector fields in ℝ^n. Let x̅ be such that f(x̅)=0, i.e., x̅ is an equilibrium point of the uncontrolled system (that is, with u_i=0). We assume that there exists a proper Lyapunov function V at x̅ on ℝ^n for the uncontrolled system, i.e., satisfying
* V(x̅)=0 and V(x)>0 for every x∈ℝ^n∖{x̅};
* V is proper;
* L_fV(x)=⟨∇V(x),f(x)⟩≤0 for every x∈ℝ^n;[The notation L_fV is called the Lie derivative of V along f. It is defined by L_fV(x)=dV(x).f(x)=⟨∇V(x),f(x)⟩, which is the derivative of V along the direction f at x.]
* the set

{x∈ℝ^n | L_fV(x)=0 and L_f^k L_{g_i}V(x)=0 ∀i∈{1,…,m}, ∀k∈ℕ}

is reduced to the singleton {x̅}.
Then the equilibrium point x̅ is GAS in ℝ^n for the control system in closed loop with the feedback control defined by u_i(x)=-L_{g_i}V(x), i=1,…,m.

Let F(x)=f(x)-∑_{i=1}^m L_{g_i}V(x) g_i(x) be the dynamics of the closed-loop system. First of all, we note that F(x̅)=0, that is, x̅ is an equilibrium point for the closed-loop system. Indeed, V is smooth and reaches its minimum at x̅, hence ∇V(x̅)=0, and therefore L_{g_i}V(x̅)=0 for i=1,…,m.
Moreover, we have f(x̅)=0. Besides, we have

L_FV(x)=⟨∇V(x),F(x)⟩ = L_fV(x)-∑_{i=1}^m (L_{g_i}V(x))^2 ≤ 0

and if L_FV(x(t))=0 for every t≥0, then L_fV(x(t))=0 and L_{g_i}V(x(t))=0, i=1,…,m. Differentiating with respect to t, we infer that

0 = d/dt L_{g_i}V(x(t)) = L_fL_{g_i}V(x(t))

since L_{g_i}V(x(t))=0. Therefore, clearly, we get that L_f^k L_{g_i}V(x(t))=0, for every i∈{1,…,m} and every k∈ℕ. By assumption, it follows that x(t)=x̅, and the conclusion follows from the LaSalle principle.

The idea of the Jurdjevic-Quinn method, which can be seen in the above proof, is very simple. The uncontrolled system has a Lyapunov function, which may not be strict. Then, to get an asymptotic stabilization result, we compute the derivative of V along the solutions of the control system, we see that the control enters linearly into the resulting expression, and we design the controls so as to get the desired decrease.

It is remarkable that the Jurdjevic-Quinn method also allows one to design globally stabilizing feedback controls that moreover satisfy some constraints. For instance, in the framework of Theorem <ref>, let us add the requirement that |u_i|≤1, i=1,…,m. Then, with the feedback

u_i = sat(-1, -L_{g_i}V(x), 1) = { -1 if -L_{g_i}V(x) ≤ -1 ; -L_{g_i}V(x) if -1 ≤ -L_{g_i}V(x) ≤ 1 ; 1 if -L_{g_i}V(x) ≥ 1 },

the equilibrium point x̅ is GAS. Indeed, the above proof is easily adapted, and the dynamics F(x) of the closed-loop system is locally Lipschitz.

The Jurdjevic-Quinn method is much used, for instance, for the stabilization of satellites along a given orbit. We next give some applications in mathematical biology (control of populations in Lotka-Volterra systems).

Consider the controlled predator-prey system

ẋ=x(1-y+u), ẏ=-y(1-x),

and the equilibrium point (x=1,y=1). Prove that the function V(x,y)=x-1-ln(x)+y-1-ln(y) satisfies all assumptions of Theorem <ref>, and deduce a feedback control such that the equilibrium point is GAS in x>0, y>0. Note that the function x↦x-1-ln(x) is nonnegative on (0,+∞) and vanishes only at x=1.

[Generalized Lotka-Volterra system] Consider the generalized Lotka-Volterra system

Ṅ_i = N_i ( b_i + ∑_{j=1}^n a_{ij} N_j ), i=1,…,n.

Consider the equilibrium point N̅=(N̅_1,…,N̅_n)^⊤ defined by b+AN̅=0, where b=(b_1,…,b_n)^⊤ and A is the square matrix of coefficients a_{ij}. Let c_1,…,c_n be some real numbers, and let C be the diagonal matrix whose coefficients are the c_i's. We set

V(N) = ∑_{i=1}^n c_i ( N_i-N̅_i - N̅_i ln(N_i/N̅_i) ).

An easy computation shows that

d/dt V(N(t)) = ∑_{i=1}^n c_i (N_i-N̅_i) (b_i+(AN)_i)

where (AN)_i is the i-th component of the vector AN. By noticing that b_i+(AN̅)_i=0, we easily deduce that

d/dt V(N(t)) = (1/2) ⟨ N-N̅ , (A^⊤C+CA)(N-N̅) ⟩.

If there exists a diagonal matrix C such that A^⊤C+CA is negative definite, then we infer that N̅ is GAS.[Note that a necessary condition for A^⊤C+CA to be negative definite is that the diagonal coefficients a_{ii} of A be negative. If at least one of them is zero then A^⊤C+CA is not definite.] Assume for instance that A is skew-symmetric, and take C=I_n. Then V(N(t)) is constant. We introduce some controls, for instance, in the first n-1 equations:

Ṅ_i = N_i ( b_i + ∑_{j=1}^n a_{ij} N_j + α_i u_i ), i=1,…,n-1.

Then, we compute d/dt V(N(t)) = ∑_{i=1}^{n-1} α_i (N_i(t)-N̅_i) u_i(t).
It is then easy to design a feedback control stabilizing the system globally (by the LaSalle principle) to the equilibrium N̅, under the assumption that at least one of the coefficients a_{in}, i=1,…,n-1, is nonzero, and that A is invertible. In the particular case n=2, it is actually possible to moreover ensure that u(t)≥0, by playing with the periodicity of the trajectories (as in Example <ref>).

We consider the bilinear control system in ℝ^2

ẋ_1(t) = x_2(t), ẋ_2(t) = -x_1(t) + u(t)x_1(t),

where the control is subject to the constraint |u|≤1. Let us stabilize this system globally to (0,0). Setting V(x_1,x_2)=(1/2)(x_1^2+x_2^2), we have d/dt V(x_1(t),x_2(t)) = u(t)x_1(t)x_2(t). We choose the feedback u(x_1,x_2) = sat(-1,-x_1x_2,1). To prove the asymptotic stability we apply the LaSalle invariance principle. If V̇≡0 then either x_1≡0 (and then by differentiation we also have x_2≡0), or x_2≡0 (and then by differentiation we also find x_1≡0). In all cases the maximal invariant set is {(0,0)}, which yields the conclusion (a numerical simulation of this saturated feedback is sketched after the next example).

Consider the control system

ẋ(t) = -y(t) + v(t)cos θ(t), ẏ(t) = x(t) + v(t)sin θ(t), θ̇(t) = u(t),

where the controls u and v are subject to the constraints |u|≤1 and |v|≤1. Let us stabilize this system globally to (0,0,0). Setting V=(1/2)(x^2+y^2+θ^2), we have V̇ = v(x cos θ+y sin θ)+θu. We choose the feedback controls

v = -sat(-1, x cos θ+y sin θ, 1) and u = -sat(-1, θ, 1).

If V̇=0 along a trajectory then θ=0, u=0, 0=x cos θ+y sin θ=x, v=0, and thus also 0=ẋ=-y. Therefore the invariant set in the LaSalle principle is reduced to the equilibrium point. This yields the conclusion.
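The following minimal Python sketch (the initial condition and the horizon are arbitrary choices) simulates the closed-loop bilinear system above with the saturated feedback u=sat(-1,-x_1x_2,1), and displays the decay of V.

import numpy as np
from scipy.integrate import solve_ivp

sat = lambda s: np.clip(s, -1.0, 1.0)   # sat(-1, s, 1)

def closed_loop(t, x):
    x1, x2 = x
    u = sat(-x1 * x2)                   # Jurdjevic-Quinn-type saturated feedback
    return [x2, -x1 + u * x1]

sol = solve_ivp(closed_loop, (0.0, 200.0), [3.0, -2.0],
                dense_output=True, rtol=1e-9)
V = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)
for t in (0.0, 50.0, 100.0, 200.0):
    print(f"t={t:6.1f}   V={V(sol.sol(t)):.3e}")
# V decreases (slowly: the uncontrolled system is a harmonic oscillator) and
# the state converges to (0,0), in accordance with the LaSalle principle.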
PART: Control in infinite dimension

In this part we introduce control theory for infinite-dimensional systems, that is, control systems where the state x(t) evolves in an infinite-dimensional Banach space. Controlled partial differential equations enter into this category. As we will see, the tools used to analyze such systems significantly differ from the ones used in finite dimension. The techniques strongly use tools of functional analysis, and the reader should therefore be quite well acquainted with this material; we refer to the textbook <cit.> on functional analysis. The study of the control of nonlinear partial differential equations is beyond the scope of the present monograph, and we refer the reader to <cit.> for a complete survey. Throughout this part, except at the end, we will focus on linear autonomous infinite-dimensional control systems of the form ẋ(t)=Ax(t)+Bu(t), where A and B are operators (which can be viewed, at a first step, as infinite-dimensional matrices).

Since such systems involve partial differential equations, throughout this part the state x(t) will rather be denoted by y(t). For PDEs settled on some domain Ω, y is a function of t and x, where t is the time and x the spatial variable. The control system considered throughout is

ẏ = Ay + Bu

where ẏ means ∂_t y(t,x) when y is a function of (t,x). The first step is to define the concept of a solution, which in itself is far from being obvious, in contrast to the finite-dimensional setting. In infinite dimension the exponential of tA is replaced with the concept of semigroup. Hence, in this part, a whole chapter is devoted to semigroup theory, with the objective of giving a rigorous sense to the solution of (<ref>) with y(0)=y_0,

y(t) = S(t)y_0 + ∫_0^t S(t-s)Bu(s)ds

where S(t) is a semigroup, generalizing e^{tA}. There are plenty of ways to introduce the theory of controlled PDEs. Here, one of our main objectives is to provide the general framework in which the Hilbert Uniqueness Method (HUM) of J.-L. Lions can be stated.

CHAPTER: SEMIGROUP THEORY

The objective of this chapter is to establish that, in an appropriate functional setting, there is a unique solution of the Cauchy problem

ẏ(t) = Ay(t)+f(t), y(0) = y_0,

where A is a linear operator on a Banach space X, and where y(t) and f(t) evolve in X, which is given by

y(t) = S(t)y_0 + ∫_0^t S(t-s)f(s)ds

where (S(t))_{t≥0} is the semigroup generated by the operator A. In finite dimension (that is, if X=ℝ^n), this step is easy and one has S(t)=e^{tA}, with the usual matrix exponential. In infinite dimension this step is far from being obvious and requires one to rigorously define the concepts of (unbounded) operator and of semigroup. The reader can keep in mind the example where the operator A is the Dirichlet-Laplacian, defined on a domain Ω of ℝ^n. Most results of the present chapter are borrowed from the textbooks <cit.> and <cit.> on semigroup theory and from <cit.>, and are given without proof.

Let us recall several basic notions of functional analysis that are instrumental in what follows (see <cit.>). Let X be a Banach space, endowed with a norm denoted by ‖ ‖_X, or simply by ‖ ‖ when there is no ambiguity. Let Y be another Banach space. The norm of a bounded (i.e., continuous) linear mapping g:X→Y is denoted as well by ‖g‖ and is defined as usual by

‖g‖ = sup_{x∈X∖{0}} ‖g(x)‖_Y / ‖x‖_X.

The set of bounded linear mappings from X to Y is denoted by L(X,Y). The notation X' stands for the (topological) dual of the Banach space X, that is, the vector space of all linear continuous mappings ℓ:X→ℝ (in other words, X'=L(X,ℝ)). Endowed with the norm of linear continuous forms defined above, it is a Banach space. The duality bracket is defined as usual by ⟨ℓ,x⟩_{X',X} = ℓ(x), for every ℓ∈X' and every x∈X.

In what follows, the word operator is a synonym for mapping. By definition, an unbounded linear operator A from X to Y is a linear mapping A:D(A)→Y defined on a vector subspace D(A)⊂X called the domain of A. The operator A is said to be bounded[Note that the terminology is paradoxical, since an unbounded linear operator can be bounded! Actually, "unbounded operator" usually indicates that A is defined on a domain D(A) that is a proper subset of X.] if D(A)=X and if there exists C>0 such that ‖Ax‖_Y≤C‖x‖_X for every x∈D(A).

The operator A:D(A)⊂X→Y is said to be closed whenever its graph

G(A) = { (x,Ax) | x∈D(A) }

is a closed subset of X×Y. By the closed graph theorem, A is a continuous linear mapping from X to Y if and only if D(A)=X and G(A) is closed.

Let A:D(A)⊂X→Y be a densely defined linear operator (that is, D(A) is dense in X). The adjoint operator A^*:D(A^*)⊂Y'→X' is defined as follows. We set

D(A^*) = { z∈Y' | ∃C≥0 such that ∀x∈D(A), |⟨z,Ax⟩_{Y',Y}|≤C‖x‖_X }.

Then D(A^*) is a vector subspace of Y'. For every z∈D(A^*), we define the linear form ℓ_z:D(A)→ℝ by ℓ_z(x)=⟨z,Ax⟩_{Y',Y} for every x∈D(A). By definition of D(A^*) we have |ℓ_z(x)|≤C‖x‖_X for every x∈D(A). Since D(A) is dense in X, it follows that the linear form ℓ_z can be extended in a unique way to a continuous linear form on X, denoted by ℓ̃_z∈X' (classical continuous extension argument of uniformly continuous mappings on complete spaces). Then we set A^*z=ℓ̃_z. This defines the unbounded linear operator A^*:D(A^*)⊂Y'→X', called the adjoint of A. The fundamental property of the adjoint is that

⟨z,Ax⟩_{Y',Y} = ⟨A^*z,x⟩_{X',X}

for every x∈D(A) and every z∈D(A^*).
Note that:
* A is bounded if and only if A^* is bounded, and in this case their norms are equal;
* A^* is closed;
* D(A^*) is not necessarily dense in Y' (even if A is closed); however, if A is closed and if Y is reflexive then D(A^*) is dense in Y'.

In the case where X is a Hilbert space, we identify X' with X. A densely defined linear operator A:D(A)⊂X→X is said to be self-adjoint (resp. skew-adjoint) whenever D(A^*)=D(A) and A^*=A (resp., A^*=-A). Note that self-adjoint and skew-adjoint operators are necessarily closed.

More generally, given two Hilbert spaces X and Z such that Z↪X, i.e., Z is continuously embedded in X, we have X'↪Z'. Now we can decide, by the Riesz theorem, to identify X with X'; in this case, we have the triple Z↪X↪Z' (but then we cannot identify Z with Z'). Then, for any x∈X⊂Z' and any z∈Z⊂X, we have ⟨x,z⟩_{Z',Z} = (x,z)_X, i.e., the duality bracket ⟨ , ⟩_{Z',Z}, standing for the application of a linear continuous mapping on Z (that is, an element of Z') to an element of Z, is identified with the scalar product on X when both elements are in X. We say that X is the pivot space.

Throughout this part, we consider a Banach space X. An operator on X will mean an (unbounded) linear operator A:D(A)⊂X→X. In practice, most unbounded operators used to model systems of the form (<ref>) are operators A:D(A)⊂X→X that are closed and whose domain D(A) is dense in X. When an integral is considered over the Banach space X (like in (<ref>)), it is understood that it is in the usual sense of the Bochner integral (see <cit.>).

§ HOMOGENEOUS CAUCHY PROBLEMS

We first focus on the homogeneous Cauchy problem, that is, (<ref>) with f=0, with the objective of giving a sense to the unique solution y(t)=S(t)y_0.

§.§ Semigroups of linear operators

In the sequel, the notation id_X stands for the identity mapping on X. A C_0 semigroup of bounded linear operators on X is a one-parameter family (S(t))_{t≥0} of bounded linear mappings S(t)∈L(X) such that
* S(0)=id_X;
* S(t+s)=S(t)S(s) for all (t,s)∈[0,+∞)^2 (semigroup property);
* lim_{t→0, t>0} S(t)y=y for every y∈X.
The linear operator A:D(A)⊂X→X, defined by

Ay = lim_{t→0^+} (S(t)y-y)/t

on the domain D(A) that is the set of y∈X such that the above limit (computed in X, i.e., with the norm ‖·‖_X) exists, is called the infinitesimal generator of the semigroup (S(t))_{t≥0}. The semigroup is said to be a group if the second property holds true for all (t,s)∈ℝ^2.

<cit.> Let (S(t))_{t≥0} be a C_0 semigroup. Then
* the mapping t∈[0,+∞)↦S(t)y is continuous for every y∈X;
* A is closed and D(A) is dense in X;
* S(t)y∈D(A) and Ṡ(t)y=AS(t)y for all y∈D(A) and t>0.
This proposition shows that the notion of C_0 semigroup is adapted to solve the homogeneous Cauchy problem.

Actually, more generally, semigroups are defined with the first two items of Definition <ref> (see <cit.>). The additional third property characterizes so-called C_0 semigroups (also called strongly continuous semigroups). It is a simple (pointwise) convergence property. If the C_0 semigroup (S(t))_{t≥0} satisfies the stronger (uniform convergence) property lim_{t→0, t>0} ‖S(t)-id_X‖=0, then it is said to be uniformly continuous. The following result is however proved in <cit.>: A linear operator A:D(A)→X is the infinitesimal generator of a uniformly continuous semigroup if and only if A is bounded and D(A)=X. In that case, moreover, S(t) = e^{tA} = ∑_{n=0}^{+∞} (t^n/n!) A^n, and the semigroup property can be checked numerically, as in the sketch below.
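In the bounded case, the statements above can be explored with a few lines of Python; the following minimal sketch (an arbitrary random 4×4 matrix stands for A) checks the semigroup property and the uniform continuity at 0^+.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))     # a bounded operator on X = R^4

S = lambda t: expm(t * A)           # S(t) = e^{tA}

# Semigroup property S(t+s) = S(t) S(s), and S(0) = id_X:
t, s = 0.7, 1.3
print(np.allclose(S(t + s), S(t) @ S(s)))   # True
print(np.allclose(S(0.0), np.eye(4)))       # True

# Uniform continuity at 0+ : ||S(t) - id_X|| -> 0 as t -> 0+.
for t in (1e-1, 1e-3, 1e-5):
    print(t, np.linalg.norm(S(t) - np.eye(4), 2))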
The above result shows that, as soon as a given control process in infinite dimension involves unbounded operators (i.e., D(A)⊊X, like for instance a PDE with a Laplacian), the underlying semigroup is not uniformly continuous. Actually, as soon as an operator involves a differentiation, it is unbounded. In what follows we focus on C_0 semigroups.

<cit.> Let (S(t))_{t≥0} be a C_0 semigroup. There exist M≥1 and ω∈ℝ such that

‖S(t)‖ ≤ Me^{ωt} ∀t≥0.

We say that (S(t))_{t≥0}∈𝒢(M,ω). The infimum ω^* of all possible real numbers ω such that (<ref>) is satisfied for some M≥1 is the growth bound of the semigroup, and is given by

ω^* = inf_{t>0} (1/t) ln‖S(t)‖.

This proposition shows that C_0 semigroups have at most an exponential growth in norm. This property is similar to what happens in finite dimension.

Let A:D(A)→X be a linear operator on X defined on the domain D(A)⊂X. The resolvent set ρ(A) of A is defined as the set of complex numbers λ such that λid_X-A:D(A)→X is invertible and (λid_X-A)^{-1}:X→X is bounded (we say it is boundedly invertible). The resolvent of A is defined by R(λ,A)=(λid_X-A)^{-1}, for every λ∈ρ(A). Notice the so-called resolvent identity, often instrumental in proofs:

R(λ,A)-R(μ,A) = (μ-λ)R(λ,A)R(μ,A) ∀(λ,μ)∈ρ(A)^2.

If (S(t))_{t≥0}∈𝒢(M,ω) then {λ∈ℂ | Re(λ)>ω}⊂ρ(A), and

R(λ,A)y = (λid_X-A)^{-1}y = ∫_0^{+∞} e^{-λt} S(t)y dt

for every y∈X and every λ∈ℂ such that Re(λ)>ω (Laplace transform). Indeed, integrating by parts, one has λ∫_0^{+∞}e^{-λt}S(t)dt = id_X + A∫_0^{+∞}e^{-λt}S(t)dt and thus

(λid_X - A) ∫_0^{+∞}e^{-λt}S(t)dt = id_X.

Note that, using the expression (<ref>) of R(λ,A) with the Laplace transform, it follows that, if (S(t))_{t≥0}∈𝒢(M,ω), then ‖R(λ,A)‖≤M/(Re(λ)-ω) for every λ∈ℂ such that Re(λ)>ω. This can be iterated, by differentiating R(λ,A) with respect to λ, using the resolvent formula and the Laplace transform, and this yields the estimates ‖R(λ,A)^n‖≤M/(Re(λ)-ω)^n for every n∈ℕ^* and every λ∈ℂ such that Re(λ)>ω. Actually, we have the following general result.

<cit.> A linear operator A:D(A)⊂X→X is the infinitesimal generator of a C_0 semigroup (S(t))_{t≥0}∈𝒢(M,ω) if and only if the following conditions are satisfied:
* A is closed and D(A) is dense in X;
* (ω,+∞)⊂ρ(A) and ‖R(λ,A)^n‖≤M/(λ-ω)^n for every n∈ℕ^* and every real λ>ω.

Particular case: contraction semigroups. Let (S(t))_{t≥0} be a C_0 semigroup. Assume that (S(t))_{t≥0}∈𝒢(M,ω) for some M≥1 and ω∈ℝ. If ω≤0 and M=1 then (S(t))_{t≥0} is said to be a semigroup of contractions. Semigroups of contractions are of great importance and cover many applications. They are mostly considered in many textbooks (such as <cit.>) and in that case Theorem <ref> takes the following more specific forms, which are the well-known Hille-Yosida and Lumer-Phillips theorems.

A linear operator A:D(A)⊂X→X is the infinitesimal generator of a C_0 semigroup of contractions if and only if the following conditions are satisfied:
* A is closed and D(A) is dense in X;
* (0,+∞)⊂ρ(A) and ‖R(λ,A)‖≤1/λ for every λ>0.
The latter condition can equivalently be replaced by: {λ∈ℂ | Re(λ)>0}⊂ρ(A) and ‖R(λ,A)‖≤1/Re(λ) for every λ∈ℂ such that Re(λ)>0.

Let A:D(A)→X be the generator of a C_0 semigroup (S(t))_{t≥0}∈𝒢(1,ω). Then the operator A_ω=A-ωid_X (having the same domain) is the infinitesimal generator of S_ω(t)=e^{-ωt}S(t), which is a semigroup of contractions (and conversely). In particular, we obtain the following corollary.
A linear operator A:D(A)⊂X→X is the infinitesimal generator of a C_0 semigroup (S(t))_{t≥0}∈𝒢(1,ω) if and only if the following conditions are satisfied:
* A is closed and D(A) is dense in X;
* (ω,+∞)⊂ρ(A) and ‖R(λ,A)‖≤1/(λ-ω) for every λ>ω.

Before providing the statement of the Lumer-Phillips theorem, which is another characterization of C_0 semigroups of contractions, let us recall some important definitions. For every y∈X, we define F(y) = {ℓ∈X' | ⟨ℓ,y⟩_{X',X} = ‖y‖_X^2 = ‖ℓ‖_{X'}^2}. It follows from the Hahn-Banach theorem that F(y) is nonempty. In the important case where X is a Hilbert space, one has y∈F(y) (identifying X' with X). The operator A:D(A)⊂X→X is said to be:
* dissipative if for every y∈D(A) there exists an element ℓ∈F(y) such that ⟨ℓ,Ay⟩_{X',X}≤0;
* m-dissipative if it is dissipative and Ran(id_X-A)=(id_X-A)D(A)=X.
If X is a Hilbert space, then A is dissipative if and only if (y,Ay)_X≤0 for every y∈D(A), where ( , )_X is the scalar product of X. Other names are often used in the existing literature (see <cit.>): A is dissipative if and only if -A is accretive, if and only if -A is monotone, and A is m-dissipative if and only if -A is maximal monotone (the letter m stands for maximal). If A:D(A)→X is m-dissipative then Ran(λid_X-A)=X for every λ>0.

Let A:D(A)→X be an m-dissipative operator. If X is reflexive[There is a canonical injection ι:X→X'' (the bidual of the Banach space X), defined by ⟨ιy,ℓ⟩_{X'',X'} = ⟨ℓ,y⟩_{X',X} for every y∈X and every ℓ∈X', which is a linear isometry, so that X can be identified with a subspace of X''. The Banach space X is said to be reflexive whenever ι(X)=X''; in this case, X'' is identified with X through the isomorphism ι.] then A is closed and densely defined (i.e., D(A) is dense in X).

Let A:D(A)⊂X→X be a densely defined closed linear operator. Then A is the infinitesimal generator of a C_0 semigroup of contractions if and only if A is m-dissipative. Note that it is not necessary to assume that A is closed and densely defined in this theorem if X is reflexive. A statement which is very often useful is the following one (see <cit.>). Let A:D(A)⊂X→X be a densely defined operator. If A is closed and if both A and A^* are dissipative then A is the infinitesimal generator of a C_0 semigroup of contractions; the converse is true if X is moreover reflexive. If A is skew-adjoint then it is closed and dissipative. Actually, A is skew-adjoint if and only if A and -A are m-dissipative (see <cit.>).

Let Ω be an open subset of ℝ^n. The Dirichlet-Laplacian operator Δ_D is defined on D(Δ_D) = { f∈H^1_0(Ω) | Δf∈L^2(Ω) } by Δ_D f = Δf for every f∈D(Δ_D), where Δ is the usual Laplacian differential operator. Note that H^1_0(Ω)∩H^2(Ω)⊂D(Δ_D) and that, in general, the inclusion is strict. However, if Ω is an open bounded subset with C^2 boundary, or if Ω is a convex polygon of ℝ^2, then D(Δ_D) = H^1_0(Ω)∩H^2(Ω) (see <cit.>, see also <cit.>). The operator Δ_D is self-adjoint and dissipative in X=L^2(Ω); hence, by Proposition <ref>, it generates a semigroup of contractions, called the heat semigroup.

The Dirichlet-Laplacian can as well be defined in the space X=H^-1(Ω) (which is the dual of H^1_0(Ω) with respect to the pivot space L^2(Ω)). In that case, assuming that Ω is an open bounded subset of ℝ^n, one has D(Δ_D) = H^1_0(Ω), and Δ_D:H^1_0(Ω)→H^-1(Ω) is an isomorphism (see <cit.>).

Anticipating a bit, let us study the operator underlying the wave equation.
With the notations of Example <ref>, we define the operator

A = [ 0  id_{H^1_0(Ω)} ; Δ_D  0 ]

on the domain D(A) = D(Δ_D)×H^1_0(Ω), in the Hilbert space X=H^1_0(Ω)×L^2(Ω). Then it is easy to see that A is closed, densely defined, skew-adjoint and thus m-dissipative, as well as -A. Hence A and -A both generate a semigroup of contractions, and therefore A generates a group of contractions. The fact that it is a group reflects the fact that the wave equation is time-reversible.

Let X be a Hilbert space, let (e_n)_{n∈ℕ^*} be a Hilbert basis of X, and let (λ_n)_{n∈ℕ^*} be a sequence of real numbers such that sup_{n≥1}λ_n<+∞ (this is satisfied if λ_n→-∞ as n→+∞). We define the operator

Ay = ∑_{n=1}^{+∞} λ_n (y,e_n)_X e_n on D(A) = { y∈X | ∑_{n=1}^{+∞} λ_n^2 (y,e_n)_X^2 <+∞ }.

Let us prove that A is self-adjoint and generates the C_0 semigroup defined by

S(t)y = ∑_{n=1}^{+∞} e^{λ_n t} (y,e_n)_X e_n.

Firstly, noting that X_p⊂D(A) for every p∈ℕ^*, with X_p = { y∈X | (y,e_n)_X=0 ∀n≥p }, and that ∪_{p∈ℕ^*} X_p is dense in X, it follows that D(A) is dense in X.

Secondly, let us prove that A is closed. Let (y_p)_{p∈ℕ^*} be a sequence of D(A) such that y_p→y∈X and Ay_p→z∈X as p tends to +∞. In particular, (Ay_p)_{p∈ℕ^*} is bounded in X, and thus there exists M>0 such that ∑_{n=1}^N λ_n^2 (y_p,e_n)_X^2 ≤ M, for every p∈ℕ^* and every N∈ℕ^*. Letting p tend to +∞ then yields that ∑_{n=1}^{+∞} λ_n^2 (y,e_n)_X^2 ≤ M, and thus y∈D(A). It remains to prove that z=Ay. Since Ay_p=∑_{n=1}^{+∞} λ_n (y_p,e_n)_X e_n, and since, for every n∈ℕ^*, (Ay_p,e_n)_X = λ_n (y_p,e_n)_X converges to λ_n (y,e_n)_X = (Ay,e_n)_X, it follows that Ay_p converges weakly to Ay in X. By uniqueness of the limit it follows that z=Ay.

Now, let us prove that λid_X-A is boundedly invertible if and only if inf_{n≥1}|λ-λ_n|>0. If inf_{n≥1}|λ-λ_n|>0, we set

A_λ y = ∑_{n=1}^{+∞} (1/(λ-λ_n)) (y,e_n)_X e_n

for every y∈X. Clearly, A_λ:X→X is linear and bounded, and one has Ran(A_λ)⊂D(A) and (λid_X-A)A_λ=A_λ(λid_X-A)=id_X; hence λ∈ρ(A) and A_λ=(λid_X-A)^{-1}. Conversely, if λid_X-A is boundedly invertible, then for every n∈ℕ^* there exists y_n∈X such that (λid_X-A)y_n=e_n, and the sequence (y_n)_{n∈ℕ^*} is bounded. One has y_n=e_n/(λ-λ_n), and hence inf_{n≥1}|λ-λ_n|>0.

It follows from these arguments that if inf_{n≥1}|λ-λ_n|>0 then

R(λ,A)^p y = ∑_{n=1}^{+∞} (1/(λ-λ_n)^p) (y,e_n)_X e_n

and hence ‖R(λ,A)^p‖ ≤ sup_{n≥1} 1/|λ-λ_n|^p = ( sup_{n≥1} 1/|λ-λ_n| )^p.

Let ω≥sup_{n≥1}λ_n. Then for every λ>ω one has inf_{n≥1}|λ-λ_n|≥λ-ω, and hence sup_{n≥1} (λ-ω)/|λ-λ_n| ≤ 1. The conclusion then follows from the Hille-Yosida theorem.

This example can be applied to a number of situations. Indeed, any self-adjoint operator having a compact inverse (like the Dirichlet-Laplacian) is diagonalizable, and the general framework of this example can then be applied; a numerical illustration in one space dimension is sketched below.
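The following Python sketch instantiates this framework for the 1D Dirichlet-Laplacian on (0,1), for which λ_j=-(jπ)^2 and e_j(x)=√2 sin(jπx); the truncation order, the grid, and the initial datum are arbitrary choices, and the coefficients are computed by a rough quadrature.

import numpy as np

# Diagonalizable generator: the 1D Dirichlet-Laplacian on (0,1), with
# eigenvalues lambda_j = -(j*pi)^2 and eigenfunctions e_j(x) = sqrt(2) sin(j*pi*x).
J = 200                                    # truncation order (an approximation)
x = np.linspace(0.0, 1.0, 1001)
j = np.arange(1, J + 1)
lam = -(j * np.pi) ** 2
e = np.sqrt(2.0) * np.sin(np.outer(j, x))  # e[j-1] sampled on the grid

y0 = np.minimum(x, 1.0 - x)                # a "hat" initial datum in L^2(0,1)
c = e @ y0 / (len(x) - 1)                  # coefficients (y0, e_j) by quadrature

def S(t):
    """Heat semigroup: S(t)y0 = sum_j e^{lambda_j t} (y0, e_j) e_j."""
    return (np.exp(lam * t) * c) @ e

for t in (0.0, 0.01, 0.1, 1.0):
    yt = S(t)
    print(f"t={t:5.2f}   L2 norm ~ {np.sqrt(np.sum(yt ** 2) / (len(x) - 1)):.4f}")
# The L^2 norm is nonincreasing: the heat semigroup is a semigroup of contractions.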
In order to model the 1D transport equation ∂_t y+∂_x y=0, for 0≤x≤1, we define X=L^2(0,1) and the operator Ay=-∂_x y on the domain D(A)={ y∈X | ∂_x y∈X, y(0)=0 }. It is easy to prove that A is closed and densely defined, and that A and A^* are dissipative (it can also be seen that A-λid_X is surjective), and hence A generates a C_0 semigroup of contractions.

Recall that (as said in the introduction of the chapter), given a densely defined operator A:D(A)→X, the adjoint A^*:D(A^*)→X' is a closed operator, and if moreover A is closed and X is reflexive then D(A^*) is dense in X'. Concerning the semigroup properties, note that, given a C_0 semigroup (S(t))_{t≥0} on X, the adjoint family (S(t)^*)_{t≥0} is a family of bounded operators on X' satisfying the semigroup property, but it is not necessarily a C_0 semigroup.

<cit.> If X is reflexive and if (S(t))_{t≥0} is a C_0 semigroup on X with generator A, then (S(t)^*)_{t≥0} is a C_0 semigroup on X' with generator A^*.

§.§ The Cauchy problem

§.§.§ Classical solutions

Let A:D(A)⊂X→X be a densely defined linear operator on the Banach space X. Consider the Cauchy problem

ẏ(t)=Ay(t), t>0, y(0)=y_0∈D(A).

As a consequence of Proposition <ref> we have the following result. Assume that A is the infinitesimal generator of a C_0 semigroup (S(t))_{t≥0} on X. Then the Cauchy problem (<ref>) has a unique solution y∈C^0([0,+∞);D(A))∩C^1((0,+∞);X), given by y(t)=S(t)y_0 for every t≥0. Moreover, the differential equation ẏ(t)=Ay(t) makes sense in X. This solution is often called a strong solution in the existing literature.

Let Ω be a bounded open subset of ℝ^n having a Lipschitz boundary (to be able to define a trace). Having in mind Example <ref>, let us apply Theorem <ref> to the Dirichlet heat equation. The Cauchy problem

∂_t y=Δy in Ω, y_{|∂Ω}=0, y(0)=y_0∈H^1_0(Ω),

has a unique solution y∈C^0([0,+∞);H^1_0(Ω))∩C^1((0,+∞);H^-1(Ω)). Moreover, there exist M≥1 and ω<0 such that ‖y(t,·)‖_{L^2(Ω)}≤Me^{ωt}‖y_0(·)‖_{L^2(Ω)}.

Let Ω be a bounded open subset of ℝ^n having a Lipschitz boundary. Having in mind Example <ref>, let us apply Theorem <ref> to the Dirichlet wave equation. The Cauchy problem

∂_{tt}y=Δy in Ω, y_{|∂Ω}=0, y(0)=y_0∈H^1_0(Ω), ∂_t y(0)=y_1∈L^2(Ω),

has a unique solution y∈C^0([0,+∞);H^1_0(Ω))∩C^1((0,+∞);L^2(Ω))∩C^2((0,+∞);H^-1(Ω)). Moreover, we have conservation of energy: ‖∂_t y(t)‖_{H^-1(Ω)}^2+‖y(t)‖_{L^2(Ω)}^2 = ‖y_1‖_{H^-1(Ω)}^2+‖y_0‖_{L^2(Ω)}^2 (by integration by parts). If ∂Ω is C^2 and if y_0∈H^1_0(Ω)∩H^2(Ω) and y_1∈H^1_0(Ω), then

y∈C^0([0,+∞);H^2(Ω)∩H^1_0(Ω))∩C^1((0,+∞);H^1_0(Ω))∩C^2((0,+∞);L^2(Ω))

and ‖∂_t y(t)‖_{L^2(Ω)}^2+‖y(t)‖_{H^1_0(Ω)}^2 = ‖y_1‖_{L^2(Ω)}^2+‖y_0‖_{H^1_0(Ω)}^2.

Concerning the regularity of the solutions of the heat equation (Example <ref>) and of the wave equation (Example <ref>), we can actually be much more precise, by expanding the solutions as a series with the eigenfunctions of the Dirichlet-Laplacian. Indeed, let (ϕ_j)_{j∈ℕ^*} be a Hilbert basis of L^2(Ω) consisting of eigenfunctions of the Dirichlet-Laplacian, corresponding to the eigenvalues (λ_j)_{j∈ℕ^*}. For the heat equation of Example <ref>, if y_0=∑_{j=1}^{+∞} a_jϕ_j∈L^2(Ω) then y(t,x)=∑_{j=1}^{+∞} a_j e^{λ_j t}ϕ_j(x) is a function of (t,x) of class C^∞ for t>0 (see <cit.>), and, for every fixed t>0, the function x↦y(t,x) is (real) analytic on the open set Ω (see <cit.>). This reflects the smoothing effect of the heat equation. For the wave equation, there is no smoothing effect, but smoothness or analyticity properties can also be established for appropriate initial conditions (see <cit.>). These remarks show that the regularity properties obtained by the general semigroup theory may be much improved when using the specific features of the operator under consideration (see also Remark <ref> further).

If y_0∈X∖D(A) then in general y(t)=S(t)y_0∉D(A), and hence y(t) is not a solution of (<ref>) in the above sense.
Actually, y(t) is a solution of ẏ(t)=Ay(t) in a weaker sense, obtained by replacing A with an extension of A to X, as we are going to see next.

§.§.§ Weak solutions

The objective of this section is to define an extension of the Banach space X, and extensions of C_0 semigroups on X, which will provide weaker solutions. A good reference for this part is the textbook <cit.>.

Let (S(t))_{t≥0}∈𝒢(M,ω) be a C_0 semigroup on X, of generator A:D(A)→X. Let β∈ρ(A) (if X is real, consider such a real number β). Let X_1 denote the Banach space D(A), equipped with the norm ‖y‖_{X_1}=‖(βid_X-A)y‖_X, and let X_{-1} denote the completion of X with respect to the norm ‖y‖_{X_{-1}}=‖(βid_X-A)^{-1}y‖_X = ‖R(β,A)y‖_X. Note that, by definition, βid_X-A:X_1→X and (βid_X-A)^{-1}:X→X_1 are surjective isometries (unitary operators). It is then easy to see that the norm ‖ ‖_1 on X_1 is equivalent to the graph norm ‖y‖_G=‖y‖_X+‖Ay‖_X. Therefore, from the closed graph theorem, (X_1,‖ ‖_1) is a Banach space, and we clearly get an equivalent norm by considering any other β'∈ρ(A). Similarly, the space X_{-1} does not depend on the specific value of β∈ρ(A), in the sense that we get an equivalent norm by considering any other β'∈ρ(A). Indeed, for all (β,β')∈ρ(A)^2, we have (βid_X-A)(β'id_X-A)^{-1}=id_X+(β-β')(β'id_X-A)^{-1}, hence

(β'id_X-A)^{-1} = (βid_X-A)^{-1} + (β-β')(βid_X-A)^{-1}(β'id_X-A)^{-1}

(resolvent identity), and moreover (βid_X-A)^{-1} and (β'id_X-A)^{-1} commute. The conclusion follows easily. The injections X_1↪X↪X_{-1} are, by definition, continuous and dense. They are moreover compact as soon as βid_X-A has a compact inverse (i.e., as soon as A has compact resolvents).

Let Ω be an open bounded subset of ℝ^n having a C^2 boundary, and consider the Dirichlet-Laplacian Δ_D on X=L^2(Ω), defined on D(Δ_D)=H^1_0(Ω)∩H^2(Ω). Then X_1=D(Δ_D)=H^1_0(Ω)∩H^2(Ω) and, as will follow from Theorem <ref> below, X_{-1}=(H^1_0(Ω)∩H^2(Ω))', where the dual is taken with respect to the pivot space X=L^2(Ω).

Let us now provide a general theorem allowing one to identify the space X_{-1}. Since A^* is closed, D(A^*), endowed with the norm ‖z‖_{D(A^*)}=‖(βid_{X'}-A^*)z‖_{X'} where β∈ρ(A^*)=ρ(A), is a Banach space. If X is reflexive then X_{-1} is isomorphic to D(A^*)', where the dual is taken with respect to the pivot space X.

We begin by recalling the following general fact: if E and F are two Banach spaces with a continuous injection E↪F, then we have a continuous injection F'↪E'. From this general fact, since D(A^*)⊂X' with a continuous injection, it follows that X''⊂D(A^*)' (with a continuous injection). Using the canonical injection from X to X'', it follows that every element of X is (identified with) an element of D(A^*)'. Let us prove that ‖y‖_{X_{-1}} = ‖y‖_{D(A^*)'} for every y∈X. By definition, we have

‖y‖_{X_{-1}} = ‖(βid_X-A)^{-1}y‖_X = sup{ ⟨f,(βid_X-A)^{-1}y⟩_{X',X} | f∈X', ‖f‖_{X'}≤1 } = sup{ ⟨(βid_{X'}-A^*)^{-1}f,y⟩_{X',X} | f∈X', ‖f‖_{X'}≤1 }.

Using the canonical injection of X in X'' (and not yet the fact that X is reflexive), y can be considered as an element of X'', and then

‖y‖_{X_{-1}} = sup{ ⟨y,(βid_{X'}-A^*)^{-1}f⟩_{X'',X'} | f∈X', ‖f‖_{X'}≤1 }.

Besides, by definition we have

‖y‖_{D(A^*)'} = sup{ ⟨y,z⟩_{D(A^*)',D(A^*)} | z∈D(A^*), ‖z‖_{D(A^*)}≤1 }.

In this expression we make a change of variable: for every z∈D(A^*) such that ‖z‖_{D(A^*)}≤1, there exists f∈X' such that z=(βid_{X'}-A^*)^{-1}f, and since ‖z‖_{D(A^*)} = ‖(βid_{X'}-A^*)z‖_{X'}=‖f‖_{X'} it follows that ‖f‖_{X'}≤1.
Therefore

‖y‖_{D(A^*)'} = sup{ ⟨y,(βid_{X'}-A^*)^{-1}f⟩_{D(A^*)',D(A^*)} | f∈X', ‖f‖_{X'}≤1 }.

In the above duality bracket, since y∈X'' and (βid_{X'}-A^*)^{-1}f∈X', we can as well use the duality bracket ⟨ , ⟩_{X'',X'}. Hence ‖y‖_{X_{-1}} = ‖y‖_{D(A^*)'}. To conclude the proof, it remains to note that X_1 is dense in X and that X≃X'' is dense in D(A^*)'. It is a general fact that X_1 is dense in X (see Proposition <ref>). The fact that X is dense in D(A^*)' is ensured by the reflexivity assumption: indeed, since X is reflexive it follows that D(A^*) is dense in X' (with a continuous injection), and hence X≃X'' is dense in D(A^*)'.

The operator A:D(A)=X_1→X can be extended to an operator A_{-1}:D(A_{-1})=X→X_{-1}, and the C_0 semigroup (S(t))_{t≥0} on X extends to a semigroup (S_{-1}(t))_{t≥0} on X_{-1}, generated by A_{-1}.

Note that the operator A:D(A)→X is continuous when one endows D(A) with its norm, because ‖Ay‖_X≤‖y‖_G≤C‖y‖_1 as already said. By definition of the norm in X_{-1}, we easily have

‖Ay‖_{X_{-1}} = ‖(βid_X-A)^{-1}Ay‖_X = ‖y-β(βid_X-A)^{-1}y‖_X ≤ ‖y‖_X + |β| ‖(βid_X-A)^{-1}y‖_X

for every y∈D(A), and since (βid_X-A)^{-1} is bounded it follows that there exists some constant C_1>0 such that ‖Ay‖_{X_{-1}}≤C_1‖y‖_X for every y∈D(A). Therefore the operator A has a unique extension A_{-1}:D(A_{-1})=X→X_{-1} that is continuous for the respective norms. The fact that D(A_{-1})=X with equivalent norms follows from the equality

‖y‖_X = ‖(βid_X-A)^{-1}(βid_X-A)y‖_X = ‖(βid_X-A)y‖_{X_{-1}} = ‖y‖_{D(A_{-1})}

for every y∈D(A), and by density this holds true for every y∈X. Note that, by density, if A is m-dissipative then A_{-1} is m-dissipative as well, and hence (S_{-1}(t))_{t≥0} is a C_0 semigroup of contractions.

The following result follows from Theorem <ref>, giving an answer to the question raised in Remark <ref>. For every y_0∈X, the Cauchy problem ẏ(t)=A_{-1}y(t), y(0)=y_0, has a unique solution y∈C^0([0,+∞);X)∩C^1((0,+∞);X_{-1}), given by y(t)=S(t)y_0=S_{-1}(t)y_0 for every t≥0. Note that, here, the differential equation ẏ(t)=A_{-1}y(t) is written in X_{-1}. In particular, the derivative is computed in X_{-1}, with the norm ‖·‖_{X_{-1}}.

In other words, with respect to Theorem <ref>, for a given y_0∈X, y(t)=S(t)y_0 (often called mild solution in the existing literature) is still a solution of ẏ(t)=Ay(t) (now written in X_{-1}) provided A is replaced with its extension A_{-1}. Note that this weaker solution is a strong solution for the operator A_{-1} in the Banach space X_{-1}. For these reasons, we shall not insist on naming solutions "strong", "mild" or "weak". What is important is to make precise the Banach spaces in which we are working.

Note that the above concept of weak solution corresponds to solutions sometimes defined by transposition. Indeed, if X is reflexive then X_{-1}≃D(A^*)' (see Theorem <ref>), and hence, considering the differential equation ẏ(t)=A_{-1}y(t) in the space X_{-1} means that

⟨ẏ(t),φ⟩_{D(A^*)',D(A^*)} = ⟨A_{-1}y(t),φ⟩_{D(A^*)',D(A^*)} ∀φ∈D(A^*).

This concept of solution by transposition is often encountered in the existing literature (see, e.g., <cit.> for control issues).

Let Ω⊂ℝ^n be a bounded open set with C^2 boundary.
* The Cauchy problem ∂_t y=Δy in Ω, y_{|∂Ω}=0, y(0)=y_0∈L^2(Ω), has a unique solution

y∈C^0([0,+∞);L^2(Ω))∩C^1((0,+∞);(H^1_0(Ω)∩H^2(Ω))').

Moreover, there exist M≥1 and ω∈ℝ (actually, ω<0) such that ‖y(t)‖_{L^2(Ω)}≤Me^{ωt}‖y_0‖_{L^2(Ω)} for every t≥0.
* Consider the Cauchy problem ∂_{tt}y=Δy in Ω, y_{|∂Ω}=0, y(0)=y_0, ∂_t y(0)=y_1.
* If y_0∈H^-1(Ω) and y_1∈(H^1_0(Ω)∩H^2(Ω))', then there is a unique solution

y∈C^0([0,+∞);H^-1(Ω))∩C^1((0,+∞);(H^1_0(Ω)∩H^2(Ω))').

* If y_0∈L^2(Ω) and y_1∈H^-1(Ω), then there is a unique solution

y∈C^0([0,+∞);L^2(Ω))∩C^1((0,+∞);H^-1(Ω)).

§.§ Scale of Banach spaces

We can generalize the previous framework and adopt a more abstract (and probably simpler, in the end) point of view. The construction of X_1 and of X_{-1} can indeed be iterated, and leads to a sequence of Banach spaces (X_n)_{n∈ℕ} (called a "tower of Sobolev spaces" in <cit.>). For positive integers n, the operator A_n:D(A_n)=D(A^{n+1})→D(A^n) is the restriction of A to D(A^{n+1}).

The construction can even be generalized in order to obtain a continuous scale of Banach spaces (X_α)_{α∈ℝ}, with the property that if α_1>α_2 then the canonical injection X_{α_1}↪X_{α_2} is continuous and dense, and is compact as soon as the resolvent of A is compact. We refer to <cit.> for this general construction (where these spaces are called rigged spaces) and for further properties (see also <cit.>). The Banach space X_α, with α an arbitrary real number, can be defined for instance by symbolic calculus with real powers of the resolvent of A and complex integrals (see <cit.>), or by interpolation of Banach spaces (see <cit.>), or by Fourier transform when it is possible (see <cit.>), or with a Hilbert basis when X is a Hilbert space and A is diagonalizable (see Remark <ref> below).

For instance, the construction with the fractional powers of the resolvent goes as follows, in few words (see <cit.>), provided A generates a C_0 semigroup. Given β∈ρ(A) with Re(β)>ω, given any α>0 we define[This formula extrapolates the Laplace transform formula 1/z^α = (1/Γ(α))∫_0^{+∞} t^{α-1}e^{-tz}dt, valid for any z∈ℂ such that Re(z)>0.]

(βid_X-A)^{-α} = (1/Γ(α)) ∫_0^{+∞} t^{α-1} e^{-βt} S(t) dt

and then we define (βid_X-A)^α = ((βid_X-A)^{-α})^{-1} on the domain Ran((βid_X-A)^{-α}). We also define (βid_X-A)^0=id_X. We denote by

X_α = Ran((βid_X-A)^{-α}) = (βid_X-A)^{-α}(X)

the Banach space endowed with the norm ‖y‖_{X_α} = ‖(βid_X-A)^α y‖_X. The Banach space X_{-α} is defined as the completion of X for the norm ‖y‖_{X_{-α}} = ‖(βid_X-A)^{-α}y‖_X. Accordingly, we set X_0=X, endowed with the norm of X. We have thus defined the scale of Banach spaces (X_α)_{α∈ℝ}. The construction does not depend on the specific choice of β∈ρ(A).

In this general framework, the operator A_α:D(A_α)=X_{α+1}→X_α (with α∈ℝ), which is either the restriction or the extension of A:D(A)→X (with X_0=X) according to the sign of α, generates the C_0 semigroup (S_α(t))_{t≥0}. Hereafter, when it is clear from the context, we skip the index α in S_α(t) or in A_α, when referring to the restriction or extension of S(t) or of A to X_α. Note that, for any α_1,α_2∈ℝ, (βid_X-A)^{α_1-α_2}:X_{α_1}→X_{α_2} is a surjective isometry (unitary operator), where A denotes here (without the index) the appropriate restriction or extension of the operator A.

The spaces X_α are interpolation spaces between the spaces X_n with integer indices. It can be noted that there exists C>0 such that, for every n∈ℕ and every α∈[n,n+1], we have

‖y‖_{X_α} ≤ C ‖y‖_{X_n}^{n+1-α} ‖y‖_{X_{n+1}}^{α-n} ∀y∈X_{n+1}

(see <cit.>). This is an interpolation inequality, as in <cit.>. Replacing the operator A with any real power of βid_X-A, we infer from those inequalities the following more general interpolation inequalities (see <cit.>): given any real numbers α<β<γ, there exists C(γ-α)>0 (only depending on γ-α) such that

‖y‖_{X_β} ≤ C(γ-α) ‖y‖_{X_α}^{(γ-β)/(γ-α)} ‖y‖_{X_γ}^{(β-α)/(γ-α)} ∀y∈X_γ.

This inequality can be checked numerically for a diagonalizable operator, as in the sketch below.
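Here is a minimal numerical sketch of this interpolation inequality, with a positive definite diagonal matrix (arbitrary random entries) standing in for βid_X-A; in this spectral situation the inequality holds with C=1, by the Hölder inequality.

import numpy as np

rng = np.random.default_rng(1)
mu = 1.0 + 99.0 * rng.random(500)     # eigenvalues of (beta id_X - A), all >= 1
c = rng.standard_normal(500)          # coefficients of y in the eigenbasis

def scale_norm(alpha):
    """Norm of y in X_alpha, i.e. ||(beta id_X - A)^alpha y||, computed spectrally."""
    return np.sqrt(np.sum(mu ** (2 * alpha) * c ** 2))

alpha, beta, gamma = -0.5, 0.3, 1.2   # any alpha < beta < gamma
theta = (gamma - beta) / (gamma - alpha)
lhs = scale_norm(beta)
rhs = scale_norm(alpha) ** theta * scale_norm(gamma) ** (1 - theta)
print(lhs <= rhs + 1e-12)             # True: here the inequality holds with C=1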
When X is reflexive, the operator A^*:D(A^*)→X' generates the C_0 semigroup (S(t)^*)_{t≥0} (see Proposition <ref>). By the construction above, where A is replaced with A^*, there exists a scale of Banach spaces denoted by (X^*_α)_{α∈ℝ}, with X^*_0=X'. Similarly as in Theorem <ref>, we have

X_{-α} = (X^*_α)' ∀α∈ℝ

where the dual is taken with respect to the pivot space X. There exist plenty of other constructions of (different) interpolation spaces, such as Favard or abstract Hölder spaces (see <cit.>).

Cauchy problems. The above general construction allows one to generalize in a wide sense the concept of strong or weak solution. As a consequence of Theorem <ref>, we have the following result. The Cauchy problem

ẏ(t)=Ay(t), y(0)=y_0∈X_α,

has a unique solution y∈C^0([0,+∞);X_α)∩C^1((0,+∞);X_{α-1}), given by y(t)=S(t)y_0 for every t≥0. Here, we have skipped the index α, but it is understood that A=A_{α-1} in (<ref>); the differential equation is written in X_{α-1} and the derivative is computed with respect to the norm ‖·‖_{X_{α-1}}.

It is interesting to perform the construction of the scale of Banach spaces in the following important case of diagonalizable operators. Assume that X is a Hilbert space and that 𝒜:D(𝒜)→X is a self-adjoint positive operator with 𝒜^{-1} compact (for instance, the negative of the Dirichlet-Laplacian on a bounded domain). Then there exists a normalized Hilbert basis (e_j)_{j∈ℕ^*} of eigenvectors of 𝒜, associated with eigenvalues (λ_j)_{j∈ℕ^*}. One has

𝒜y = ∑_{j=1}^{+∞} λ_j (e_j,y)_X e_j on D(𝒜) = { y∈X | ∑_{j=1}^{+∞} λ_j^2 (e_j,y)_X^2 < +∞ }

and then 𝒜^α is defined for every α∈ℝ in a spectral way by

𝒜^α y = ∑_{j=1}^{+∞} λ_j^α (e_j,y)_X e_j on D(𝒜^α) = { y∈X | ∑_{j=1}^{+∞} λ_j^{2α} (e_j,y)_X^2 < +∞ }.

Note that we have used a calligraphic 𝒜, to avoid confusion with the operator A=-𝒜 that one can consider in the differential equation ẏ(t)=Ay(t).

Let us consider the negative of the Dirichlet-Laplacian 𝒜=-Δ_D defined on D(𝒜)={ y∈H^1_0(Ω) | Δy∈L^2(Ω) }, with X=L^2(Ω), where Ω is a bounded open subset of ℝ^n with C^2 boundary. We have D(𝒜)=H^2(Ω)∩H^1_0(Ω) (see Example <ref>) and X_{-1}=(H^1_0(Ω)∩H^2(Ω))' (see Example <ref>), where the dual is taken with respect to the pivot space X=L^2(Ω). We can define 𝒜^{1/2}=√(-Δ) in a spectral way as above. Assuming that the boundary of Ω is of class C^∞, the spaces D(𝒜^{j/2}), for j∈ℕ, called Dirichlet spaces, are the Sobolev spaces with (the so-called) Navier boundary conditions, defined by

D(𝒜^{1/2}) = { y∈H^1(Ω) | y_{|∂Ω}=0 } = H^1_0(Ω),
D(𝒜) = { y∈H^2(Ω) | y_{|∂Ω}=0 } = H^1_0(Ω)∩H^2(Ω),
D(𝒜^{3/2}) = { y∈H^3(Ω) | y_{|∂Ω}=(Δy)_{|∂Ω}=0 },
D(𝒜^2) = { y∈H^4(Ω) | y_{|∂Ω}=(Δy)_{|∂Ω}=0 },
D(𝒜^{5/2}) = { y∈H^5(Ω) | y_{|∂Ω}=(Δy)_{|∂Ω}=(Δ^2y)_{|∂Ω}=0 },
D(𝒜^3) = { y∈H^6(Ω) | y_{|∂Ω}=(Δy)_{|∂Ω}=(Δ^2y)_{|∂Ω}=0 },

etc.; in other words,

D(𝒜^{j/2}) = { y∈H^j(Ω) | y_{|∂Ω}=(Δy)_{|∂Ω}=⋯=(Δ^{[(j-1)/2]}y)_{|∂Ω}=0 }

for every j∈ℕ^*, where [ ] is the floor function. Moreover, the operator 𝒜^{j/2}:D(𝒜^{j/2})→L^2(Ω) is an isomorphism (see <cit.> for other properties). It can be noted that ‖𝒜^{j/2}y‖_{L^2(Ω)} equals ‖(-Δ)^{j/2}y‖_{L^2(Ω)} if j is even, and ‖(-Δ)^{(j-1)/2}y‖_{H^1_0(Ω)} if j is odd.

Omitting the indices, we have the scale of Hilbert spaces

⋯ ↪ D(𝒜) ↪ D(𝒜^{1/2}) ↪ L^2(Ω) ↪ D(𝒜^{1/2})' ↪ D(𝒜)' ↪ ⋯

with D(𝒜^{1/2})'=H^-1(Ω) and D(𝒜)'=(H^1_0(Ω)∩H^2(Ω))' (with respect to the pivot space L^2(Ω)). All the mappings 𝒜^{1/2}=√(-Δ) between the corresponding spaces are isometric isomorphisms. As in the previous remark, we can even define X_α=D(𝒜^α) (and their duals) in a spectral way, for any α∈ℝ, thus obtaining the scale (X_α)_{α∈ℝ} of Dirichlet spaces associated with the Dirichlet-Laplacian.
By interpolation theory (see <cit.>), for every α∈[0,1), we have X_α=H^{2α}_0(Ω) if α≠1/4, and X_{1/4}=H^{1/2}_{00}(Ω) (the Lions-Magenes space). Using Proposition <ref>, if X is reflexive then all these results can be stated as well for the adjoint operator A^* and the adjoint C_0 semigroup S(t)^*.

§ NONHOMOGENEOUS CAUCHY PROBLEMS

Let y_0∈X. We consider the Cauchy problem

ẏ(t)=Ay(t)+f(t), y(0)=y_0,

where A:D(A)→X generates a C_0 semigroup (S(t))_{t≥0} on X. If y_0∈D(A) and f∈L^p_loc([0,+∞),D(A)) with 1≤p≤+∞, then (<ref>) has a unique solution y∈C^0([0,+∞);D(A))∩W^{1,p}_loc([0,+∞),X) (often referred to as a strong solution of (<ref>)), given by

y(t)=S(t)y_0+∫_0^t S(t-s)f(s)ds.

Moreover, the differential equation (<ref>) makes sense in X.

The function y defined by (<ref>) is clearly a solution of (<ref>). To prove uniqueness, let y_1 and y_2 be two solutions. Then z=y_1-y_2 is a solution of ż(t)=Az(t), z(0)=0. Since (d/ds)S(t-s)z(s) = -S(t-s)Az(s)+S(t-s)Az(s) = 0 for every s∈[0,t], it follows that 0=S(t)z(0)=S(0)z(t)=z(t).

Note that, if f∈L^p_loc([0,+∞),X), then (<ref>) still makes sense. Note also that, using the extension of A (and of S(t)) to X_{-1}, Proposition <ref> implies that, if y_0∈X and f∈L^p_loc([0,+∞),X), then (<ref>) has a unique solution y∈C^0([0,+∞),X)∩W^{1,p}_loc([0,+∞),X_{-1}), given as well by the Duhamel formula (<ref>) (and often referred to as a mild solution of (<ref>)), and the differential equation (<ref>) is written in X_{-1} (see, e.g., <cit.>). Moreover, for every T>0 there exists K_T>0 (not depending on y_0 and f) such that

‖y(t)‖_X ≤ K_T ( ‖y_0‖_X + ‖f‖_{L^p([0,T],X)} ) for every t∈[0,T].

More generally, using the general scale of Banach spaces (X_α)_{α∈ℝ} mentioned previously, we have the following result (see <cit.> or <cit.>). If f∈L^p_loc([0,+∞),X_α) for some α∈ℝ and 1≤p≤+∞, then for every y_0∈X_α the Cauchy problem (<ref>) has a unique solution

y∈C^0([0,+∞),X_α)∩W^{1,p}_loc([0,+∞),X_{α-1})

given as well by (<ref>) (called a strong solution in X_α in <cit.>). Here, we have A=A_{α-1} in the equation (<ref>), which is written in X_{α-1} almost everywhere, and the integral in (<ref>) is done in X_α (with S(·)=S_α(·) in the integral). Proposition <ref> corresponds to α=1.

The regularity stated in Proposition <ref> is sharp in general. Given y_0∈X_{α+1}, the condition f∈C^0([0,+∞);X_α) does not ensure that y∈C^0([0,+∞);X_{α+1})∩C^1((0,+∞);X_α) (unless the semigroup is analytic, see <cit.>). Indeed, for y_0=0 and for a given y_1∈X_α, the solution of the Cauchy problem ẏ(t)=Ay(t)+S(t)y_1, y(0)=0, is y(t)=∫_0^t S(t-s)S(s)y_1 ds = tS(t)y_1. Hence, if y_1∈X_α∖X_{α+1} then S(t)y_1 may not belong to X_{α+1}. It can however be noted that if f is more regular in time then the solution gains some regularity with respect to the space variable. More precisely, we have the following (sometimes useful) result (see <cit.>). If y_0∈X_{α+1} and f∈W^{1,p}_loc([0,+∞),X_α) then (<ref>) has a unique solution

y∈C^0([0,+∞);X_{α+1})∩C^1((0,+∞),X_α)∩W^{2,p}_loc([0,+∞),X_{α-1})

given by (<ref>). The assumption on f can even be weakened if X_{α-1} is reflexive, and then it suffices to assume that f is Lipschitz continuous with values in X_{α-1}. The Duhamel formula (<ref>) can be checked numerically in the simple case where A is a matrix, as in the sketch below.
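In the matrix case, S(t)=e^{tA} and the Duhamel formula can be compared against a direct numerical integration; the following minimal Python sketch (the matrix A, the source f, and the data are arbitrary choices) does so with a rough Riemann sum for the integral.

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
f = lambda t: np.array([np.sin(t), 1.0])
y0 = np.array([1.0, 0.0])
T = 3.0

# Duhamel: y(T) = S(T) y0 + int_0^T S(T-s) f(s) ds  (integral by a Riemann sum).
s = np.linspace(0.0, T, 4001)
ds = s[1] - s[0]
integral = sum(expm((T - si) * A) @ f(si) for si in s[:-1]) * ds
y_duhamel = expm(T * A) @ y0 + integral

# Direct integration of y' = Ay + f(t):
y_ode = solve_ivp(lambda t, y: A @ y + f(t), (0.0, T), y0, rtol=1e-10).y[:, -1]
print(y_duhamel, y_ode)   # the two values agree up to quadrature error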
Let Ω be a bounded open subset of ℝ^n with C^2 boundary. Let us consider the Cauchy problem

∂_t y=Δy+f in Ω, y_{|∂Ω}=0, y(0)=y_0∈L^2(Ω),

with f∈L^2((0,+∞)×Ω). The general theory implies that there exists a unique solution

y∈C^0([0,+∞),L^2(Ω))∩H^1([0,+∞),(H^2(Ω)∩H^1_0(Ω))').

Actually, by using a spectral expansion as in Remark <ref>, it is easy to prove that

y∈L^2([0,+∞),H^1_0(Ω))∩H^1([0,+∞),H^-1(Ω)),

which is more precise because this set is contained in C^0([0,+∞),L^2(Ω)). Moreover, if y_0∈H^1_0(Ω), then we have the improved regularity

y∈L^2([0,+∞),H^2(Ω)∩H^1_0(Ω))∩H^1([0,+∞),L^2(Ω))⊂C^0([0,+∞),H^1_0(Ω))

(see also <cit.>, where these regularity properties are established by using Galerkin approximations for more general elliptic operators).

CHAPTER: LINEAR CONTROL SYSTEMS IN BANACH SPACES

Throughout the chapter, we consider the linear autonomous control system

ẏ(t)=Ay(t)+Bu(t), y(0)=y_0,

where the state y(t) belongs to a Banach space X, y_0∈X, the control u(t) belongs to a Banach space U, A:D(A)→X is the generator of a C_0 semigroup (S(t))_{t≥0}∈𝒢(M,ω) on X, and B∈L(U,X_{-1}). The space X_{-1} has been defined in the previous chapter. The control operator B is said to be bounded if B∈L(U,X), and is called unbounded otherwise (this is the standard wording, although it is a bit ambiguous since B is bounded as an operator from U to X_{-1}). Unbounded operators appear naturally when dealing with boundary or pointwise control systems. Other choices could be made for the control operator, and we can more generally assume that B∈L(U,X_{-α}) for some α≥0. We will comment on that further, with the concept of degree of unboundedness.

A priori, if u∈L^1_loc(0,+∞;U) then Bu∈L^1_loc(0,+∞;X_{-1}), and since y_0∈X, it follows from the results of Section <ref> that (<ref>) has a unique solution y∈C^0([0,+∞);X_{-1})∩W^{1,1}_loc(0,+∞;X_{-2}), given by

y(t;y_0,u)=S(t)y_0+L_t u where L_t u=∫_0^t S(t-s)Bu(s)ds.

Moreover, the differential equation in (<ref>) is written in X_{-2}. The integral (<ref>) is done in X_{-1}. Note that, of course, if B∈L(U,X) is bounded, then the regularity moves up a rung: (<ref>) has a unique solution y∈C^0([0,+∞);X)∩W^{1,1}_loc(0,+∞;X_{-1}), given as well by (<ref>), and the differential equation is written in X_{-1}.

For a general (unbounded) control operator B∈L(U,X_{-1}), it is desirable to have conditions under which all solutions of (<ref>) take their values in X, that is, under which the situation is as when the control operator is bounded. Such control operators will be said to be admissible. The admissibility property says that the control system is well posed in X (note that it is always well posed in X_{-1}). Of course, the notion of admissibility depends on the time-regularity of the inputs u. Since it will be characterized by duality, it is necessary, here, to fix once and for all the class of controls. In what follows, and in view of the Hilbert Uniqueness Method, we will actually deal with controls u∈L^2([0,T],U) (for some arbitrary T>0). Of course, we have L^2([0,T],U)⊂L^1([0,T],U). Also, the duality will be easier to tackle in L^2 (although easy modifications can be done in what follows to deal with L^p, at least for 1<p≤+∞; see <cit.> for exhaustive results). Hence, from now on, the space of controls is L^2([0,T],U).

In this chapter, after having defined admissible operators, we will introduce different concepts of controllability and show that they are equivalent, by duality, to some observability properties. Finally, we will explain the Hilbert Uniqueness Method (in short, HUM) introduced by J.-L.
Lions in <cit.> in order to characterize the spaces where exact controllability holds true. Most of this chapter is borrowed from <cit.> (see also <cit.>).

§ ADMISSIBLE CONTROL OPERATORS

As said previously, we have a priori the inclusion Ran(L_T)⊂X_{-1}, for every T>0, and the fact that L_T∈L(L^2([0,T],U),X_{-1}).

§.§ Definition

A control operator B∈L(U,X_{-1}) is said to be admissible for the C_0 semigroup (S(t))_{t≥0} if there exists T>0 such that Ran(L_T)⊂X. The following properties are equivalent:
* There exists T>0 such that Ran(L_T)⊂X.
* For every T>0, one has Ran(L_T)⊂X.
* For every T>0, one has L_T∈L(L^2([0,T],U),X).
* All solutions (<ref>) of (<ref>), with y_0∈X and u∈L^2([0,T],U), take their values in X.

Assume that Ran(L_T)⊂X. Let us prove that Ran(L_t)⊂X for every t>0. Let t∈(0,T) be arbitrary. For every control u∈L^2([0,t],U), we define the control ũ∈L^2([0,T],U) by ũ(s)=0 for s∈[0,T-t] and ũ(s)=u(s-T+t) for s∈[T-t,T]. Then, we have L_T ũ = ∫_{T-t}^T S(T-s)Bu(s-T+t)ds = ∫_0^t S(t-τ)Bu(τ)dτ = L_t u (with τ=s-T+t). It follows that if Ran(L_T)⊂X then Ran(L_t)⊂X, for every t∈(0,T).

Before proving the statement for t>T, let us note that, for every u∈L^2(0,2T;U), we have L_{2T}u = ∫_0^{2T} S(2T-t)Bu(t)dt = ∫_0^T S(2T-t)Bu(t)dt + ∫_T^{2T} S(2T-t)Bu(t)dt = S(T)∫_0^T S(T-t)Bu(t)dt + ∫_0^T S(T-s)Bu(s+T)ds = S(T)L_T u_1 + L_T u_2, with the controls u_1 and u_2 defined by u_1(t)=u(t) and u_2(t)=u(t+T) for almost every t∈[0,T]. It follows that if Ran(L_T)⊂X then Ran(L_{2T})⊂X, and, by immediate iteration, this implies as well that Ran(L_{kT})⊂X for every k∈ℕ^*. Now, let t>T be arbitrary, and let k∈ℕ^* be such that kT>t. Since Ran(L_{kT})⊂X, it follows from the first part of the proof that Ran(L_t)⊂X.

It remains to prove that, if Ran(L_T)⊂X, then L_T∈L(L^2([0,T],U),X). Note first that the operator L_T is closed. Indeed, we have

L_T u = (βid_X-A)∫_0^T S(T-t)(βid_X-A)^{-1}Bu(t)dt,

for every u∈L^1([0,T],U), with β∈ρ(A) arbitrary. By definition of X_{-1}, the operator (βid_X-A)^{-1}B is linear and continuous from U to X. Since A is closed (according to Proposition <ref>), it follows that L_T is closed. A priori, the graph of L_T is contained in L^2([0,T],U)×X_{-1}. Under the assumption that Ran(L_T)⊂X, this graph is contained in L^2([0,T],U)×X. Moreover, this graph is closed because the operator L_T is closed. The fact that L_T∈L(L^2([0,T],U),X) then follows from the closed graph theorem.

Note that, obviously, every bounded control operator B∈L(U,X) is admissible. The question is however nontrivial for an unbounded control operator. Classical examples of bounded control operators are obtained when one considers a controlled PDE with an internal control, that is, a control system of the form ẏ(t)=Ay(t)+χ_ω u, with A:D(A)→X=L^2(Ω), where Ω is a domain of ℝ^n and ω is a measurable subset of Ω. Unbounded control operators appear for instance when one considers a control acting along the boundary of Ω (see further for examples).

Note that, if B is admissible, then the solution y of (<ref>) takes its values in X, and the equation ẏ(t)=Ay(t)+Bu(t) is written in the space X_{-1}, almost everywhere on [0,T]. The solution y has the regularity y∈C^0([0,T];X)∩H^1([0,T],X_{-1}) whenever u∈L^2([0,T],U). Note also that, in the term L_T u, the integration is done in X_{-1}, but the result is in X whenever B is admissible. As said in the introduction, we have assumed that the class of controls is L^2([0,T],U).
We can define as well the concept of admissibility within the class of controls L^p([0,T],U), for some p≥ 1, but we obtain then a different concept, called p-admissibility (for instance if X is reflexive and p=1 then every admissible operator is necessarily bounded, see <cit.>). Here, we restrict ourselves to p=2 (in particular in view of HUM), which is the most usually encountered case. §.§ Dual characterization of the admissibilityLet us compute the adjoint of the operator L_T∈ L(L^2([0,T],U),X_-1), and then derive a dual characterization of the admissibility property. Assume that X and U are reflexive. The adjoint L_T^* satisfies L_T^*∈ L(D(A^*),L^2([0,T],U')), and is given by (L_T^*z)(t) = B^* S(T-t)^* z for every z∈ D(A^*) and for almost every t∈[0,T].Since X is reflexive, we have X_-1=D(A^*)' (see Theorem <ref>). Since L_T is a linear continuous operator from L^2([0,T],U) to D(A^*)', the adjoint L_T^* is a linear continuous operator from D(A^*)” to L^2([0,T],U)'.On the one part, note that L^2([0,T],U)'=L^2([0,T],U') because U is reflexive. On the other part, let us prove that D(A^*) is reflexive (and hence, that D(A^*)”=D(A^*)). According to the Kakutani theorem (see <cit.>), it suffices to prove that the closed unit ball of D(A^*) is compact for the weak topology σ(D(A^*),D(A^*)'). We have, for some β∈ρ(A),B_D(A^*) = { z∈ D(A^*) | ‖ z‖_D(A^*) = ‖ (βid_X'-A^*) z‖_X'≤ 1 }= { (βid_X'-A^*)^-1 f | f∈ X', ‖ f‖_X'≤ 1 }= (βid_X'-A^*)^-1 B_X'and since X' is reflexive, the closed unit ball B_X' is compact for the weak topology σ(X',X”). Hence D(A^*) is reflexive.Therefore, L_T^*∈ L(D(A^*),L^2([0,T],U')).Let u∈ L^2([0,T],U) and z∈ D(A^*). We have, by definition, and using the duality brackets with respect to the pivot space X,⟨ L_Tu,z⟩_D(A^*)',D(A^*) = ⟨ L_T^*z,u⟩_L^2([0,T],U'),L^2([0,T],U).Note that, here, we have implicitly used the fact that U is reflexive. Now, noticing that B∈ L(U,D(A^*)') and hence that B^*∈ L(D(A^*),U'), we have⟨ L_Tu,z⟩_D(A^*)',D(A^*) = ⟨∫_0^T S(T-t)Bu(t)dt , z⟩_D(A^*)',D(A^*)= ∫_0^T ⟨ S(T-t)Bu(t) , z⟩_D(A^*)',D(A^*)dt = ∫_0^T ⟨ B^*S(T-t)^*z,u(t)⟩_U',Udt = ⟨ t↦ B^*S(T-t)^*z, t↦ u(t)⟩_L^2([0,T],U)',L^2([0,T],U)and the conclusion follows. The following proposition, providing a dual characterization of admissibility, is an immediate consequence of Lemmas <ref> and <ref>. Indeed in the admissible case we have L_T∈ L(L^2([0,T],U),X) and equivalently L_T^*∈ L(X',L^2([0,T],U)').[Note that, in the case where the operator B∈ L(U,X) is bounded, we always have L_T∈ L(L^2([0,T],U),X) and hence L_T^*∈ L(X',L^2([0,T],U)'). Moreover, if U is reflexive then L(X',L^2([0,T],U)')=L(X',L^2([0,T],U')).] Assume that X and U are reflexive.The control operator B∈ L(U,X_-1) (with X_-1≃ D(A^*)') is admissible if and only if, for some T>0 (and equivalently, for every T>0) there exists K_T>0 such that∫_0^T ‖ B^*S(T-t)^*z‖_U'^2dt ≤ K_T ‖ z‖_X'^2∀ z∈ D(A^*).The inequality (<ref>) is called an admissibility inequality. Establishing such an inequality is a way to prove that a control operator is admissible. Showing such energy-like inequalities is a classical issue in PDEs (Strichartz inequalities for instance).Once again, we stress that the admissibility property means that the control system ẏ(t)=Ay(t)+Bu(t) is well posed in X, which means here that, for a control u∈ L^2([0,T],U) and an initial condition y(0)∈ X, the corresponding solution y(t) stays in X indeed (and does not go in a wider space like X_-1). The concept of well-posedness of a PDE is in general a difficult issue. 
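As a numerical counterpart of Proposition <ref>, the constant K_T of the admissibility inequality (<ref>) can be explored in a simple case. The following hedged sketch samples the admissibility quotient for the 1D Dirichlet heat equation with internal observation on ω=(a,b); the truncation order N and all other parameters are illustrative, and random sampling only produces an empirical lower bound on K_T.

```python
import numpy as np

# Hedged sketch: sampling the quotient \int_0^T ||B* S(T-t)* z||^2 dt / ||z||^2
# for the 1D Dirichlet heat equation on (0, pi), observation on omega = (a, b),
# truncated to N modes.  Random sampling explores only part of the unit sphere,
# so the printed value is merely an empirical lower bound on K_T.
rng = np.random.default_rng(0)
N, nt, T = 40, 400, 1.0
a, b = 0.2 * np.pi, 0.5 * np.pi
lam = -np.arange(1, N + 1) ** 2
t, dt = np.linspace(0.0, T, nt, retstep=True)

# Gram matrix M_jk = \int_omega phi_j phi_k dx, so that ||B* z||^2 = w^T M w
# when w is the coefficient vector of z on the eigenfunctions.
x = np.linspace(a, b, 2000)
P = np.sqrt(2.0 / np.pi) * np.sin(np.outer(np.arange(1, N + 1), x))
M = (P * (x[1] - x[0])) @ P.T

quotients = []
for _ in range(200):
    w = rng.standard_normal(N)
    W = np.exp(lam[:, None] * (T - t)[None, :]) * w[:, None]   # S(T-t)* z, mode-wise
    integrand = np.einsum('it,ij,jt->t', W, M, W)              # ||B* S(T-t)* z||^2
    quotients.append(dt * integrand.sum() / (w @ w))
print("empirical lower bound on K_T:", max(quotients))
```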
In finite dimension, this kind of difficulty does not exist, but in the infinite-dimensional setting, showing the admissibility of B may already be a challenge (at least, for an unbounded control operator). Examples are provided further.

The inequality (<ref>) says that the operator B^*∈ L(D(A^*),U') is an admissible observation operator for the semigroup S^*(t) (see <cit.>). Note that the inequality (<ref>) is stated for every z∈ D(A^*). Of course, the norm ‖ z‖_X'^2 makes sense for z belonging to the larger space X', and it is natural to ask whether the inequality (<ref>) can be written for every z∈ X'. The question has been studied in <cit.>. If z∈ X' then S(T-t)^*z∈ X', and then we cannot apply B^* to this element. Actually, we can replace B^* with its Λ-extension, defined by B^*_Λ z=lim_λ→ +∞ B^*λ(λ id_X-A^*)^-1 z, also called the strong Yosida extension in <cit.> (note that λ(λ id_X-A^*)^-1 z∈ D(A^*) for every z∈ X'), and defined on the domain D(B^*_Λ), which is the set of z for which the above limit exists. Then Proposition <ref> still holds true with B^* replaced with B^*_Λ, with the inequality (<ref>) written for every z∈ X'. Actually, in the context of Lemma <ref>, L_T^* is given by (L_T^*z)(t)=B_Λ^*S(T-t)^*z, for every z∈ X' and for almost every t∈[0,T].

§.§ Degree of unboundedness of the control operator

Recall that a control operator B is said to be bounded if B∈ L(U,X); when Ran(B)⊄ X, it is said to be unbounded (although B is bounded as an operator from U to X_-1, since we have assumed that B∈ L(U,X_-1)). More generally now, using the scale of Banach spaces (X_α)_α∈ℝ constructed in Section <ref>, we can define “unbounded control” operators such that B∈ L(U,X_-α), for some α>0. This leads to the notion of degree of unboundedness of a control operator (see <cit.>, see also <cit.>). The degree of unboundedness α(B)≥ 0 of the control operator B (with respect to the spaces X and U) is the infimum of the (not necessarily closed!) set of α≥ 0 such that B∈ L(U,X_-α), i.e., such that (β id_X-A)^-α B∈ L(U,X) for some arbitrary β∈ρ(A) satisfying Re(β)>ω, where (β id_X-A)^-α is defined by (<ref>). In other words, B∈ L(U,X_-α(B)-ε) for every ε>0 (but not necessarily for ε=0). Equivalently,[This second definition of α(B) is given in <cit.>. The equivalence with the first one follows from <cit.>.] α(B) is equal to the infimum of the set of α≥ 0 for which there exists C_α>0 such that ‖ (λ id_X-A)^-1 B‖≤ C_α/λ^{1-α} for every λ>ω. When X is reflexive, α(B) is the infimum of the set of α≥ 0 such that B^*∈ L(X_α^*, U') (where X_α^* = D((β id_X-A^*)^α) thanks to (<ref>)), or equivalently, of the set of α≥ 0 for which there exists C_α>0 such that ‖ B^* z‖_U'≤ C_α‖ (β id_X-A^*)^α z‖_X' for every z∈ X_α^*. Note that (<ref>) may fail for α=α(B). Throughout the chapter, as in most of the existing literature, we consider control operators such that α(B)≤ 1. This covers the most usual applications. Internal controls are bounded control operators (thus, α(B)=0). For boundary controls, in general we have 0<α(B)≤ 1 (see further, and see <cit.> for many examples).

* Assume that X and U are reflexive Banach spaces. If B is admissible then B∈ L(U,X_-1/2) (and thus α(B)≤ 1/2).
* Assume that X is a Hilbert space and that A is self-adjoint. If B∈ L(U,X_-1/2) then B is admissible.

Let us prove the first point (adapted from <cit.>). First of all, by an obvious change of variable, we have L_t+s=S(s)L_t+L_s, for all s,t≥ 0. It follows that L_n=(S(n-1)+⋯+S(1)+id_X)L_1, for every n∈ℕ^*. Since (S(t))_t≥ 0∈𝒢(M,ω), we have ‖ S(k)‖≤ Me^kω for every integer k, and therefore we infer that ‖ L_n‖≤ K_n‖ L_1‖, with K_n=M(e^ω n-1)/(e^ω-1) if ω>0, K_n=Mn if ω=0, and K_n=M/(1-e^ω) if ω<0. Here, ‖·‖ stands for the norm of bounded operators from the corresponding space of controls to X (recall that B is assumed to be admissible, so that each L_t is bounded with values in X). Besides, for 0<t_1<t_2 arbitrary, by taking controls that are equal to 0 on (0,t_2-t_1), we easily prove that ‖ L_t_1‖≤‖ L_t_2‖. Now, for an arbitrary T>0, let n∈ℕ be such that n≤ T<n+1. Writing L_T=S(T-n)L_n+L_{T-n}, we get that ‖ L_T‖≤ Me^ω(T-n)‖ L_n‖+‖ L_1‖. It finally follows that ‖ L_T‖≤ Ke^ω T if ω>0, ‖ L_T‖≤ KT if ω=0, and ‖ L_T‖≤ K if ω<0, for some constant K>0 that does not depend on T. By duality, we have the same estimates on the norm of L_T^*. For every z∈ D(A^*), we define (Ψ z)(t)=B^*S(t)^*z. It follows from the above estimates (by letting T tend to +∞) that, for every α>ω, the function t↦ e^-α t(Ψ z)(t) belongs to L^2(0,+∞;U'), with ‖ t↦ e^-α t(Ψ z)(t)‖_L^2(0,+∞;U')≤ C‖ z‖_X' for some constant C>0 depending on α. Let us consider the Laplace transform of Ψ z. On the one hand, we have, by an easy computation as in (<ref>), ℒ(Ψ z)(s) = ∫_0^+∞ e^-st (Ψ z)(t)dt = B^*(s id_X-A^*)^-1 z for every s∈ρ(A) such that Re(s)>ω. On the other hand, writing ℒ(Ψ z)(s) = ∫_0^+∞ e^(α-s)t e^-α t(Ψ z)(t)dt and applying the Cauchy-Schwarz inequality, we get ‖ℒ(Ψ z)(s)‖_U'≤ 1/√(2(Re(s)-α)) ‖ t↦ e^-α t(Ψ z)(t)‖_L^2(0,+∞;U'). Combining the two expressions, we obtain ‖ B^*(s id_X-A^*)^-1 z‖_U'≤ C‖ z‖_X'/√(2(Re(s)-α)), which is, by duality, the estimate (<ref>) with exponent 1/2, from which B∈ L(U,X_-1/2) follows. The first point is proved.

Let us prove the second point (adapted from <cit.>). By definition, there exists C>0 such that ‖ B^*z‖_U'^2≤ C ‖ (β id_X-A^*)^1/2 z‖_X'^2 = C (z,(β id_X-A)z)_X for every z∈ D(A) (we have used that A=A^*), where (·,·)_X is the scalar product in X. Applying this inequality to z(t)=S^*(T-t)ψ, multiplying by e^2β t, and integrating over [0,T], we get ∫_0^T e^2β t‖ B^*S^*(T-t)ψ‖_U'^2 dt ≤ C ∫_0^T e^2β t (z(t),(β id_X-A)z(t))_X dt. Since ż(t)=-Az(t) and z(T)=ψ, we have 1/2 d/dt( e^2β t‖ z(t)‖_X^2 ) = e^2β t( β‖ z(t)‖_X^2 - (z(t),Az(t))_X ) = e^2β t ( z(t), (β id_X-A)z(t) )_X, and therefore we get ∫_0^T e^2β t‖ B^*S^*(T-t)ψ‖_U'^2 dt ≤ C/2 ∫_0^T d/dt( e^2β t‖ z(t)‖_X^2 ) dt ≤ C/2 e^2β T‖ψ‖_X^2. The lemma follows.

§.§ Examples

§.§.§ Dirichlet heat equation with internal control

Let Ω⊂ℝ^n be a bounded open set with C^2 boundary, and let ω⊂Ω be an open subset. Consider the internally controlled Dirichlet heat equation ∂_t y= Δy+χ_ω u in Ω, y_|∂Ω=0, y(0)=y_0∈ L^2(Ω). We set X=L^2(Ω), and we consider the operator A=Δ_D:D(A)→ X, where D(A)=X_1=H^1_0(Ω)∩ H^2(Ω). The operator A is self-adjoint, and X_-1=D(A^*)'=(H^1_0(Ω)∩ H^2(Ω))' with respect to the pivot space L^2(Ω). The control operator B is defined as follows: for every u∈ U=L^2(ω), Bu∈ L^2(Ω) is the extension of u by 0 to the whole Ω. It is bounded and therefore admissible (by Lemma <ref>), which means (by Proposition <ref>) that, for every T>0, there exists K_T>0 such that ∫_0^T ∫_ω ψ(t,x)^2 dxdt≤ K_T‖ψ(0)‖^2_L^2(Ω) for every solution of ∂_tψ=Δψ in Ω, ψ_|∂Ω=0, with ψ(0)∈ H^2(Ω)∩ H^1_0(Ω). The above inequality can also be established by using that ψ(t)=S(t)ψ(0) with S(t)∈ L(X).

§.§.§ Heat equation with Dirichlet boundary control

Let Ω⊂ℝ^n be a bounded open set with Lipschitz boundary. Consider the heat equation with Dirichlet boundary control ∂_t y= Δy in Ω, y_|∂Ω=u, y(0)=y_0∈ L^2(Ω). We set U=L^2(∂Ω), X=H^-1(Ω), and we consider the self-adjoint operator A=Δ_D:D(A)→ X where D(A)=X_1=H^1_0(Ω). We have X_-1=D(A^*)'=D(A)' with respect to the pivot space H^-1(Ω).
Note that if ∂Ω is regular enough then X_-1 is the dual of A^-1(H^1_0(Ω)) = {y∈ H^3(Ω) | y_|∂Ω=( y)_|∂Ω=0} with respect to the pivot space L^2(Ω) (see Example <ref>).Let us express the control operator B∈ L(U,D(A^*)').We preliminarily recall (see Example <ref>, with 𝒜=-_D=-A) that, for every f∈ H^-1(Ω), ‖ f‖_H^-1(Ω) = ‖ (-_D)^-1/2 f‖_L^2(Ω) and (f,g)_H^-1(Ω) = ( (-_D)^-1/2f , (-_D)^-1/2 g)_L^2(Ω) = -(A^-1f, g)_L^2(Ω)for all f∈ H^-1(Ω) and g∈ L^2(Ω). Taking a solution y regular enough, associated with a control u (for instance, of class C^1), since the differential equation ẏ=Ay+Bu is written in X_-1, by definition we have⟨ẏ,ϕ⟩_X_-1,X_1 = ⟨ Ay,ϕ⟩_X_-1,X_1+ ⟨ Bu,ϕ⟩_X_-1,X_1∀ϕ∈ X_1 .The duality bracket is considered with respect to the pivot space X=H^-1(Ω), hence ⟨ẏ,ϕ⟩_X_-1,X_1 = -(ẏ,A^-1ϕ)_L^2(Ω) and ⟨ Ay,ϕ⟩_X_-1,X_1 = (Ay,ϕ)_H^-1(Ω) = -(y,ϕ)_L^2(Ω).Besides, using (<ref>) and integrating by parts (Green formula), we have(∂_ty,ψ)_L^2(Ω) = (y,ψ)_L^2(Ω) - ( u , ∂ψ/∂ν)_L^2(∂Ω)∀ψ∈ H^1_0(Ω)∩ H^2(Ω) .Taking ψ=-A^-1ϕ, we have ψ=-ϕ on Ω and, by identification,⟨ Bu,ϕ⟩_X_-1,X_1 = ( u, ∂/∂ν(A^-1ϕ) )_L^2(∂Ω)∀ u∈ L^2(∂Ω)∀ϕ∈ H^1_0(Ω) ,which defines B by transposition: since ⟨ Bu,ϕ⟩_X_-1,X_1 = (u, B^*ϕ)_U, we infer that B^*ϕ = ∂/∂ν_|∂Ω(A^-1ϕ) for every ϕ∈ H^1_0(Ω)=D(A^*).When ∂Ω is C^2, we can express the operator B by using the Dirichlet map D (see <cit.> and <cit.>), which is the linear operator defined as follows:given any v∈ L^2(∂Ω), Dv is the unique solution in the sense of distributions of the Laplace equation (Dv)=0 in Ω such that (Dv)_|∂Ω=v.Note that Dv∈ C^∞(Ω) by hypoellipticity. Integrating by parts (Green formula), we have ( Dv , φ )_L^2(Ω) = ( v , ∂φ/∂ν)_L^2(∂Ω) for every φ∈ H^1_0(Ω)∩ H^2(Ω), and thus, setting ϕ=_Dφ = Aφ, we have ( Dv , ϕ )_L^2(Ω) = ( v , ∂/∂ν(A^-1ϕ) )_L^2(∂Ω) for every ϕ∈ L^2(Ω).This implies that D:L^2(∂Ω)→ L^2(Ω) (but D is not surjective).Then ⟨ Bu,ϕ⟩_X_-1,X_1 = ( Du , ϕ )_L^2(Ω) = - ( Du , Aϕ )_H^-1(Ω) = - ⟨ ADu,ϕ⟩_X_-1,X_1 and therefore B=-ADwhere we consider the extension A:L^2(Ω)→ (H^2(Ω)∩ H^1_0(Ω))' (dual with respect to L^2(Ω)). Note that X_1/2 = L^2(Ω) and that X_-1/2 = (H^2(Ω)∩ H^1_0(Ω))'. In particular, we have B∈ L(U,X_-1/2) (and B^*∈ L(X_1/2,U)), and thus α(B)≤ 1/2.Actually, by using the finer fact that D:L^2(∂Ω)→ H^1/2(Ω) (see <cit.>, see also <cit.>, it can be proved that α(B)=1/4.It follows from Lemma <ref> that B is admissible; equivalently, by Proposition <ref>, for every T>0, there exists K_T>0 such that∫_0^T‖∂ψ/∂ν_|∂Ω (t)‖_L^2(∂Ω)^2dt ≤ K_T‖ψ(0)‖^2_H^1_0(Ω)for every solution of ∂_tψ=ψin Ω, ψ_|∂Ω=0,with ψ(0)∈H^1_0(Ω). This result says that the heat equation with boundary control (<ref>) is well posed in the state space X=H^-1(Ω), with U=L^2(∂Ω). Note that (<ref>) is not well posed in the state space L^2(Ω), meaning that, for y^0∈ L^2(Ω) and u∈ L^2([0,T],∂Ω), the solution y of (<ref>) may fail to belong to C^0([0,T];L^2(Ω)) (even in dimension one). Actually, for every T>0,sup{∫_0^T‖∂ψ/∂ν_|∂Ω (t)‖_L^2(∂Ω)^2dt| ψ(0)∈ L^2(Ω),‖ψ(0)‖_L^2(Ω)=1 } = +∞(see <cit.>). If we take X=L^2(Ω) then B^*ϕ = -∂ϕ/∂ν_|∂Ω for every ϕ∈ H^2(Ω)∩ H^1_0(Ω) and B=-AD∈ L(U,D(A^3/4+ε)') for every ε>0 (see <cit.>), but the continuity property fails for ε=0 (even in dimension one). We have then α(B)=3/4 and B is not admissible by Lemma <ref>.§.§.§ Heat equation with Neumann boundary controlWe replace in (<ref>) the Dirichlet control with the Neumann control ∂ y/∂ν_|∂Ω=u. 
In this case, we set X=L^2(Ω), U=L^2(∂Ω), we consider the operator A=_N defined on D(A)={ y ∈ H^2(Ω) | ∂ y/∂ν_|∂Ω=0}, and we obtain B^*ϕ = ϕ_|∂Ω and B=-AN, where N is the Neumann map. We do not provide any details. Actually, we have B∈ L(U,(D(A^1/4+ε))') for every ε>0 (see <cit.>), thus α(B)=1/4, and hence B is admissible by Lemma <ref>. §.§.§ Second-order equationsAnother typical example is provided by second-order equations. The framework is the following (see <cit.>). Let H be a Hilbert space, and A_0:D(A_0)→ H be self-adjoint and positive. Recall that D(A_0^1/2) is the completion of D(A_0) with respect to the norm ‖ y‖_D(A_0^1/2)=√(⟨ A_0y,y⟩_H), and that D(A_0)⊂ D(A_0^1/2)⊂ H, with continuous and dense embeddings (see also Remark <ref>). We set X=D(A_0^1/2)× H, and we define the skew-adjoint operator A:D(A)→ X on D(A)=D(A_0)× D(A_0^1/2) byA=[0I; -A_00 ]. Let U be a Hilbert space and let B_0∈ L(U,D(A_0^1/2)'), where D(A_0^1/2)' is the dual of D(A_0^1/2) with respect to the pivot space H. The second-order control system∂_tty+A_0y=B_0ucan be written in the form∂/∂ t[y; ∂_ty ] = A [y; ∂_ty ] +Bu with B=[ 0; B_0 ].We have X_-1=D(A^*)'=H× D(A_0^1/2)' with respect to the pivot space X, where D(A_0^1/2)' is the dual of D(A_0^1/2) with respect to the pivot space H. Moreover we have B∈ L(U,H× D(A_0^1/2)') andB^*=[ 0; B_0^* ]∈ L(D(A_0)× D(A_0^1/2),U) .The following statements are equivalent: * B is admissible.* There exists K_T>0 such that every solution of∂_ttψ+A_0ψ=0,ψ(0)∈ D(A_0),∂_tψ(0)∈ D(A_0^1/2)satisfies ∫_0^T‖ B_0^*∂_tψ(t)‖_U'^2 dt≤ K_T(‖ψ(0)‖_D(A_0^1/2)^2+‖∂_tψ(0)‖_H^2).* There exists K_T>0 (the same constant) such that every solution of∂_ttψ+A_0ψ=0,ψ(0)∈ H,∂_tψ(0)∈ D(A_0^1/2)'satisfies ∫_0^T‖ B_0^*ψ(t)‖_U'^2 dt ≤K_T(‖ψ(0)‖_H^2+‖∂_tψ(0)‖_D(A_0^1/2)'^2). Consider the wave equation with Dirichlet boundary control∂_tty= yin Ω, y_|∂Ω=u,where Ω is a bounded open subset of ^n with C^2 boundary. We set H=H^-1(Ω), and we take A_0=-_D:D(A_0)=H^1_0(Ω)→ H (isomorphism). We have D(A_0^1/2)=L^2(Ω), and the dual space D(A_0^1/2)' (with respect to the pivot space H=H^-1(Ω)) is equal to the dual space (H^2(Ω)∩ H^1_0(Ω))' (with respect to the pivot space L^2(Ω)). The state space is X=D(A_0^1/2)× H = L^2(Ω)× H^-1(Ω)and we have X_1=D(A) = H^1_0(Ω)× L^2(Ω),X_-1 = H^-1(Ω)× (H^2(Ω)∩ H^1_0(Ω))' .The spaces X_α can be characterized easily. Setting U=L^2(∂Ω), the controlled wave equation is written as y_tt=-A_0y+B_0u in D(A_0^1/2)', where B_0^*ϕ = ∂/∂ν_|∂Ω(A_0^-1ϕ) ∀ϕ∈ L^2(Ω) ,or, equivalently, B_0=A_-1D ∈ L(U,D(A_0^1/2)') where D is the Dirichlet mapping. Then B∈ L(U,X_-1/2), but this is the limit case of Lemma <ref>. It is however true that B is admissible, i.e., for every T>0 there exists K_T>0 such that∫_0^T‖∂ψ/∂ν(t)‖^2_L^2(∂Ω)dt≤ K_T(‖ψ(0)‖^2_H_0^1(Ω)+‖∂_tψ(0)‖^2_L^2(Ω))for every solution of ∂_ttψ=ψ in Ω, ψ_|∂Ω=0. This is the hidden regularity property for the Dirichlet wave equation, proved in <cit.> (see also <cit.>) by using multipliers.The multiplier method consists of multiplying the evolution equations by adequate functions and then using integrations by parts. Consider the Dirichlet wave equation with internal control ∂_tty= y+χ_ω uin Ω,y_|∂Ω=0on a bounded open subset Ω with Lipschitz boundary. In this case, we set H=L^2(Ω), we take A_0=-_D:D(A_0)=H^1_0(Ω)∩ H^2(Ω)→ H (isomorphism). We have D(A_0^1/2)=H^1_0(Ω), and the dual space D(A_0^1/2)' (with respect to the pivot space H=L^2) is equal to the dual space H^-1(Ω). 
The state space is X=D(A_0^1/2)× H = H^1_0(Ω)× L^2(Ω), and we have X_1=D(A) = (H^1_0(Ω)∩ H^2(Ω))× H^1_0(Ω) and X_-1 = L^2(Ω)× H^-1(Ω). Setting U=L^2(ω), the bounded control operator B_0∈ L(U,H) is such that, for every u∈ U, B_0u is the extension of u by 0 to the whole Ω. Its admissibility (which is obvious) means that, for every T>0, there exists K_T>0 such that ∫_0^T ∫_ω ψ(t,x)^2 dxdt≤ K_T(‖ψ(0)‖^2_L^2(Ω)+‖∂_tψ(0)‖^2_H^-1(Ω)) for every solution of ∂_ttψ=Δψ in Ω, ψ_|∂Ω=0. We refer to <cit.> (and references cited therein) for many other examples.

§ CONTROLLABILITY

We consider the linear control system (<ref>). We do not assume that B is admissible.

§.§ Definitions

Let us define the concept of controllability. A priori, the most natural concept is to require that, for a given time T, for all y_0 and y_1 in X, there exist a control u∈ L^2([0,T],U) and a solution of (<ref>) such that y(0)=y_0 and y(T)=y_1. In finite dimension, a necessary and sufficient condition is the Kalman condition. In infinite dimension, new difficulties appear. Indeed, let us consider a heat equation settled on a domain Ω of ℝ^n, with either an internal or a boundary control. Due to the smoothing effect (see Remark <ref>, see also <cit.>), whatever the regularity of the initial condition and of the control may be, the solution y(t,·) is a smooth function (of x) as soon as t>0, outside of the control domain. It is therefore hopeless to try to reach a final target y(T)=y_1∈ L^2(Ω) in general (unless y_1 is smooth enough, so as to belong to the range of the heat semigroup). However, for such a parabolic equation, it makes sense to reach either y(T)=0, or to “almost reach” any y_1∈ L^2(Ω). This motivates the following definitions. Let T>0 be arbitrary. The control system (<ref>) is said to be:
* exactly controllable in (the state space) X in time T if, for all (y_0,y_1)∈ X^2, there exists u∈ L^2([0,T],U) such that the solution (<ref>) of (<ref>) satisfies y(T;y_0,u)=y_1;
* approximately controllable in X in time T if, for all (y_0,y_1)∈ X^2, for every ε>0, there exists u∈ L^2([0,T],U) such that ‖ y(T;y_0,u)-y_1‖_X≤ε;
* exactly null controllable in X in time T if, for every y_0∈ X, there exists u(·)∈ L^2([0,T],U) such that y(T;y_0,u)=0.

Using the fact that y(T)=S(T)y_0+L_Tu (see (<ref>)), with L_T defined by (<ref>), we make the following remarks:
* The control system (<ref>) is exactly controllable in X in time T if and only if Ran(L_T)=X. In particular, if this is true, then B must be admissible and thus α(B)≤ 1/2.
* The control system (<ref>) is approximately controllable in X in time T if and only if Ran(L_T)∩ X is dense in X.
* The control system (<ref>) is exactly null controllable in X in time T if and only if Ran(S(T))⊂Ran(L_T).

Note that, if Ran(L_T)=X for some T>0, then Ran(L_t)=X for every t≥ T. Indeed, taking (as in the proof of Lemma <ref>) controls such that u=0 on (0,t-T), we have L_t u= ∫_t-T^t S(t-s)Bu(s) ds = ∫_0^T S(T-τ)Bu(τ+t-T)dτ = L_T u(·+t-T). This shows that if the control system (<ref>) is exactly controllable in time T then it is exactly controllable in any time t≥ T. We speak of approximate null controllability in time T when one takes the target y_1=0 in Definition <ref>; equivalently, Ran(S(T)) is contained in the closure of Ran(L_T).
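In finite dimension all these notions coincide, and the characterization Ran(L_T)=X can be tested directly: L_TL_T^* is then the (invertible) controllability Gramian and yields the minimal-norm control. The following sketch, a toy finite-dimensional analogue of the remarks above with arbitrary illustrative matrices A and B, computes this control and checks that it steers y_0 to y_1.

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional toy analogue of Ran(L_T) = X: the Gramian
# G_T = \int_0^T e^{(T-t)A} B B^T e^{(T-t)A^T} dt is invertible iff the pair
# (A, B) is controllable, and u(t) = B^T e^{(T-t)A^T} G_T^{-1} (y_1 - e^{TA} y_0)
# is the minimal-L^2 control steering y_0 to y_1.  All data are illustrative.
A = np.array([[0.0, 1.0], [-1.0, 0.2]])
B = np.array([[0.0], [1.0]])
T, nt = 2.0, 4000
t, dt = np.linspace(0.0, T, nt, retstep=True)

E = [expm(A * (T - ti)) for ti in t]              # e^{(T-t)A} on the time grid
GT = dt * sum(Ei @ B @ B.T @ Ei.T for Ei in E)    # Gramian, rectangle rule

y0, y1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
eta = np.linalg.solve(GT, y1 - expm(A * T) @ y0)

y = y0.copy()                                     # explicit Euler simulation
for Ei in E:
    u = (B.T @ Ei.T @ eta).item()                 # u(t) = B^T e^{(T-t)A^T} eta
    y = y + dt * (A @ y + B.ravel() * u)
print("y(T) ~", y, "   target:", y1)
```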
Approximate controllability and approximate null controllability (in time T) coincide when S(T)^* is injective, i.e., when Ran(S(T)) is dense in X (see <cit.> for finer results).There are other notions of controllability, depending on the context and on the needs, for instance: spectral controllability, controllability to finite-dimensional subspaces, controllability to trajectories (see, e.g., <cit.> and references therein).§.§ Duality controllability – observabilityAs for the admissibility, we are going to provide a dual characterization of the controllability properties. To this aim, it suffices to combine Remark <ref> with the following general lemma of functional analysis (see <cit.> for the first part and <cit.> for the last part). Let X and Y be Banach spaces, and let F∈ L(X,Y). Then: * Ran(F) is dense in Y (that is, F is "approximately surjective") if and only if F^*∈ L(Y',X') is one-to-one, that is: for every z∈ Y', if F^*z=0 then z=0.* Ran(F)=Y (that is, F is surjective) if and only if F^*∈ L(Y',X') is bounded below, in the sense that there exists C>0 such that ‖ F^*z‖_X'≥ C‖ z‖_Y' for every z∈ Y'.Let X, Y and Z be Banach spaces, with Y reflexive, and let F∈ L(X,Z) and G∈ L(Y,Z). Then Ran(F)⊂Ran(G) if and only if there exists C>0 such that ‖ F^*z‖_X'≤ C‖ G^*z‖_Y' for every z∈ Z'.It is interesting to stress the difference with the finite-dimensional setting, in which a proper subset cannot be dense. The fact that, in infinite dimension, a proper subset may be dense explains the fact that the notion of approximate controllability is distinct from the notion of exact controllability.Now, applying Lemma <ref> to the operators L_T and S(T), we get, with Remark <ref>, the following result.Assume that X and U are reflexive. Let T>0 arbitrary. The control system (<ref>) is: * exactly controllable in X in time T if and only if there exists C_T>0 such that ∫_0^T ‖ B^*S^*(T-t)z‖_U'^2 dt ≥ C_T‖ z‖_X'^2∀ z∈ D(A^*);* approximately controllable in X in time T if and only if ∀ z∈ D(A^*)∀ t∈[0,T] B^*S^*(T-t)z=0⇒ z=0;* exactly null controllable in X in time T if and only if there exists C_T>0 such that∫_0^T ‖ B^*S^*(T-t)z‖_U'^2dt ≥ C_T‖ S(T)^*z‖_X'^2∀ z∈ D(A^*). The inequalities (<ref>) and (<ref>) are called observability inequalities. As in Remark <ref>, they can be written for every z∈ X', provided that B^* be replaced with its Λ-extension B^*_Λ. The largest constant C_T>0 such that (<ref>) (or (<ref>)) holds true is called the observability constant.Like the admissibility inequality, an observability inequality is an energy-like inequality, but in the converse sense. Proving observability inequalities is a challenging issue in general for PDEs. We will give some examples further.In the context of PDEs, (<ref>) corresponds to a unique continuation property, often established thanks to Holmgren's theorem (see <cit.>).Setting φ(t) = S^*(T-t)z, we haveφ̇(t) = -A^*φ(t),φ(T)=z,and the properties above can be interpreted in terms of φ(t) which is the adjoint vector in an infinite-dimensional version of the PMP (see Section <ref>). Usually, we rather consider ψ(t)=φ(T-t)=S^*(t)z, and hence we haveψ̇(t) = A^*ψ(t),ψ(0)=z.This is the adjoint equation.In terms of this adjoint equation, the approximate controllability property is equivalent to the unique continuation property: B^*ψ(t)=0 for every t∈[0,T] implies ψ(·)=0. 
The exact controllability is equivalent to the observability inequality∫_0^T ‖ B^*ψ(t)‖_U'^2dt ≥ C_T ‖ψ(0)‖_X'^2for every solution of the adjoint equation, and the exact null controllability is equivalent to the observability inequality∫_0^T ‖ B^*ψ(t)‖_U'^2dt ≥ C_T ‖ψ(T)‖_X'^2.As announced in Remarks <ref> and <ref> (in Chapter <ref>, Section <ref>), the observability inequality (<ref>) is the infinite-dimensional version of the observability inequality (<ref>) obtained in the finite-dimensional setting.Note that, in finite dimension, the properties (<ref>) and (<ref>) are equivalent, whereas, in the infinite-dimensional setting, there is a deep difference, due to the fact that a proper subset of an infinite-dimensional space may be dense.Gramian operator. As in Remark <ref> and in Theorem <ref>, we can similarly define the Gramian operator in the present context of Banach spaces. Its general definition is the following. Assume that X and U are reflexive. Recall that, in general, we have L_T∈ L(L^2([0,T],U),D(A^*)') and L_T^*∈ L(D(A^*),L^2([0,T],U')) (see the proof of Lemma <ref>). Identifying U≃ U' and L^2([0,T],U)≃ L^2([0,T],U'), we define the Gramian operatorG_T = L_T L_T^* = ∫_0^T S(T-t)BB^*S(T-t)^* dt∈ L(D(A^*),D(A^*)')where, in the formula above, BB^* is to be understood as BJB^* where J:U'→ U is the canonical isomorphism.If the control operator B is admissible then L_T∈ L(L^2([0,T],U),X) and L_T^*∈ L(X',L^2([0,T],U')), and therefore G_T ∈ L(X',X). The expression of G_T is still given by (<ref>) when applied to some element z∈ D(A^*). Using Remark <ref>, it can be noted that the expression of G_T on the whole space X' is given byG_T = ∫_0^T S(T-t)B_Λ B_Λ^*S(T-t)^* dt. If the control system (<ref>) is exactly controllable in time T, then, using (<ref>) and Remark <ref>, it follows that⟨ G_Tz,z⟩_D(A^*)',D(A^*)≥ C_T‖ z‖_X'^2 for every z∈ D(A^*); the converse inequality is satisfied if B is admissible. In other words, we have the following lemma. The control operator B is admissible and the control system (<ref>) is exactly controllable in time T if and only if G_T:X'→ X is an isomorphism satisfying C_T‖ z‖_X'^2≤⟨ G_T z,z⟩_X',X≤ K_T‖ z‖_X'^2 for every z∈ X'. §.§ Hilbert Uniqueness Method (HUM)The Hilbert Uniqueness Method (in short, HUM; see <cit.>) is based on Lemma <ref> by noticing that, in the context of this lemma, the norm ‖·‖_X' is equivalent to the norm given by (⟨ G_T z,z⟩_X',X)^1/2. This gives a characterization of the state space X in which we have exact controllability.HUM can then be stated as follows. Let Y be a reflexive Banach space, let A:D(A)→ Y be an operator generating a C_0 semigroup and let (Y_α)_α∈ be the associated scale of Banach spaces. Let U be a fixed reflexive Banach space and let B∈ L(U,Y_-α) be a control operator. Let Z be the completion of D(A^*) for the norm (⟨ G_T z,z⟩_D(A^*)',D(A^*))^1/2, and let X be a Banach space such that X'=Z. Then X is the Banach space for which Lemma <ref> is satisfied, i.e., for which B is admissible and the control system (<ref>) is exactly controllable in time T in the state space X.HUM may as well be restated in the other way round: the (reflexive Banach) state space X is fixed and one wants to characterize the control Banach space U for which admissibility and exact controllability are satisfied. HUM functional. 
In the framework of Lemma <ref>, the so-called HUM functional J, defined by J(z) = 1/2⟨ G_Tz,z⟩_X',X + ⟨ z,S(T)y_0-y_1⟩_X',X for every z∈ X', is smooth and coercive in X', hence J has a unique minimizer z̅, satisfying 0 = ∇ J(z̅) = G_Tz̅ + S(T)y_0-y_1. Defining the so-called HUM control by u̅(t) = B^*S(T-t)^*z̅ = (L_T^*z̅)(t), the above equality says that S(T)y_0+L_Tu̅=y_1, i.e., y(T;y_0,u̅)=y_1. In other words, the control u̅ steers the control system (<ref>) from y_0 to y_1 in time T. Actually, u̅ is even the control of minimal L^2 norm realizing this controllability property (see <cit.>): this can also be seen by observing that, when wanting to solve the overdetermined equation L_Tu=y_1-S(T)y_0, the control of minimal L^2 norm is given by u = L_T^#(y_1-S(T)y_0), where L_T^# is the pseudo-inverse of L_T (this is indeed a well-known property of the pseudo-inverse); since L_T^#=L_T^*(L_TL_T^*)^-1 = L_T^*G_T^-1, the claim follows.

HUM for exact null controllability. When wanting to realize an exact null controllability result for a control system that is not exactly controllable (like the heat equation), of course the conclusion of Lemma <ref> does not hold. In terms of the Gramian operator, the observability inequality (<ref>) is written as ⟨ G_Tz,z⟩_D(A^*)',D(A^*)≥ C_T ‖ S(T)^*z‖_X'^2 for every z∈ D(A^*). We can however still write the HUM functional as above (with additional care) and determine the minimal L^2 norm control steering the control system (<ref>) to 0 in time T. The HUM functional J is defined as above, for every z∈ D(A^*), with the duality bracket ⟨ G_Tz,z⟩_D(A^*)',D(A^*) for the first term. The functional J is however not coercive in X'. To recover such a property, we define the Banach space 𝒳 as the completion of D(A^*) for the norm (⟨ G_Tz,z⟩_D(A^*)',D(A^*))^1/2. Note that the space 𝒳 is in general much larger than D(A^*) and may even fail to be a space of distributions (see <cit.>). Anyway, there is a unique minimizer z̅∈𝒳 of J, satisfying therefore 0 = ∇ J(z̅) = G_Tz̅ + S(T)y_0, and then the HUM control u̅(t) = B^*S(T-t)^*z̅ = (L_T^*z̅)(t) steers the control system (<ref>) to 0 in time T, and is the control of minimal L^2 norm doing so. This approach provides a generalization of Theorem <ref> (in Section <ref>) to infinite-dimensional autonomous linear control systems.

§.§ Example: the wave equation

The typical (and historical) example of application of HUM is the wave equation, either with an internal control or with a (Dirichlet or Neumann) boundary control. In 1D, the analysis is easy thanks to Fourier series, as elaborated below. The multi-D case is much more complicated and can be treated thanks to microlocal analysis (see comments further).

§.§.§ 1D wave equation with Dirichlet boundary control

Let T>0 and L>0 be fixed. We consider the 1D wave equation with Dirichlet boundary control at the right boundary: ∂_tt y = ∂_xx y, t∈(0,T), x∈ (0,L), y(t,0)=0, y(t,L)=u(t), t∈(0,T), y(0,x)=y_0(x), ∂_t y(0,x)=y_1(x), x∈(0,L), where the state at time t∈[0,T] is (y(t,·),∂_t y(t,·)) and the control is u(t)∈ℝ. Let us establish that this equation is exactly controllable in time T in the space L^2(0,L)× H^-1(0,L) with controls u∈ L^2(0,T) if and only if T≥ 2L. By Theorem <ref> (see also Example <ref>), this is equivalent to establishing the following observability inequality: there exists C_T>0 such that any solution of ∂_ttψ = ∂_xxψ, ψ(t,0)=ψ(t,L)=0, such that (ψ(0),∂_tψ(0))∈ H^1_0(0,L)× L^2(0,L), satisfies ∫_0^T |∂_xψ(t,L)|^2 dt ≥ C_T ( ‖ψ(0)‖_H^1_0(0,L)^2 + ‖∂_tψ(0)‖_L^2(0,L)^2 ).

Given any T≥ 2L, let us establish (<ref>) by using spectral expansions (Fourier series). We expand the solutions of (<ref>) as ψ(t,x) = ∑_k=1^∞ L/kπ ( a_k cos(kπ t/L) + b_k sin(kπ t/L) ) sin(kπ x/L) with (a_k)_k∈ℕ^*∈ℓ^2(ℝ) and (b_k)_k∈ℕ^*∈ℓ^2(ℝ), so that ψ(0,x) = ∑_k=1^∞ L/kπ a_k sin(kπ x/L) and ∂_tψ(0,x) = ∑_k=1^∞ b_k sin(kπ x/L), and thus ‖ψ(0)‖_H^1_0^2 + ‖∂_tψ(0)‖_L^2^2 = ∫_0^L ( |∂_xψ(0,x)|^2 + |∂_tψ(0,x)|^2 ) dx = L/2 ∑_k=1^∞ (a_k^2+b_k^2). Then ∫_0^T |∂_xψ(t,L)|^2 dt ≥ ∫_0^2L |∂_xψ(t,L)|^2 dt = ∫_0^2L |∑_k=1^∞ (-1)^k ( a_k cos(kπ t/L) + b_k sin(kπ t/L) )|^2 dt = ∑_j,k=1^∞ (-1)^j+k ∫_0^2L (a_j cos(jπ t/L)+b_j sin(jπ t/L)) (a_k cos(kπ t/L)+b_k sin(kπ t/L)) dt = L∑_k=1^∞ (a_k^2+b_k^2) = 2 ( ‖ψ(0)‖_H^1_0(0,L)^2 + ‖∂_tψ(0)‖_L^2(0,L)^2 ), and (<ref>) is proved.

Using similar Fourier series expansions, we see that the admissibility property (of the Dirichlet control operator) is satisfied for any T>0: by Proposition <ref> (see also Example <ref>), equivalently, for any T>0 there exists K_T>0 such that ∫_0^T |∂_xψ(t,L)|^2 dt ≤ K_T ( ‖ψ(0)‖_H^1_0(0,L)^2 + ‖∂_tψ(0)‖_L^2(0,L)^2 ) for any solution of (<ref>). Indeed, ∫_0^T |∂_xψ(t,L)|^2 dt ≤ ∫_0^2nL |∂_xψ(t,L)|^2 dt for some n∈ℕ^* such that 2nL≥ T, and we perform the same expansion as above. We conclude that, for T≥ 2L, we have the double inequality C_T ‖(ψ(0),∂_tψ(0))‖_H^1_0× L^2^2 ≤ ∫_0^T |∂_xψ(t,L)|^2 dt ≤ K_T ‖(ψ(0),∂_tψ(0))‖_H^1_0× L^2^2 for all solutions of (<ref>), saying that (∫_0^T |∂_xψ(t,L)|^2 dt )^1/2 is a norm, equivalent to the norm of H^1_0(0,L)× L^2(0,L). This illustrates Lemma <ref>. The term ∫_0^T |∂_xψ(t,L)|^2 dt stands for the Gramian.

Let us finally prove that controllability is lost if T<2L. Let δ>0 be such that T≤ 2L-2δ. We consider a solution of (<ref>) such that ψ(T/2,·) and ∂_tψ(T/2,·) are supported in (0,δ). Then, using the fact that the support of any solution of the wave equation propagates at speed 1, it follows that the observability inequality (<ref>) is not satisfied. Note that the same argument shows that, for T<2L, the wave equation is not approximately controllable either.

§.§.§ 1D Dirichlet wave equation with internal control

Instead of (<ref>), we consider ∂_tt y = ∂_xx y + χ_ω u, t∈(0,T), x∈ (0,L), y(t,0)=0, y(t,L)=0, t∈(0,T), y(0,x)=y_0(x), ∂_t y(0,x)=y_1(x), x∈(0,L), where the control is u(t,x)∈ℝ and ω⊂(0,L) is a measurable subset of positive Lebesgue measure. Let us establish that (<ref>) is exactly controllable in time T≥ 2L in the space H^1_0(0,L)× L^2(0,L) with controls u∈ L^2((0,T)×ω). By Theorem <ref>, this is equivalent to establishing the following observability inequality: for every T≥ 2L, for every measurable subset ω⊂(0,L) of positive measure, there exists C_T(ω)>0 such that ∫_0^T∫_ω ϕ(t,x)^2 dxdt ≥ C_T(ω)( ‖ϕ(0)‖_L^2(0,L)^2 + ‖∂_tϕ(0)‖_H^-1(0,L)^2 ) for every solution ϕ of the adjoint equation ∂_ttϕ-∂_xxϕ=0, ϕ(t,0)=ϕ(t,L)=0.

We consider solutions ϕ of (<ref>) expanded as ϕ(t,x) = ∑_j=1^∞ ( a_j cos(jπ t/L) + b_j sin(jπ t/L) ) sin(jπ x/L) with (a_j)_j∈ℕ^*∈ℓ^2(ℝ) and (b_j)_j∈ℕ^*∈ℓ^2(ℝ), so that ϕ(0,x) = ∑_j=1^∞ a_j sin(jπ x/L) and ∂_tϕ(0,x) = ∑_j=1^∞ jπ/L b_j sin(jπ x/L), and thus ‖ϕ(0)‖_L^2(0,L)^2 + ‖∂_tϕ(0)‖_H^-1(0,L)^2 = L/2 ∑_j=1^∞ (a_j^2+b_j^2). For every T≥ 2L, we have ∫_0^T∫_ω ϕ(t,x)^2 dxdt ≥ ∫_0^2L∫_ω ϕ(t,x)^2 dxdt and ∫_0^2L∫_ω ϕ(t,x)^2 dxdt = L/π ∑_j,k=1^∞ ∫_0^2π (a_j cos(js)+b_j sin(js))(a_k cos(ks)+b_k sin(ks)) ds × ∫_ω sin(jπ x/L) sin(kπ x/L) dx = L ∑_j=1^∞ (a_j^2+b_j^2) ∫_ω sin^2(jπ x/L) dx. We now give two ways to infer the observability inequality (<ref>).

First way. We observe that sin^2(jπ x/L) ⇀ 1/2 as j→+∞ (in the weak L^2 topology). It follows that, for any measurable subset ω⊂(0,L) of positive measure, there exists C(ω)>0 such that ∫_ω sin^2(jπ x/L) dx ≥ C(ω) for every j∈ℕ^*, and then (<ref>) follows. Note that we have used here an information on the high-frequency eigenfunctions ϕ_j(x)=√(2/L) sin(jπ x/L).

Second way. We have the following lemma. Given any measurable subset ω⊂(0,L) of positive Lebesgue measure |ω|>0, we have ∫_ω sin^2(jπ x/L) dx ≥ 1/2( |ω| - L/π sin( π|ω|/L ) ) for every j∈ℕ^*. For a fixed integer j, consider the problem of minimizing the functional K_j(ω')=∫_ω' sin^2(jπ x/L) dx over all possible measurable subsets ω'⊂(0,L) such that |ω'|=|ω|. Identifying the minima (zeros) of sin^2(jπ x/L) and using a bathtub principle argument, it is straightforward to see that there exists a unique (up to subsets of zero measure) optimal set, characterized as a level set of the function x↦sin^2(jπ x/L), which is ω_j^inf=(0,|ω|/2j) ⋃ ⋃_k=1^j-1 (kL/j-|ω|/2j, kL/j+|ω|/2j) ⋃ (L-|ω|/2j,L), and we have ∫_ω_j^inf sin^2(jπ x/L) dx = 2j∫_0^|ω|/2j sin^2(jπ x/L) dx = 2L/π ∫_0^π|ω|/2L sin^2 u du = 1/2( |ω| - L/π sin( π|ω|/L ) ) for any j. Since this value does not depend on j, the lemma follows. We infer from that lemma that ∫_0^T∫_ω ϕ(t,x)^2 dxdt ≥ ( |ω| - L/π sin( π|ω|/L ) ) ( ‖ϕ(0)‖_L^2^2 + ‖∂_tϕ(0)‖_H^-1^2 ), which gives the observability inequality (<ref>). It is interesting to note that, for T=2L, the inequality is sharp and thus the observability constant is C_{T=2L}(ω) = |ω| - L/π sin( π|ω|/L ).

Recalling the admissibility property in Example <ref>, we have obtained that, for T≥ 2L, (∫_0^T∫_ω ϕ(t,x)^2 dxdt)^1/2 is a norm, equivalent to the norm of L^2(0,L)× H^-1(0,L), which illustrates Lemma <ref>. As in the boundary control case, controllability is lost if T is too small: using the finite speed of propagation of the wave equation, it suffices to consider a solution supported in (0,L)∖ω over a small enough time interval.
For instance, if ω=(a,b)⊂(0,L) then the minimal controllability time is T=2max(a,L-b).§.§.§ Multi-D Dirichlet wave equation with internal controlConsider the internally controlled Dirichlet wave equation∂_tty =y + χ_ω u, t∈(0,T),x∈Ω, y(t,x)=0,t∈(0,T), x∈∂Ω y(0,x)=y_0(x), ∂_t y(0,x)=y_1(x),x∈Ω,on a bounded open domain Ω⊂^n having a C^2 boundary, and internal control on a measurable subset ω⊂Ω of positive Lebesgue measure.The admissibility (well-posedness) has been seen in Example <ref>.It is proved in <cit.> that, if ω is open and if the pair (ω,T) satisfies the Geometric Control Condition (GCC), then (<ref>) is exactly controllable in time T in the space H^1_0(Ω)× L^2(Ω) with controls u∈ L^2((0,T)×ω); equivalently, by Theorem <ref>, there exists C_T(ω)>0 such that∫_0^T∫_ωϕ(t,x)^2 dxdt ≥ C_T(ω)( ‖ϕ(0)‖_L^2(Ω)^2 + ‖∂_tϕ(0)‖_H^-1(Ω)^2)for every solution ϕ of the adjoint equation ∂_ttϕ-ϕ=0 with ϕ=0 along the boundary of Ω. The GCC stipulates that any geodesic ray, propagating in Ω (seen as a billiard) at speed 1 and reflecting at the boundary according to the laws of classical optics (see Figure <ref>), meets the open set ω within time T. On this figure, on the right, Ω is a square in the plane and ω is an internal disk: the GCC is never satisfied because of the existence of trapped rays (bouncing balls).The proof of (<ref>) requires significantly more elaborate tools than in the 1D case: microlocal analysis, propagation of singularities, defect measures. The “almost-equivalence" between GCC and the observability inequality (<ref>) is studied in <cit.>.We have obtained that, under GCC, (∫_0^T∫_ωϕ(t,x)^2 dxdt)^1/2 is a norm, equivalent to the norm of L^2(Ω)× H^-1(Ω). This illustrates Lemma <ref>. There are similar results for the boundary control case. Note that, as initially developed in <cit.>, under stronger geometric conditions on ω, there are more elementary proofs based on the multiplier method (see <cit.> and <cit.>). Carleman estimates can also be used, with the advantage of allowing to tackle lower-order and/or low regularity terms (see <cit.>). §.§ Example: the heat equation§.§.§ Dirichlet heat equation with internal controlLet Ω⊂^n be a bounded open set having a C^2 boundary, and let ω⊂Ω be an open subset. Like in Section <ref>, we consider the heat equation with internal control and Dirichlet boundary conditions∂_ty= y+χ_ω uin Ω, y_|∂Ω=0, y(0)=y_0∈ L^2(Ω).The admissibility property of the control operator, as discussed in Section <ref>, is obvious since B is bounded.Due to the smoothing effect of the heat semigroup (e^ty_0 is smooth on Ω for every t>0, whatever the regularity of y_0 may be), the above heat equation cannot be exactly controllable in X=L^2(Ω). But it is approximately controllable in X and exactly null controllable in X in any time T>0.Indeed, by Theorem <ref>, the approximate controllability property is equivalent to the following property: given any T>0, given any solution ψ of ∂_tψ=ψin Ω,ψ_|∂Ω=0such that ψ(0)∈ H^2(Ω)∩ H^1_0(Ω),if ψ(t,x)=0 for all t∈[0,T] and x∈ω then ψ≡ 0. This unique continuation property is a consequence of Holmgren's theorem (see <cit.>).Exact null controllability is equivalent, by Theorem <ref>, to the following observability inequality: given any T>0, there exists C_T(ω)>0 such that∫_0^T ∫_ωψ(t,x)^2dxdt ≥ C_T(ω) ‖ψ(T)‖_X^2for any solution of (<ref>). The observability inequality (<ref>) has been established in <cit.>. 
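Although such Carleman-type arguments are beyond numerical reach, the constant C_T(ω) of (<ref>) can be approximated in 1D by minimizing the observability quotient over a finite spectral subspace, which produces an upper bound on the true constant. A hedged sketch, with illustrative parameters (keep N and T moderate, since ‖ψ(T)‖^2 degenerates quickly and the generalized eigenvalue problem becomes numerically singular):

```python
import numpy as np
from scipy.linalg import eigh

# Hedged sketch: Galerkin estimate of the observability constant C_T(omega)
# for the 1D Dirichlet heat equation on (0, pi), observation on omega = (a, b).
# With psi(t) = sum_j c_j e^{-j^2 t} phi_j, one has
#   \int_0^T \int_omega psi^2 dx dt = c^T Q c   and   ||psi(T)||^2 = c^T D c,
# so the infimum of the quotient over the N-mode subspace is the smallest
# eigenvalue of the pencil (Q, D) -- an upper bound on the true C_T(omega).
# D has entries e^{-2 j^2 T}, hence the warning about large N or T.
N, T = 10, 0.1
a, b = 0.2 * np.pi, 0.5 * np.pi
j = np.arange(1, N + 1)

x = np.linspace(a, b, 4000)
P = np.sqrt(2.0 / np.pi) * np.sin(np.outer(j, x))
M = (P * (x[1] - x[0])) @ P.T                    # \int_omega phi_j phi_k dx

S = j[:, None] ** 2 + j[None, :] ** 2
Q = M * (1.0 - np.exp(-S * T)) / S               # time integral computed exactly
D = np.diag(np.exp(-2.0 * j ** 2 * T))           # coefficients of ||psi(T)||^2

C_T = eigh(Q, D, eigvals_only=True)[0]
print("Galerkin upper bound on C_T(omega):", C_T)
```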
Nowadays, it seems that the most powerful tool in order to establish such inequalities in the parabolic setting is the Carleman estimates (see <cit.> or <cit.>). Even in 1D, the proof of (<ref>) by a Carleman estimate is quite technical. It can however be noted that, in 1D, the exact null controllability property can also be proved thanks to harmonic analysis considerations, by applying the moment method (see Section <ref>). §.§.§ Heat equation with Dirichlet boundary controlLike in Section <ref>, we consider the heat equation with Dirichlet boundary control∂_ty= yin Ω, y_|∂Ω=u, y(0)=y_0∈ L^2(Ω),where Ω⊂^n is a bounded open subset having a C^2 boundary. We have seen in that section that, setting X=H^-1(Ω) and U=L^2(∂Ω), the control operator is admissible, but that admissibility is not true if one takes X=L^2(Ω).According to <cit.>, the heat equation (<ref>) is exactly null controllable in X=H^-1(Ω) in any time T>0; equivalently, for every T>0 there exists C_T>0 such that∫_0^T‖∂ψ/∂ν_|∂Ω (t)‖_L^2(∂Ω)^2dt≥ C_T‖ψ(T)‖^2_H^1_0(Ω)for any solution of ∂_tψ=ψin Ω, ψ_|∂Ω=0,with ψ(0)∈H^1_0(Ω). In most of the existing literature (see, e.g., <cit.>) one can find the observability inequality (<ref>) with the L^2 norm at the right-hand side, thus saying by duality that the heat equation (<ref>) is exactly null controllable in L^2(Ω) in any time T>0. Actually, it is exactly controllable in any H^s(), for any s∈ (and in particular for any s<0): indeed, taking any y_0∈ H^s(Ω), considering an appropriate extension S(t) of the Dirichlet heat semigroup, one has S(t)y_0∈ H^1_0(Ω)∩ C^∞(Ω) for any t>0 and in particular for t=T/2; then apply the controllability property in time T/2 to steer S(T/2)y_0 to 0. Then, by duality, the observability inequality (<ref>) remains true when replacing the H^1_0 norm at the right-hand side with the norm of X_α, for any α∈, where (X_α)_α∈ is the family of Dirichlet spaces constructed in Example <ref>.§.§ Pontryagin maximum principleFormally, HUM is obtained by applying the PMP to the LQ optimal control problem consisting of steering the control system ẏ=Ay+Bu from y(0)=y_0 to y(T)=y_1 in time T, by minimizing the cost functional ∫_0^T‖ u(t)‖_U^2dt. We have anyway to be careful there. Indeed, as shortly discussed in Section <ref>, the generalization of the PMP to the infinite-dimensional setting, done for instance in <cit.> (and proved, in this book, by using the Ekeland variational principle), requires in general that the final state y(T) is subject to a finite number only of scalar constraints. There are counterexamples to the statement of the PMP whenever y(T)=y_1∈ X when X=+∞, i.e., when there are an infinite number of final scalar constraints (such counterexamples are easy to design by considering systems enjoying approximate but not exact controllability: see Example <ref> hereafter). Nevertheless, under exact controllability properties, the PMP is valid for LQ optimal control problems.More generally, as mentioned in Section <ref>, the PMP is generalized to infinite dimension, with the same statement as in Theorem <ref>, under the following assumption: There exists z_1∈Conv(M_1) such that Span(M_1-z_1) is of finite codimension in X.Roughly speaking this means that we can impose only a finite number of scalar constraints on y(T).We do not give more details since, with this additional finite codimensionality assumption, the statement is the same. 
Following <cit.>, let us design an example of an optimal control problem in infinite dimension on which the expected PMP statement fails. Let X be an infinite-dimensional separable Hilbert space. Consider the control system ẏ(t)=Ay(t)+bu(t), with initial condition y(0)=0, where A:D(A)→ X is an operator generating a C_0 semigroup (S(t))_t≥ 0 and where b∈ X is fixed; here, the control u is a real-valued function. The idea is to make assumptions ensuring that we can find a point y_1∈ X that can be reached from 0 in time 1 with only one control. The constant control u=1 steers in time 1 the control system to y_1=y(1)=∫_0^1 S(t)bdt = (S(1)-id)A^-1b (the latter formula is obtained by assuming that A is invertible). Let us assume that A is self-adjoint negative, so that there exists a Hilbert basis (ϕ_j)_j∈^* of eigenfunctions, i.e., Aϕ_j=-λ_jϕ_j with λ_j>0 for every j∈^*.For any control u̅ steering the system from y(0)=0 to y(1)=y_1, we must have ∫_0^1 S(1-t)b(u̅(t)-1)dt=0, i.e., expanding b=∑_j∈^*b_jϕ_j,∑_j∈^*∫_0^1 e^-(1-t)λ_j(u̅(t)-1)dt b_jϕ_j = 0 .Assuming that b_j≠ 0 for every j∈^*, we infer that ∫_0^1 e^tλ_j(u̅(t)-1)dt=0 for every j∈^*. Let us now further assume that ∑_j∈^*1/λ_j=+∞. Then, by the Müntz-Szász theorem (see Remark <ref> in Section <ref> further), the family (e^tλ_j)_j∈^* is complete in L^2(0,1). It then follows that u̅=1. We have thus proved that, under the above assumptions, u̅=1 is the unique solution steering the system from y(0)=0 to y(1)=y_1.Therefore, the control u̅=1 is optimal for any cost functional (and, by the way, it must be abnormal but we will not use this fact). Let us consider the cost functional C(u) = ∫_0^1 ( ⟨ a,y(t)⟩_X + cu(t)) dt ,where a∈ X and c∈ are fixed, and let us assume that the controls are subject, for instance, to the constraint | u(t)|≤ 2 for almost every t∈[0,1]. The Hamiltonian of the optimal control problem isH(y,p,p^0,u) = ⟨ p,Ay⟩_X+⟨ p,b⟩_Xu+p^0⟨ a,y⟩_X+p^0cu.Let us prove, by contradiction, that the statement of the PMP is not satisfied for the optimal control u̅=1. Otherwise, there would exist an adjoint p(·) satisfying ṗ=-Ap-p^0a and thus, by integration,p(t)= S(1-t)p(1) + p^0(S(1-t)-id)A^-1a. Besides, the condition ∂ H/∂ u=0 gives ⟨ p,b⟩_X+p^0c=0 on [0,1], i.e., ⟨ p(1)+p^0A^-1a, S(1-t)b⟩_X + p^0(c-⟨ A^-1a,b⟩_X) = 0∀ t∈[0,1], with S(1-t)b = ∑_j∈^* e^-(1-t)λ_jb_jϕ_j. By linear independence of the exponential functions, we infer that all terms are equal to 0. But then, assuming that c≠⟨ A^-1a,b⟩_X, it follows that p(1)=0 and p^0=0, which is a contradiction. § FURTHER RESULTS§.§ Kalman condition in infinite dimensionIt is interesting to mention that the unique continuation property implies an infinite-dimensional version of the Kalman condition. A simple sufficient condition is the following. We assume that X is reflexive and that B∈ L(U,X) is a bounded control operator. We setU_∞ = { u∈ U | Bu∈⋂_k=1^+∞ D(A^k) }.If the set 𝒦_T=Span{ A^k Bu | u∈ U_∞, k∈} is dense in X then the control system (<ref>) is approximately controllable in any time T>0. Note that the set 𝒦_T is the infinite-dimensional version of the image of the Kalman matrix in finite dimension. We use the equivalence between approximate controllability and (<ref>). Note that, in (<ref>), it suffices to take z in any dense subspace of D(A^*). Let z∈∩_k=1^+∞ D((A^*)^n) (which is dense in D(A^*)) be such that B^*S(T-t)^*z=0 for every t∈[0,T]. 
Then, by successive derivations with respect to t, and taking t=T, we obtain B^*(A^k)^*z=0, hence ⟨ z,A^kBu⟩_X',X=0, and therefore z=0 because 𝒦_T is dense in X. We refer the reader to <cit.> for a more precise result (and an almost necessary and sufficient condition).§.§ Necessary conditions for exact controllabilityLet us assume that X is of infinite dimension, and let us provide general conditions under which the control system (<ref>) is never exactly controllable in finite time T, with controls in L^2([0,T],U). We have already seen that exact controllability implies that the control operator B is admissible (and thus α(B)≤ 1/2). Under any of the following assumptions: * B∈ L(U,X) is compact; * B∈ L(U,X) is bounded and S(t) is compact for every t>0;* X≃ X' and U≃ U' are Hilbert spaces, -A is a self-adjoint positive operator with compact inverse, and B∈ L(U,X_-1/2) (and thus B is admissible and α(B)≤ 1/2, see Section <ref>); the control system (<ref>) is not exactly controllable in any time T>0 (with controls in L^2([0,T],U)). For instance, B is compact if U is finite dimensional. Then, the first point implies that it is impossible to control exactly an infinite-dimensional system with a finite number of controls (see <cit.>). The second point applies for instance to the heat equation with internal control.The third point applies for instance to the heat equation with Neumann boundary control.In the first case where B is compact, it is easy to see that the operator L_T is compact. The conclusion follows since X is infinite dimensional, by the Riesz compactness lemma. In the second case (compact semigroup), for every ε>0, we define the operator L_T,ε:L^2([0,T],U)→ X by L_T,εu=∫_0^T-εS(T-t)Bu(t)dt. Clearly, L_T,ε converges strongly L_T as ε tends to 0. Besides, using S(T-t)=S(ε)S(T-ε-t), we get that L_T,ε = S(ε) L_T-ε, and hence we infer that L_T,ε is a compact operator, for every ε>0. Therefore L_T is compact and the conclusion follows as previously.In the third case, recall that X_1/2=D((-A)^1/2) is the completion of D(A) for the norm √(-(Ax,x)_X), and X_-1/2=X_1/2' with respect to the pivot space X (see also Remark <ref>). Let (ϕ_j)_j∈^* an orthonormal basis of (unit) eigenvectors of -A associated with eigenvalues λ_j>0, with λ_j→ +∞ as j→ +∞. Firstly, we have∫_0^T ‖ B^*S(T-t)^*ϕ_j‖_U^2 dt= ∫_0^T e^-λ_j(T-t)‖ B^*ϕ_j‖_U^2 dt∼1/λ_j‖ B^*ϕ_j‖_U^2 = ‖ B^*(-A)^-1/2ϕ_j‖_U^2as j→+∞. Secondly, since the operator (-A)^-1/2 is compact, it follows that the operator B^*(-A)^-1/2∈ L(X,U) is compact as well. Thirdly, we claim that the sequence (ϕ_j)_j∈^* converges to 0 for the weak topology of X. Indeed, since ∑_j=1^+∞(x,ϕ_j)_X^2 <+∞ for every x∈ X (by Parseval for instance), it follows that (x,ϕ_j)_X→ 0 as j→+∞, for every x∈ X, whence the claim. Since B^*(-A)^-1/2∈ L(X,U) is compact and since ϕ_j⇀ 0, it follows that B^*(-A)^-1/2ϕ_j converges strongly to 0. Then, from (<ref>), we infer that the observability inequality (<ref>) does not hold true.We refer to <cit.> for such arguments. §.§ Moment methodThe moment method, relying on harmonic analysis results, has been used in the 70s to establish the first exact controllability results, essentially in 1D (see <cit.>).To explain the moment method, let us consider the 1D heat equation on (0,π) with internal control and with Dirichlet boundary conditions∂_t y = ∂_xx y + χ_ω u,y(t,0)=y(t,π)=0 ,y(0)=y_0∈ L^2(0,π).The eigenfunctions √(2/π)sin(jx) of the Dirichlet-Laplacian, associated with the eigenvalues λ_j=-j^2, make an orthonormal basis of L^2(0,π). 
Expanding in series, we have y_0(x) = ∑_j=1^+∞ a_jsin(jx) with (a_j)_j∈^*∈ℓ^2(), and writing y(t,x)= ∑_j=1^+∞ y_j(t)sin(jx), we get ẏ_j(t) = -j^2 y_j(t) + ∫_ω u(t,x)sin(jx)dx and thusy_j(T) = e^-j^2Ta_j+∫_0^T e^-j^2(T-t)∫_ω u(t,x)sin(jx) dxdt.In order to realize the exact null controllability in time T, we wish to find some controls u such that∀ j∈^*∫_0^T∫_ω u(t,x) e^-j^2(T-t)sin(jx)dxdt = -a_j e^-j^2T.By the Müntz-Szász theorem, there exists a sequence (θ_T^j)_j∈^* in L^2(0,T), spanning a proper subspace of L^2(0,T), that is biorthogonal to the sequence of functions t↦ e^-j^2t, j∈^*, i.e., ∫_0^T e^-j^2tθ_T^k(t)dt = δ_jk for all j,k∈^* (see <cit.> for fine properties).We search controls u satisfying (<ref>), of the formu(t,x) = ∑_k=1^+∞ b_kθ_T^k(T-t)sin(kx) ,which gives b_j∫_ωsin^2(jx)dx=-a_je^-j^2T for every j∈^*. We conclude thatu(t,x) = -∑_k=1^+∞ a_k e^-k^2Tθ_T^k(T-t)sin(kx)/∫_ωsin^2(ky)dy.Thanks to Lemma <ref>, this function is well defined and is in L^2((0,T)×(0,π)).Abstract generalization. Let X and U be Hilbert spaces. Let A:D(A)→ X be a densely defined operator, assumed to be self-adjoint and of compact inverse. Let (ϕ_j)_j∈^* be a Hilbert basis of X consisting of eigenvectors of A. Note that ϕ_j∈ X_1=D(A)=D(A^*) for every j∈^*. Let B∈ L(U,D(A^*)') be an admissible control operator. We consider the control system (<ref>) with y(0)=y_0=∑_j∈^* a_jϕ_j. Expanding y(t)= ∑_j=1^+∞ y_j(t)ϕ_j, we have ẏ_j=λ_j y_j+⟨ Bu,ϕ_j⟩_X_-1,X_1 and thus∀ j∈^* y_j(T) = e^λ_jTa_j+∫_0^T e^λ_j(T-t) (u(t),B^*ϕ_j)_U dt,and we want to solve y_j(T)=0 for every j∈^*.We assume that there exists a sequence (θ_T^j)_j∈^* in L^2(0,T) that is biorthogonal to the family Λ of functions t↦ e^λ_jt, j∈^* (see Remark <ref> below).We search u in the particular formu(t)=∑_k∈^* b_kθ_T^k(T-t) B^*ϕ_k .Then, solving y_j(T)=0 for every j∈^* is equivalent to requiring that b_j‖ B^*ϕ_j‖^2_U=-a_je^λ_jT, and thus,u(t)=-∑_k∈^* a_k e^λ_kTθ_T^k(T-t) B^*ϕ_k/‖ B^*ϕ_k‖^2_U .Showing that such a series gives a well-defined function requires to establish lower estimates of ‖ B^*ϕ_k‖^2_U. Such a biorthogonal sequence (θ_T^j)_j∈^* exists if and only if the family Λ is minimal, that is, every element t↦ e^-λ_jt lies outside of the closure in L^2(0,T) of the vector space spanned by all other elements t↦ e^-λ_kt, with k≠ j. If this condition is fulfilled, then the biorthogonal sequence (θ_T^j)_j∈^* is uniquely determined if and only if the family Λ is complete in L^2(0,T). Note anyway that the biorthogonal sequence is difficult to construct in practice.It is well known, by the Müntz-Szász theorem, that the family Λ is complete in L^2(0,T) (but not independent) if and only if∑_j∈^*1/(λ_j)+λ=+∞ for some real number λ such that (λ_j)+λ>0 for every j∈^* (for instance, λ=- (λ_1)+1). At the opposite, if this series is convergent then the closure of the span of Λ is a proper subspace of L^2(0,T), moreover Λ is minimal and thus a biorthogonal sequence exists. Then, here, we are led to assume that the series is convergent, which is a quite strong restriction on the parabolic system under consideration. Relationship with HUM. 
Within the previous abstract general framework, let us solve the moment equations y_j(T)=0 for every j∈^*, in another way:using (<ref>), it suffices to search a control u such that( u , (t,x)↦ e^λ_j(T-t)(B^*ϕ_j)(x) )_L^2([0,T],U) = -e^λ_jTa_j∀ j∈^*and, generalizing the previous approach, we can solve this moment problem by using, if it exists, a biorthogonal sequence (u_j)_j∈^* to the sequence of (time-space) functions (t,x)↦ e^λ_j(T-t)(B^*ϕ_j)(x), i.e., noting that S(T-t)^*ϕ_k = e^λ_k(T-t)ϕ_k, a sequence satisfying( u_j , B^*S(T-t)^*ϕ_k )_L^2([0,T],U) = δ_jk∀ j,k∈^* .Note that, when such a family exists, u_j is a control steering the control system (<ref>) from -e^-λ_jTϕ_j to 0 in time T (this is related to the notion of spectral controllability).There are plenty of ways for designing such controls u_j, when this is possible.For every j∈^*, let u_j∈ L^2([0,T],U) be the (unique) HUM control steering the initial condition -e^-λ_jTϕ_j to 0 (we assume that this is possible): we have u_j(t)=B^*S(T-t)^*ψ_j where G_Tψ_j=ϕ_j (note that u_j is the control of minimal L^2 norm, and ‖ u_j‖^2_L^2([0,T],U)=(G_Tψ_j,ψ_j)_U=(ϕ_j,ψ_j)_U). Then, obviously, (<ref>) holds.Now, the (unique) HUM control u such that y(T)=0 is given by u(t)=B^*S(T-t)^*ψ with S(T)y_0+G_Tψ=0. Since y_0=∑_j∈^* a_jϕ_j, it easily follows by linearity that, formally,u = -∑_j=1^+∞ a_j e^λ_jTu_j. Of course, all above computations are formal, and it may be difficult to establish the convergence of the series in practical examples. §.§ Equivalence between observability and exponential stabilityThe following result is a generalization of the main result of <cit.>. Let X be a Hilbert space, let A:D(A)→ X be a densely defined skew-adjoint operator, let B be a bounded self-adjoint nonnegative operator on X. We have equivalence of: * There exist T>0 and C>0 such that every solution of the conservative[It is said to be conservative because, since A is skew-adjoint, we have ‖ϕ(t)‖_X=Cst=‖ϕ(0‖_X for every t∈.] equationϕ̇(t)+Aϕ(t)=0 satisfies the observability inequality‖ϕ(0)‖_X^2 ≤ C∫_0^T‖ B^1/2ϕ(t)‖_X^2 dt . * There exist C_1>0 and δ>0 such that every solution of the damped equation ẏ(t)+Ay(t)+By(t)=0 satisfiesE_y(t) ≤ C_1 E_y(0)e^-δ t∀ t≥ 0 ,whereE_y(t) = 1/2‖ y(t)‖_X^2. Let us first prove that the first property implies the second one. Let y be a solution of the damped equation. Let ϕ be the solution of ϕ̇+Aϕ=0, ϕ(0)=y(0). Setting θ=y-ϕ, we have θ̇+Aθ+By=0, θ(0)=0. Then, taking the scalar product with θ, since A is skew-adjoint, we get(θ̇+By,θ)_X=0. But, setting E_θ(t)=1/2‖θ(t)‖_X^2, we have Ė_θ = -(By,θ)_X. Then, integrating a first time over [0,t], and then a second time over [0,T], since E_θ(0)=0, we get∫_0^T E_θ(t) dt = - ∫_0^T∫_0^t ( By(s),θ(s) )_Xdsdt = -∫_0^T (T-t) ( B^1/2y(t),B^1/2θ(t) )_Xdt ,where we have used the Fubini theorem. Hence, using the Cauchy-Schwarz inequality and then the Young inequality ab≤α/2a^2+1/2αb^2 with α=2T ‖ B^1/2‖, we infer that1/2∫_0^T‖θ(t)‖_X^2 dt≤ T ‖ B^1/2‖∫_0^T ‖B^1/2y(t)‖_X‖θ(t)‖_X dt ≤ T^2 ‖ B^1/2‖^2 ∫_0^T ‖B^1/2y(t)‖_X^2 dt +1/4∫_0^T ‖θ(t)‖_X^2 dt,and therefore∫_0^T ‖θ(t)‖_X^2 dt ≤ 4 T^2 ‖ B^1/2‖^2 ∫_0^T ‖B^1/2y(t)‖_X^2 dt.Now, since ϕ=y-θ, it follows that∫_0^T ‖ B^1/2ϕ(t)‖_X^2 dt ≤ 2 ∫_0^T ‖ B^1/2y(t)‖_X^2 dt + 2∫_0^T ‖ B^1/2θ(t)‖_X^2 dt ≤ (2+8 T^2 ‖ B^1/2‖^4) ∫_0^T ‖ B^1/2 y(t)‖_X^2 dt.Finally, sinceE_y(0)=E_ϕ(0)=1/2‖ϕ(0)‖_X^2 ≤C/2∫_0^T‖ B^1/2ϕ(t)‖_X^2 dtit follows that E_y(0) ≤ C (1+4 T^2 ‖ B^1/2‖^4) ∫_0^T ‖ B^1/2 y(t)‖_X^2 dt. 
Besides, one has E_y'(t)=-‖ B^1/2 y(t)‖_X^2, and then ∫_0^T ‖ B^1/2 y(t)‖_X^2 dt = E_y(0)-E_y(T). Therefore E_y(0) ≤ C (1+4 T^2 ‖ B^1/2‖^4) (E_y(0)-E_y(T)) = C_1(E_y(0)-E_y(T)), and hence E_y(T)≤ (C_1-1)/C_1 E_y(0) = C_2 E_y(0), with C_2<1. Actually this can be done on every interval [kT,(k+1)T], and it yields E_y((k+1)T)≤ C_2 E_y(kT) for every k∈ℕ, and hence E_y(kT)≤ E_y(0)C_2^k. For every t∈ [kT,(k+1)T), noting that k=[t/T]> t/T-1 and that ln(1/C_2)>0, it follows that C_2^k=exp(k ln C_2)=exp(-k ln(1/C_2)) ≤ (1/C_2) exp( -(ln(1/C_2)/T) t ), and hence E_y(t)≤ E_y(kT)≤ (1/C_2) E_y(0) exp(-δ t) with δ=ln(1/C_2)/T>0, which is the desired exponential decrease.

Let us now prove the converse: assume the exponential decrease, and let us prove the observability property. Let ϕ be a solution of the conservative equation. Let y be a solution of the damped equation such that y(0)=ϕ(0). From the exponential decrease inequality, one has ∫_0^T ‖ B^1/2 y(t)‖_X^2 dt = E_y(0)-E_y(T) ≥ (1-C_1e^-δ T)E_y(0) = C_2 E_y(0), and for T>0 large enough there holds C_2=1-C_1e^-δ T>0. Then we make the same proof as before, starting from ϕ̇+Aϕ=0, which we write in the form ϕ̇+Aϕ +Bϕ=Bϕ, and considering the solution of ẏ + Ay + By=0, y(0)=ϕ(0). Setting θ=ϕ-y, we have θ̇+Aθ+Bθ=Bϕ, θ(0)=0. Taking the scalar product with θ, since A is skew-adjoint, we get (θ̇+Bθ,θ)_X=(Bϕ,θ)_X, and therefore Ė_θ + (Bθ,θ)_X = (Bϕ,θ)_X. Since (Bθ,θ)_X=‖ B^1/2θ‖_X^2≥ 0, it follows that Ė_θ≤ (Bϕ,θ)_X. As before, we integrate a first time over [0,t] and a second time over [0,T]; since E_θ(0)=0, this yields ∫_0^T E_θ(t) dt ≤ ∫_0^T ∫_0^t ( Bϕ(s),θ(s))_X dsdt = ∫_0^T (T-t) ( B^1/2ϕ(t),B^1/2θ(t))_X dt. Thanks to the Young inequality, we get, exactly as before, 1/2∫_0^T‖θ(t)‖_X^2 dt ≤ T ‖ B^1/2‖∫_0^T ‖B^1/2ϕ(t)‖_X ‖θ(t)‖_X dt ≤ T^2 ‖ B^1/2‖^2 ∫_0^T ‖B^1/2ϕ(t)‖_X^2 dt + 1/4∫_0^T ‖θ(t)‖_X^2 dt, and finally ∫_0^T ‖θ(t)‖_X^2 dt ≤ 4 T^2 ‖ B^1/2‖^2 ∫_0^T ‖B^1/2ϕ(t)‖_X^2 dt. Now, since y=ϕ-θ, it follows that ∫_0^T ‖ B^1/2 y(t)‖_X^2 dt ≤ 2 ∫_0^T ‖ B^1/2ϕ(t)‖_X^2 dt + 2∫_0^T ‖ B^1/2θ(t)‖_X^2 dt ≤ (2+8 T^2 ‖ B^1/2‖^4) ∫_0^T ‖ B^1/2ϕ(t)‖_X^2 dt. Now, using (<ref>) and noting that E_y(0)=E_ϕ(0), we infer that C_2 E_ϕ(0) ≤ (2+8 T^2 ‖ B^1/2‖^4) ∫_0^T ‖ B^1/2ϕ(t)‖_X^2 dt. This is the desired observability inequality.

This result says that the observability property for a linear conservative equation is equivalent to the exponential stability property for the same equation in which a linear damping has been added. This result has been written in <cit.> for second-order equations, but the proof works exactly in the same way for more general first-order systems, as shown here. The above proof uses in a crucial way the fact that the operator B is bounded. We refer to <cit.> for a generalization to unbounded operators with degree of unboundedness ≤ 1/2 (only for second-order equations), with a proof using Laplace transforms, under a condition on the unboundedness of B that is related to “hidden regularity” results. For instance this works for waves with a nonlocal operator B corresponding to a Dirichlet condition, in the state space L^2× H^-1, but not for the usual Neumann one, in the state space H^1× L^2 (except in 1D).

§.§ 1D semilinear heat equation

Let L>0 be fixed and let f:ℝ→ℝ be a function of class C^2 such that f(0)=0. We consider the 1D semilinear heat equation ∂_t y = ∂_xx y+f(y), y(t,0)=0, y(t,L)=u(t), where the state is y(t,·):[0,L]→ℝ and the control is u(t)∈ℝ. We want to design a feedback control locally stabilizing (<ref>) asymptotically to 0. Note that this cannot be global, because we can have other steady-states (a steady-state is a function y∈ C^2(0,L) such that y''(x)+f(y(x))=0 on (0,L) and y(0)=0).
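The obstruction is easy to visualize numerically: nontrivial steady-states can be computed by a shooting method, as in the hedged sketch below, where the nonlinearity f(y)=y+y^3/6 is an illustrative choice not prescribed by the text; any profile so obtained is a steady-state of the controlled equation associated with the constant control u=y(L).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged side computation: a nontrivial steady-state y'' + f(y) = 0, y(0) = 0,
# obtained by shooting on the slope y'(0).  The nonlinearity f(y) = y + y^3/6
# and L = pi are illustrative choices only.
L = np.pi
f = lambda y: y + y ** 3 / 6.0

def steady_state(slope):
    # integrate y'' = -f(y) as the first-order system (y, y')
    return solve_ivp(lambda x, Y: [Y[1], -f(Y[0])], (0.0, L), [0.0, slope],
                     max_step=1e-3, rtol=1e-10, atol=1e-12)

sol = steady_state(0.5)                 # y'(0) = 0.5 gives a nontrivial profile
print("u = y(L) =", sol.y[0, -1], "   max |y| =", np.abs(sol.y[0]).max())
```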
By the way, here, without loss of generality we consider the steady-state 0. Let us first note that, for every T>0, (<ref>) is well posed in the Banach space Y_T = L^2([0,T],H^2(0,L))∩ H^1([0,T],L^2(0,L)), which is continuously embedded in L^∞((0,T)×(0,L)).[Indeed, considering v∈ L^2([0,T],H^2(0,L)) with v_t∈ H^1([0,T],L^2(0,L)), writing v=∑_j,kc_jke^ijte^ikx, we have∑_j,k| c_jk|≤( ∑_j,k1/1+j^2+k^4)^1/2( ∑_j,k (1+j^2+k^4)| c_jk|^2 )^1/2and these series converge, whence the embedding, allowing to give a sense to f(y).Now, if y_1 and y_2 are solutions of (<ref>) on [0,T], then y_1=y_2. Indeed,v=y_1-y_2 is solution of v_t=v_xx+av, v(t,0)=v(t,L)=0, v(0,x)=0, with a(t,x)=g(y_1(t,x),y_2(t,x)) where g is a function of class C^1. We infer that v=0.] First of all, in order to end up with a Dirichlet problem, we set z(t,x)=y(t,x)-x/Lu(t). Assuming (for the moment) that u is differentiable, we set v(t)=u'(t), and we consider in the sequel v as a control. We also assume that u(0)=0. Then we have∂_t z=∂_xx z+f'(0)z+x/Lf'(0)u-x/Lv+r(t,x), z(t,0)=z(t,L)=0,with z(0,x)=y(0,x) and (by performing a second-order Taylor expansion of f with integral remainder)r(t,x)= (z(t,x)+x/Lu(t))^2∫_0^1 (1-s)f”(sz(s,x)+sx/Lu(s))ds.Note that, given B>0 arbitrary, there exist positive constants C_1 and C_2 such that, if | u(t)|≤ B and ‖ z(t,·)‖_L^∞(0,L)≤ B, then‖ r(t,·)‖_L^∞(0,L)≤ C_1(u(t)^2 +‖ z(t,·)‖_L^∞(0,L)^2) ≤ C_2(u(t)^2 +‖ z(t,·)‖_H^1_0(0,L)^2) .In the sequel, r(t,x) will be considered as a remainder.We define the operator A=+f'(0)id on D(A) = H^2(0,L)∩ H_0^1(0,L), so that (<ref>) is written asu̇=v,∂_t z=Az+au+bv+r, z(t,0)=z(t,L)=0,with a(x) = x/Lf'(0) and b(x)=-x/L.Since A is self-adjoint and has a compact resolvent, there exists a Hilbert basis (e_j)_j≥ 1 of L^2(0,L), consisting of eigenfunctions e_j∈ H^1_0(0,L)∩ C^2([0,L]) of A, associated with eigenvalues (λ_j)_j≥ 1 such that -∞<⋯<λ_n<⋯<λ_1 and λ_n→-∞ as n→+∞.Any solution z(t,·)∈ H^2(0,L)∩ H^1_0(0,L) of (<ref>), as long as it is well defined, can be expanded as a series z(t,·)=∑_j=1^∞z_j(t)e_j(·) (converging in H_0^1(0,L)), and then we have, for every j≥ 1,ż_j(t) = λ_j z_j(t) + a_j u(t) + b_j v(t) + r_j(t),witha_j= f'(0)/L∫_0^L xe_j(x)dx,b_j= -1/L∫_0^L xe_j(x)dx,r_j(t)=∫_0^L r(t,x)e_j(x)dx.Setting, for every n∈^*,X_n(t)=[ u(t); z_1(t);⋮; z_n(t) ], A_n=[ 0 0 ⋯ 0; a_1 λ_1 ⋯ 0; ⋮ ⋮ ⋱ ⋮; a_n 0 ⋯ λ_n ] , B_n=[ 1; b_1; ⋮; b_n ] , R_n(t)=[0; r_1(t);⋮; r_n(t) ],we have thenẊ_n(t) = A_nX_n(t) + B_n v(t) + R_n(t).For every n∈^*, the pair (A_n,B_n) satisfies the Kalman condition.We computedet( B_n, A_nB_n, …, A_n^nB_n ) = ∏_j=1^n(a_j+λ_jb_j)VdM(λ_1,…,λ_n)where VdM(λ_1,…,λ_n) is a Van der Monde determinant, and thus is never equal to zero since the λ_i, i=1… n, are pairwise distinct. On the other part, using the fact that each e_j is an eigenfunction of A and belongs to H^1_0(0,L), we computea_j+λ_jb_j = 1/L∫_0^L x ( f'(0)-λ_j)e_j(x) dx = -1/L∫_0^L x e_j”(x) dx = - e_j'(L),and this quantity is never equal to zero since e_j(L)=0 and e_j is a nontrivial solution of a linear second-order scalar differential equation. Therefore the determinant (<ref>) is never equal to zero. By the pole-shifting theorem (Theorem <ref>), there exists K_n=( k_0,…,k_n ) such that the matrix A_n+B_nK_n has -1 as an eigenvalue of multiplicity n+1. 
Moreover, by the Lyapunov lemma (see Example <ref>), there exists a symmetric positive definite matrix P_n of size n+1 such thatP_n(A_n+B_nK_n) + (A_n+B_nK_n)^⊤ P_n = -I_n+1 .Therefore, as shown in Example <ref>, the function defined by V_n(X) = X^⊤ P_n X for any X∈^n+1 is a Lyapunov function for the closed-loop system Ẋ_n(t) = (A_n+B_nK_n)X_n(t): along this system we have d/dt V_n(X_n(t)) = -‖ X_n(t)‖_2^2. Here, ‖ ‖_2 stands for the Euclidean norm of ^n+1.Let γ>0 and n∈^* to be chosen later. For every u∈ and every z∈ H^2(0,L)∩ H^1_0(0,L), we setV(u,z)=γX_n^⊤ P_n X_n - 1/2⟨ z,Az⟩_L^2(0,L) = γX_n^⊤ P_n X_n - 1/2∑_j=1^∞λ_jz_j^2where X_n=(u,z_1,…,z_n)^⊤∈^n+1 and z_j=⟨ z,e_i⟩_L^2(0,L) for every j.Using that λ_n→-∞ as n→+∞, it is clear that, choosing γ>0 and n∈^* large enough, we have V(u,z)>0 for all (u,z)∈× (H^2(0,L)∩ H^1_0(0,L))∖{(0,0)}. More precisely, there exist positive constants C_3, C_4, C_5 and C_6 such thatC_3( u^2+‖ z‖_H^1_0(0,L)^2) ≤ V(u,z) ≤ C_4( u^2+‖ z‖_H^1_0(0,L)^2),V(u,z) ≤ C_5( ‖ X_n‖_2^2 + ‖ Az‖_L^2(0,L)^2 ) ,γ C_6‖ X_n‖_2^2 ≤ V(u,z) ,for all (u,z)∈× (H^2(0,L)∩ H^1_0(0,L)). Our objective is now to prove that V is a Lyapunov function for the system (<ref>) in closed-loop with the feedback control v=K_nX_n and u defined by u̇=v and u(0)=0. We computed/dt V(u(t),z(t))=-γ ‖ X_n(t)‖_2^2-‖ Az(t,·)‖_L^2^2-⟨ Az(t,·),a(·)⟩_L^2u(t) -⟨ Az(t,·),b(·)⟩_L^2K_nX_n(t) -⟨ Az(t,·),r(t,·)⟩_L^2+γ(R_n(t)^⊤ P_nX_n(t)+X_n(t)^⊤ P_nR_n(t)) .Let us estimate the terms at the right-hand side of (<ref>). Under the a priori estimates | u(t)|≤ B and ‖ z(t,·)‖_L^∞(0,L)≤ B, there exist positive constants C_7, C_8 and C_9 such that (dropping the dependence in t)|⟨ Az,a⟩_L^2u |+|⟨ Az,b⟩_L^2K_nX_n |≤1/4‖ Az‖_L^2^2+C_7‖ X_n‖_2^2, |⟨ Az,r⟩_L^2|≤1/4‖ Az‖_L^2^2+C_8V^2 ,‖ R_n‖_∞≤C_2/C_3 V ,|γ(R_n^⊤ P_nX_n+X_n^⊤ P_nR_n)|≤C_2/C_3√(C_6)√(γ) V^3/2 .We infer that, if γ>0 is large enough, then there exist C_10,C_11>0 such that d/dt V ≤ -C_10V+C_11V^3/2. We easily conclude the local asymptotic stability of the system (<ref>) in closed-loop with the control v=K_nX_n. The above local asymptotic stability can be achieved with other procedures, for instance, by using the Riccati theory (see <cit.> for Riccati operators in the parabolic case). However, the procedure developed above is much more efficient because it consists of stabilizing a finite-dimensional part of the system, namely, the part that is not naturally stable. We refer to <cit.> for examples and for more details. Actually, it is proved in that reference that, thanks to such a strategy, one can pass from any steady-state to any other one, provided that the two steady-states belong to a same connected component of the set of steady-states: this is a partially global exact controllability result. The main idea used above[This idea has been used as well to treat other parabolic problems, and even hyperbolic: it has been as well used in <cit.> for the 1D semilinear equation∂_tty = ∂_xx y+f(y),y(t,0)=0, y_x(t,L)=u(t). We first note that, if f(y)=cy is linear (with c∈ L^∞(0,L)), then, taking u(t) = -α∂_t y(t,L) with α>0 yields an exponentially decrease of the energy ∫_0^L ( ∂_t y(t,x)^2+∂_xy(t,x)^2 )dt, and moreover, the eigenvalues of the corresponding operator have a real part tending to -∞ as α→ 1. Therefore, in the general case, if α is sufficiently close to 1 then at most a finite number of eigenvalues may have a nonnegative real part. 
Using a Riesz spectral expansion, the above spectral method yields a feedback based on a finite number of modes, which stabilizes locally the semilinear wave equation, asymptotically to equilibrium.] is the following fact, already used in <cit.>. Considering the linearized system with no control, we have an infinite-dimensional linear system that can be split, through a spectral decomposition, in two parts: the first part is finite-dimensional, and consists of all spectral modes that are unstable (meaning that the corresponding eigenvalues have nonnegative real part); the second part is infinite-dimensional, and consists of all spectral modes that are asymptotically stable (meaning that the corresponding eigenvalues have negative real part). The idea used here then consists of focusing on the finite-dimensional unstable part of the system, and to design a feedback control in order to stabilize that part. Then, we plug this control in the infinite-dimensional system, and we have to check that this feedback indeed stabilizes the whole system (in the sense that it does not destabilize the other infinite-dimensional part). This is the role of the Lyapunov function V defined by (<ref>). 99 AgrachevSachkovA. Agrachev, Y. Sachkov, Control theory from the geometric viewpoint, Encyclopaedia of Mathematical Sciences, 87, Control Theory and Optimization, II, Springer-Verlag, Berlin, 2004.AmmariTucsnak K. Ammari, M. Tucsnak, Stabilization of second order evolution equations by a class of unbounded operators, ESAIM: Cont. Optim. Calc. Var. 6 (2001), 361–386.AndersonMooreB.D. Anderson, J.B. Moore,Optimal filtering, Prentice hall, Englewood Cliffs, 1979.BardosLebeauRauch C. Bardos, G. Lebeau, J. Rauch, Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary, SIAM J. Cont. Optim. 30 (1992), 1024–1065.Betts J.T. Betts, Practical methods for optimal control and estimation using nonlinear programming, Second edition, Advances in Design and Control, 19, SIAM, Philadelphia, PA, 2010.BonnardCaillauTrelat_COCV2007 B. Bonnard, J.-B. Caillau, E. Trélat, Second order optimality conditions in the smooth case and applications in optimal control, ESAIM Control Optim. Calc. Var. 13 (2007), no. 2, 207–236.BonnardChyba B. Bonnard, M. Chyba, Singular trajectories and their role in control theory, Math. & Appl. (Berlin), 40. Springer-Verlag, Berlin, 2003.BonnardFaubourgTrelat B. Bonnard, L. Faubourg, E. Trélat, Mécanique céleste et contrôle de systèmes spatiaux, Math. & Appl. 51, Springer Verlag (2006), 276 pages.BourdinTrelat L. Bourdin, E. Trélat, Pontryagin Maximum Principle for finite dimensional nonlinear optimal control problems on time scales, SIAM J. Control Optim. 51 (2013), no. 5, 3781–3813.BressanPiccoli A. Bressan, B. Piccoli, Introduction to the mathematical theory of control, AIMS Series on Applied Mathematics, 2, Springfield, MO, 2007.Brezis H. Brezis, Functional analysis, Sobolev spaces and partial differential equations, Universitext, Springer, New York, 2011.BrysonHo A. Bryson, Y.C. Ho, Applied optimal control, Hemisphere Pub. Corporation, 1975.CazenaveHaraux T. Cazenave, A. Haraux, An introduction to semilinear evolution equations, Translated from the 1990 French original by Yvan Martel and revised by the authors. Oxford Lecture Series in Mathematics and its Applications, 13. The Clarendon Press, Oxford University Press, New York, 1998.Cesari L. Cesari, Optimization – theory and applications. 
Problems with ordinary differential equations, Applications of Mathematics, 17, Springer-Verlag, 1983.Chow W.-L. Chow,Über Systeme von linearen partiellen Differentialgleichungen erster Ordnung, Math. Ann. 117 (1939), 98–105.Clarke F.H. Clarke, Optimization and nonsmooth analysis, Canadian Mathematical Society Series of Monographs and Advanced Texts, John Wiley & Sons, Inc., New York, 1983.Coron J.-M. Coron, Control and nonlinearity, Mathematical Surveys and Monographs, 136. American Mathematical Society, Providence, RI, 2007, xiv+426 pp. CoronTrelat J.-M. Coron, E. Trélat, Global steady-state controllability of 1-D semilinear heat equations, SIAM J. Control Optim. 43 (2004), no. 2, 549–569.CoronTrelat_CCM2006 J.-M. Coron, E. Trélat, Global steady-state stabilization and controllability of 1-D semilinear wave equations, Commun. Contemp. Math. 8 (2006), no. 4, 535–567.CurtainZwart R.F. Curtain, H. Zwart, An introduction to infinite-dimensional linear systems theory, Texts in Applied Mathematics 21, Springer-Verlag, New York, 1995.Andrea B. D'Andréa-Novel, M. De Lara, Control theory for engineers, Springer-Verlag, 2013.Dmitruk A. V. Dmitruk, On the development of Pontryagin's maximum principle in the works of A. Ya. Dubovitskii and A. A. Milyutin, Control Cybernet. 38 (2009), no. 4A, 923–957. Ekeland I. Ekeland, On the variational principle, J. Math. Anal. Appl. 47 (1974), 324–353.FursikovImanuvilov A. V. Fursikov, O. Yu. Imanuvilov, Controllability of evolution equations, Lecture Notes Ser., 34,Seoul National University, Research Institute of Mathematics, Global Analysis Research Center, Seoul, 1996, iv+163 pp.HumbertPrivatTrelat_CPDE2019 E. Humbert, Y. Privat, E. Trélat,Observability properties of the homogeneous wave equation on a closed manifold, Comm. Partial Differential Equations 44 (2019), no. 9, 749–772.Nagel K.-J. Engel, R. Nagel, One-parameter semigroups for linear evolution equations, Graduate Texts Math. 194, Springer-Verlag, 2000.Evans Lawrence C. Evans,Partial differential equations, Graduate Studies in Mathematics, 19, American Mathematical Society, Providence, RI, 1998.GaravelloPiccoli M. Garavello, B. Piccoli, Hybrid necessary principle, SIAM Journal on Control and Optimization 43 (2005), no. 5, 1867–1887.Grisvard P. Grisvard,Elliptic problems in nonsmooth domains,Monographs and Studies in Mathematics, 24, Pitman, Boston, MA, 1985.HaberkornTrelat T. Haberkorn, E. Trélat, Convergence results for smooth regularizations of hybrid nonlinear optimal control problems, SIAM J. Control Optim. 49 (2011), no. 4, 1498–1522.Hale J.K. Hale, Ordinary differential equations, Second edition, Robert E. Krieger Publishing Co., Inc., Huntington, NY, 1980. xvi+361 pp.Haraux A. Haraux, Une remarque sur la stabilisation de certains systèmes du deuxième ordre en temps, Portugal. Math. 46 (1989), no. 3, 245–258.HartlSethi R.F. Hartl, S.P. Sethi, R.G. Vickson, A survey of the maximum principles for optimal control problems with state constraints SIAM Rev. 37 (1995), no. 2, 181–218.HermesLaSalleH. Hermes, J.P. LaSalle, Functional analysis and time optimal control, Mathematics in Science and Engineering, Vol. 56, Academic Press, New York-London, 1969. Imanuvilov O. Yu. Imanuvilov, Controllability of parabolic equations, Sb. Math. 186 (1995), no. 6, 879–900.IoffeThikhomirov A.D. Ioffe, V.M. Tihomirov, Theory of extremal problems, Studies in Mathematics and its Applications, 6, North-Holland Publishing Co., 1979.Jacobson D.H. Jacobson, M.M. Lele, J.L. 
Speyer, New necessary conditions of optimality for control problems with state-variable inequality constraints, J. Math. Anal. Appl. 35 (1971), 255–284.Hurwitz A. Hurwitz, Uber die Bedingungen, unter welchen einer Gleichung nur Wurzeln mit negativen Reelen Teilen Besitzt, Math. Ann. 146 (1895), 273–284.Jean F. Jean, Control of nonholonomic systems: from sub-Riemannian geometry to motion planning, SpringerBriefs Math., Springer, Cham, 2014, x+104 pp.Jurdjevic V. Jurdjevic, Geometric control theory, Cambridge Studies in Advanced Mathematics, 52, Cambridge University Press, 1997.JurdjevicQuinn V. Jurdjevic, J.P. Quinn,Controllability and stability, J. Differential Equations 28 (1978), no. 3, 381–389.KailathT. Kailath,Linear Systems, Prentice-Hall, 1980.Kato T. Kato, Perturbation theory for linear operators, Reprint of the 1980 edition, Classics in Mathematics, Springer-Verlag, Berlin, 1995.Kautsky J. Kautsky, N.K. Nichols, Robust pole assignment in linear state feedback, Int. J. Control 41 (1985), 1129–1155.Khalil H.K. Khalil, Nonlinear systems, Macmillan Publishing Company, New York, 1992.Komornik V. Komornik, Exact controllability and stabilization, the multiplier method, Wiley, Masson, Paris, 1994.KwakernaakSivan H. Kwakernaak, R. Sivan,Linear optimal control systems, John Wiley, New-York, 1972.LasieckaTriggiani I. Lasiecka, R. Triggiani, Control theory for partial differential equations: continuous and approximation theories. I. Abstract parabolic systems, Encyclopedia of Mathematics and its Applications, 74, Cambridge University Press, Cambridge, 2000.LebeauRobbiano G. Lebeau, L. Robbiano, Contrôle exact de l'équation de la chaleur, Comm. Partial Differential Equations 20 (1995), 335–356.LeeMarkusE.B. Lee, L. Markus, Foundations of optimal control theory, John Wiley, New York, 1967.LiYong X. Li, J. Yong, Optimal control theory for infinite-dimensional systems Systems & Control: Foundations & Applications, Birkhäuser Boston, Inc., Boston, MA, 1995.Lions_interp J.-L. Lions, Espaces d'interpolation et domaines de puissances fractionnaires d'opérateurs, J. Math. Soc. Japan 14, no. 2 (1962), 233–241.Lions_SIREV J.-L. Lions, Exact controllability, stabilization and perturbations for distributed systems, SIAM Rev. 30 (1988), 1–68.Lions_HUM J.-L. Lions, Contrôlabilité exacte, perturbations et stabilisation de systèmes distribués, Tome 1, Recherches en Mathématiques Appliquées, 8, Masson, 1988.LionsMagenes J.-L. Lions, E. Magenes, Problèmes aux limites non homogènes et applications, Vol. 1, Travaux et Recherches Mathématiques, No. 17, Dunod, Paris, 1968.Maurer H. Maurer, On optimal control problems with bounded state variables and control appearing linearly, SIAM J. Cont. Optim. 15 (1977), 345–362.MicuZuazua S. Micu, E. Zuazua,Regularity issues for the null-controllability of the linear 1-d heat equation, Systems Control Lett. 60 (2011), no. 6, 406–413. NelsonE. Nelson, Analytic vectors, Ann. Math. 70 (1959), 572–615.Pazy A. Pazy, Semigroups of linear operators and applications to partial differential equations, Applied Mathematical Sciences, 44, Springer-Verlag, New York, 1983, viii+279Pontryagin L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, The mathematical theory of optimal processes,Inc. New York-London 1962, viii+360 pp.Rashevski P.K. Rashevski,About connecting two points of complete nonholonomic space by admissible curve, Uch. Zapiski Ped. Inst. Libknexta 2 (1938), 83–94.RebarberWeiss R. Rebarber, G. 
Weiss, Necessary conditions for exact controllability with a finite-dimensional input space, Syst. Cont. Letters 40 (2000), 217–227.Routh E.J. Routh, A treatise on the stability of a given state of motion, Macmillan & Co.,London, 1877.Rudin W. Rudin, Functional analysis, Second edition, International Series in Pure and Applied Mathematics, McGraw-Hill, Inc., New York, 1991, xviii+424 pp.Russell D.L. Russell, Controllability and stabilizability theory for linear partial differential equations: recent progress and open questions,SIAM Rev. 20 (1978), no. 4, 639–739.Sontag E.D. Sontag, Mathematical control theory. Deterministic finite-dimensional systems, Second edition, Texts in Applied Mathematics, 6, Springer-Verlag, New York, 1998, xvi+531Staffans O. Staffans, Well-posed linear systems, Encyclopedia of Mathematics and its Applications, 103, Cambridge University Press, Cambridge, 2005, xviii+776 pp.Trelat E. Trélat, Contrôle optimal. (French) [Optimal control] Théorie & applications. [Theory and applications], Mathématiques Concrètes. [Concrete Mathematics] Vuibert, Paris, 2005, vi+246 pp.Trelat_JOTA E. Trélat, Optimal control and applications to aerospace: some results and challenges, J. Optim. Theory Appl. 154 (2012), no. 3, 713–758.TrelatWangXu E. Trélat, G. Wang, Y. Xu,Characterization by observability inequalities of controllability and stabilization properties,Pure and Applied Analysis 2 (2020), no. 1, 93–122.Triebel H. Triebel,Interpolation Theory, Function Spaces, Differential Operators, North-Holland, Amsterdam, 1978.Triggiani_SICON1976 R. Triggiani, Extensions of rank conditions for controllability and observability to Banach spaces and unbounded operators, SIAM J. Control Optimization 14 (1976), no. 2, 313–338.TucsnakWeiss M. Tucsnak, G. Weiss, Observation and control for operator semigroups, Birkhäuser Advanced Texts, Birkhäuser Verlag, Basel, 2009, xii+483 pp.Vinter R. Vinter, Optimal control. Systems & Control: Foundations & Applications, Birkäuser, Boston, 2000.Wang_book G. Wang, L. Wang, Y. Xu, Y. Zhang, Yubiao, Time optimal control of evolution equations, Progr. Nonlinear Differential Equations Appl., 92, Subser. Control, Birkhäuser/Springer, Cham, 2018, xvi+334 pp.Weiss_IJM1989 G. Weiss, Admissible observation operators for linear semigroups, Israel J. Math. 65 (1989), no. 1, 17–43.Weiss_SICON1989 G. Weiss, Admissibility of unbounded control operators, SIAM J. Control Optim. 27 (1989), no. 3, 527–545.Zabczyk J. Zabczyk,Mathematical control theory: an introduction, Systems & Control: Foundations & Applications, Birkhäuser Boston, Inc., Boston, MA, 1992.
[Source record: arXiv:2312.15925v1 — Emmanuel Trélat, "Control in finite and infinite dimension" (math.OC), published 2023-12-26.]
A sharp inequality between the ℓ_p quasi-norm with 0<p≤1 and the ℓ_q-norm with q>1 is derived, which shows that the difference between ‖x‖_p and ‖x‖_q of an n-dimensional signal x is upper bounded by the difference between the maximum and minimum absolute value in x. The inequality could be used to develop new ℓ_p-minimization algorithms.

Sharp inequality for ℓ_p quasi-norm and ℓ_q-norm with 0<p≤1 and q>1

Zenghui Zhang

§ INTRODUCTION

The problem of recovering a high-dimensional sparse signal from a small number of linear measurements has attracted much attention <cit.><cit.>. Let x=(x_1,x_2,...,x_n)∈ℝ^n be the signal we need to recover. We say x is k-sparse if it has no more than k nonzero elements, i.e., ‖x‖_0≤k. Let Φ∈ℝ^{m×n} be the measurement matrix with m≪n. We have b=Φx+z, where z∈ℝ^n is a vector of measurement errors, and we assume that ‖z‖_2≤ε. The sparse recovery problem is to reconstruct x based on b and Φ. It can be solved by the following ℓ_0-minimization:

(P_0)  min_x ‖x‖_0,  s.t. ‖b-Φx‖_2≤ε.

However, (P_0) is an NP-hard problem and therefore cannot be solved efficiently <cit.>. As alternative strategies, many substitute models for (P_0) have been proposed by replacing ‖x‖_0 with functions that evaluate the desirability of a would-be solution to b=Φx. Because

‖x‖_0 = lim_{p→0^+} ∑_{i=1}^n |x_i|^p = lim_{p→0^+} ‖x‖_p^p,

the following ℓ_p-minimization with 0<p≤1 is often used <cit.><cit.><cit.>:

(P_p)  min_x ‖x‖_p,  s.t. ‖b-Φx‖_2≤ε.

The behavior of different norms is illustrated in Fig. 1. Researchers have shown that ℓ_p-minimization with 0<p<1 can recover a sparse signal from fewer measurements than the traditionally used ℓ_1-minimization. A central problem in (P_p) is to find the relationship between ‖x‖_p and ‖x‖_2. In 2010, Cai, Wang, and Xu <cit.> gave a norm inequality for ℓ_1 and ℓ_2:

0 ≤ ‖x‖_2 - ‖x‖_1/√n ≤ (√n/4)(max_{1≤i≤n}|x_i| - min_{1≤i≤n}|x_i|).

In this letter, a sharp inequality for ℓ_p and ℓ_q with 0<p≤1 and q>1 is presented, which yields a new inequality for ℓ_p and ℓ_2:

0 ≤ ‖x‖_2 - n^{1/2-1/p}‖x‖_p ≤ c_{p,2}√n (max_{1≤i≤n}|x_i| - min_{1≤i≤n}|x_i|),

where c_{p,2}=(1-p/2)(p/2)^{p/(2-p)}.

§ NORM INEQUALITY FOR ℓ_p AND ℓ_q

First, we give a lemma that will be used to prove the main result of this letter.

Lemma 1. Let

s(x,y) = (kx^q+(n-k)y^q)^{1/q} - n^{1/q-1/p}(kx^p+(n-k)y^p)^{1/p},

where x>y≥0, 0<p≤1, q>1, n and k are positive integers, and 1≤k<n. We have s(x,y) ≤ s(x-y,0).

[Proof] Let

h(t) = n^{-1/q} s(x-t, y-t) = ((k/n)(x-t)^q + ((n-k)/n)(y-t)^q)^{1/q} - ((k/n)(x-t)^p + ((n-k)/n)(y-t)^p)^{1/p}

with 0≤t≤y. Its derivative with respect to t is

h'(t) = -[(k/n)((x-t)/(y-t))^q + (n-k)/n]^{1/q-1}[(k/n)((x-t)/(y-t))^{q-1} + (n-k)/n] + [(k/n)((x-t)/(y-t))^p + (n-k)/n]^{1/p-1}[(k/n)((x-t)/(y-t))^{p-1} + (n-k)/n].

Consider the function g(x,q)=(ax^q+1-a)^{1/q-1}(ax^{q-1}+1-a) with x≥1 and 0≤a<1. We have

g'(x,q) = (1-q)a(1-a)x^{q-2}(x-1)(ax^q+1-a)^{1/q-2}.

For q>1, g'(x,q)≤0 and g(x,q) ≤ g(1,q) = 1. For 0<p≤1, g'(x,p)≥0 and g(x,p) ≥ g(1,p) = 1. Therefore, we have

h'(t) = g((x-t)/(y-t), p) - g((x-t)/(y-t), q) ≥ 0.

Thus, h(t) is increasing with t, which yields s(x,y) ≤ s(x-y,0).

Theorem 1. For any x=(x_1,x_2,...,x_n)∈ℝ^n, 0<p≤1 and q>1, we have

0 ≤ ‖x‖_q - n^{1/q-1/p}‖x‖_p ≤ n^{1/q} c_{p,q}(max_{1≤i≤n}|x_i| - min_{1≤i≤n}|x_i|),

with c_{p,q}=(1-p/q)(p/q)^{p/(q-p)}. The first equality holds if and only if |x_1|=|x_2|=...=|x_n|.
The second equality holds if and only if |x_1|=|x_2|=...=|x_n|, or m=n(p/q)^{pq/(q-p)} is a positive integer and x satisfies |x_{i_1}|=|x_{i_2}|=...=|x_{i_m}| for some 1≤i_1<i_2<...<i_m≤n and x_k=0 for k∉{i_1,i_2,...,i_m}.

[Proof] (1) The first part of the inequality. Suppose x_i≥0, i=1,...,n. We consider the function

f(p) = log(n^{-1/p}‖x‖_p) = (1/p) log((1/n)∑_{i=1}^n x_i^p).

Its derivative with respect to p is

f'(p) = -(1/p^2) log((1/n)∑_{i=1}^n x_i^p) + (1/p) [(1/n)∑_{i=1}^n x_i^p log x_i] / [(1/n)∑_{i=1}^n x_i^p]
      = -(1/(p^2 (1/n)∑_{i=1}^n x_i^p)) [((1/n)∑_{i=1}^n x_i^p) log((1/n)∑_{i=1}^n x_i^p) - (1/n)∑_{i=1}^n x_i^p log x_i^p].

Let g(x)=x log x, x>0. We have g''(x)=1/x>0, which means that g(x) is strictly convex. Thus,

((1/n)∑_{i=1}^n x_i^p) log((1/n)∑_{i=1}^n x_i^p) = g((1/n)∑_{i=1}^n x_i^p) ≤ (1/n)∑_{i=1}^n g(x_i^p) = (1/n)∑_{i=1}^n x_i^p log x_i^p.

Therefore, f'(p)≥0 and f(p) is increasing with p. If 0<p<q, we have

‖x‖_q - n^{1/q-1/p}‖x‖_p = n^{1/q}(e^{f(q)} - e^{f(p)}) ≥ 0.

The equality is attained if and only if x_1=x_2=...=x_n.

(2) The second part of the inequality. It is obvious that the result holds if |x_1|=|x_2|=...=|x_n|. Without loss of generality, we assume that x_1≥x_2≥...≥x_n≥0 and not all x_i are equal. Let

f(x) = ‖x‖_q - n^{1/q-1/p}‖x‖_p.

We have

∂f/∂x_i = x_i^{q-1}‖x‖_q^{1-q} - n^{1/q-1/p} x_i^{p-1}‖x‖_p^{1-p}

and

∂²f/∂x_i² = (q-1)x_i^{q-2}(∑_{i=1}^n x_i^q)^{1/q-1}(1 - x_i^q/∑_{i=1}^n x_i^q) + n^{1/q-1/p}(1-p)x_i^{p-2}(∑_{i=1}^n x_i^p)^{1/p-1}(1 - x_i^p/∑_{i=1}^n x_i^p).

If 0<p≤1 and q>1, then ∂²f/∂x_i² ≥ 0, which shows that f(x) is convex. Therefore, if we fix x_1 and x_n, f(x) must achieve its maximum on the boundary. This implies that the maximum has the form x_1=x_2=...=x_k and x_{k+1}=x_{k+2}=...=x_n for some 1≤k<n. Thus

f(x) ≤ (kx_1^q+(n-k)x_n^q)^{1/q} - n^{1/q-1/p}(kx_1^p+(n-k)x_n^p)^{1/p}.

By Lemma 1, we have

f(x) ≤ k^{1/q}(x_1-x_n) - n^{1/q-1/p}k^{1/p}(x_1-x_n).

Treat the right-hand side of the above as a function of k for k∈(0,n):

l(k) = k^{1/q}(x_1-x_n) - n^{1/q-1/p}k^{1/p}(x_1-x_n).

By taking the derivative, we have l'(k)=0 if k=n(p/q)^{pq/(q-p)}. Therefore

f(x) ≤ l(k) ≤ n^{1/q}(1-p/q)(p/q)^{p/(q-p)}(x_1-x_n).

The proof of Theorem 1 is completed.

§ DISCUSSIONS

Consider the inequality (<ref>). If we define the normalized ℓ_p quasi-norm of x as

‖x‖_p̄ = ((|x_1|^p+|x_2|^p+...+|x_n|^p)/n)^{1/p},

we have

0 ≤ ‖x‖_q̄ - ‖x‖_p̄ ≤ c_{p,q}(max_{1≤i≤n}|x_i| - min_{1≤i≤n}|x_i|).

Thus, the constant c_{p,q} is critical for measuring the sharpness of the inequality. The variation of c_{p,q} over 0<p≤1 for various values of q is illustrated in Fig. 2. From Fig. 2, we can draw the following conclusions about c_{p,q}.

(1) 0≤c_{p,q}≤1, which means that the difference between ‖x‖_q̄ and ‖x‖_p̄ is no more than the difference between the maximum and minimum absolute value in x. Also, we have lim_{p→0} c_{p,q}=0 and lim_{q→+∞} c_{p,q}=1. Therefore, the inequality (<ref>) is very sharp.

(2) For every fixed q, c_{p,q} is monotonically decreasing in p. This is easy to prove, because 1-p/q and (p/q)^{p/(q-p)} are both monotonically decreasing in p. Indeed, considering the function l(p)=(p/(q-p))ln(p/q), we have l'(p)=(q/(q-p)^2)(1+ln(p/q)-p/q)<0 for 0<p≤1 and q>1.

(3) For every fixed p, c_{p,q} is monotonically increasing in q. Considering the function l(q)=(p/(q-p))ln(p/q), we have l'(q)=(p/(q-p)^2)(-1-ln(p/q)+p/q)>0 for 0<p≤1 and q>1.

A direct consequence of Theorem 1 is that for any x∈ℝ^n and 0<p≤1,

0 ≤ ‖x‖_2 - n^{1/2-1/p}‖x‖_p ≤ c_{p,2}√n (max_{1≤i≤n}|x_i| - min_{1≤i≤n}|x_i|),

where c_{p,2} is defined in (<ref>).
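These properties are easy to probe numerically. The following sketch (ours, for illustration, not part of the original letter) checks the bound on random signals for p=0.5, q=2, and verifies the equality case of Theorem 1 for p=1, q=2, n=100, where m=n(p/q)^{pq/(q-p)}=25 is a positive integer:

```python
import numpy as np

def gap_and_bound(x, p, q):
    """lhs = ||x||_q - n^(1/q - 1/p) ||x||_p and the Theorem 1 upper bound."""
    n, ax = len(x), np.abs(x)
    lhs = np.sum(ax**q)**(1/q) - n**(1/q - 1/p) * np.sum(ax**p)**(1/p)
    c_pq = (1 - p/q) * (p/q)**(p / (q - p))
    rhs = n**(1/q) * c_pq * (ax.max() - ax.min())
    return lhs, rhs

rng = np.random.default_rng(0)
for _ in range(5):                                # random signals, p = 0.5, q = 2
    lhs, rhs = gap_and_bound(rng.standard_normal(100), 0.5, 2.0)
    assert -1e-9 <= lhs <= rhs + 1e-9

# Equality case for p = 1, q = 2, n = 100: m = n*(p/q)^(pq/(q-p)) = 25 equal
# nonzero entries and zeros elsewhere make both sides equal (here, 2.5).
x = np.zeros(100); x[:25] = 1.0
print(gap_and_bound(x, 1.0, 2.0))
```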
§ CONCLUSION

A new inequality between the ℓ_p quasi-norm and the ℓ_q-norm of an n-dimensional signal is proposed, and the conditions under which it holds with equality are given in the case where 0<p≤1 and q>1. Analysis shows that the new inequality is very sharp. Because the relationship between the ℓ_p quasi-norm and the ℓ_2-norm is critical for the research of ℓ_p-minimization problems, the new inequality could be used to develop new ℓ_p-minimization algorithms. The generalization of the norm inequality to arbitrary 0<p<q will be studied in the future.

Zenghui Zhang (School of Electronics, Information, and Electrical Engineering, Shanghai Jiao Tong University, P. R. China). E-mail: [email protected]

References:
Donoho, D. L.: 'Compressed sensing', IEEE Trans. Information Theory, 2006, 52, pp. 1289-1306.
Candès, E. J., and Tao, T.: 'Decoding by linear programming', IEEE Trans. Information Theory, 2005, 51, pp. 4203-4215.
Natarajan, B.: 'Sparse approximate solutions to linear systems', SIAM J. Comput., 1995, 24, pp. 227-234.
Gribonval, R., and Nielsen, M.: 'Sparse representations in unions of bases', IEEE Trans. Information Theory, 2003, 49, pp. 3320-3325.
Chartrand, R.: 'Exact reconstruction of sparse signals via nonconvex minimization', IEEE Signal Processing Letters, 2007, 14, pp. 707-710.
Foucart, S., and Lai, M.-J.: 'Sparsest solutions of underdetermined linear systems via ℓ_q-minimization for 0<q≤1', Appl. Comput. Harmon. Anal., 2009, 26, pp. 395-407.
Cai, T., Wang, L., and Xu, G.: 'New bounds for restricted isometry constants', IEEE Trans. Information Theory, 2010, 56, pp. 4388-4394.
[Source record: arXiv:2312.16394v1 — Zenghui Zhang, "Sharp inequality for ℓ_p quasi-norm and ℓ_q-norm with 0<p≤1 and q>1" (eess.SP), published 2023-12-27.]
D. J. G. Pearce ([email protected]), Dept. of Theoretical Physics, University of Geneva, 1211 Geneva, Switzerland; C. Thibault, Laboratoire Physico-Chimie Curie, Institut Curie, Université PSL, Sorbonne Université, CNRS UMR168, F-75248 Paris, France; Q. Chaboche, Laboratoire Physico-Chimie Curie, Institut Curie, Université PSL, Sorbonne Université, CNRS UMR168, F-75248 Paris, France; C. Blanch-Mercader ([email protected]), Laboratoire Physico-Chimie Curie, Institut Curie, Université PSL, Sorbonne Université, CNRS UMR168, F-75248 Paris, France.

Topological defects are ubiquitous on surfaces with orientational order fields. Here, we study equilibrium states generated by the feedback between geometry and nematic order on fluid membranes with an integer topological defect. When the Frank elastic constants associated with the orientational field dominate, the surfaces spontaneously deform toward a conical shape featuring an aster topological defect at its apex. In the case of vanishing tension, this is a solution to the normal force balance. We show that the stability of the surface depends on the balance of the elastic parameters and the phase of the defect. When boundary constraints are introduced, we observe three distinct modes of deformation. These deformation modes take advantage of the way in which splay, twist and bend distortions of the director field can be exchanged on a curved surface. We discuss how these deformation modes are distinguished by their response to the cost of twist distortions and by the existence of inverted solutions. Our findings show that fusion of +1/2 topological defect pairs can reduce the total energy of deformable surfaces. Finally, we argue how these results can be relevant for biological systems.

Passive defect driven morphogenesis in nematic membranes
C. Blanch-Mercader
January 14, 2024

The study of shape-shifting materials is often inspired by biological systems and has a broad range of applications. For example, deformable solids that have a prescribed orientational field, such as elastomers <cit.> or inflatable structures <cit.>, can achieve families of morphologies by differential growth <cit.> and have applications in soft robotics <cit.>. Here, we focus on fluid membranes with nematic order. Unlike the previous cases, these membranes can, in addition, flow, remodel, and adapt both their orientational field and shape. Fluid membranes achieve their shapes by minimizing a free-energy subject to physical constraints <cit.>. When orientational order is included, the stresses generated by topological defects can render a flat surface unstable to out-of-plane perturbations <cit.>. In addition, the geometrical properties of the surface, such as the extrinsic curvature, can in turn influence the dynamics of the orientational field and its equilibrium configurations <cit.>. Indeed, for prescribed geometries, couplings between the nematic field and the intrinsic or the extrinsic geometry have been shown to induce symmetry-breaking of equilibrium orientational configurations, or to influence topological defect dynamics <cit.>. Recent studies extended these works by, for instance, including out-of-equilibrium processes, such as active stresses or active torques, or varying surface geometry and topology <cit.>. We study rotationally symmetric systems, which therefore feature a +1 topological defect at their center; +1 topological defects have been associated with geometrical changes in natural and synthetic systems <cit.>.
We use cylindrical coordinates to define these shapes, where r is the radial coordinate, θ is the azimuthal coordinate and ζ is the axial coordinate, Fig. <ref>a. The outer circular boundary is placed at r=R; without loss of generality, we set the vertical offset by ζ(R)=0 and R=1. The orientation of the nematic field is described by a director field n̂ that represents the averaged local orientation on the surface. We consider that the director field is tangential to the surface. In addition, we consider that the system is deep into the nematic phase and impose |n̂|=1. This allows the director field to be defined by a scalar phase ψ(r), which corresponds to the angle between the director field and the curvilinear radial direction. The cases ψ=0 corresponds to the aster, 0<ψ<π/2 to a spiral, and ψ=π/2 to the vortex.The two-dimensional free-energy of a fluid membrane with nematic order is given byℱ = ∫_𝒜{k_B H^2+σ+k_1(∇·n̂)^2+ k_2(n̂·(∇×n̂))^2+k_3(n̂×(∇×n̂))^2}da.The first term is the bending energy with mean curvature H, and the second term represents surface tension. We disregard anisotropies in bending energy dependent on n̂. The other terms are the Frank free-energy associated respectively with splay, twist and bend distortions of the director field n̂ <cit.>. The corresponding elastic coefficients are: the bending rigidity k_B, the surface tension σ, and the reduced Frank constants k_1, k_2, and k_3 that are proportional to the membrane thickness. Because the director field is parallel to the surface, the effects of the saddle-splay distortions with elastic constant k_24 can be absorbed in a redefinition of the other Frank constants, <cit.>. As shown in Refs. <cit.> and subsequently extended for more general cases in <cit.>, the thin-film limit of the Frank free-energy results in contributions that couple the director field with both the intrinsic and the extrinsic geometry. In our case, Eq. (<ref>) takes the form derived in Refs. <cit.> and the expression of the free-energy (<ref>) for the special case of a surface of revolution is derived in Sec. 1.1 <cit.>. Other descriptions for surfaces with orientational order formulate the elastic energy associated with distortions of the director field by using the covariant derivative, and thereby neglecting the couplings with the extrinsic geometry <cit.>. For a discussion on the thin-film approximation of liquid crystals, we refer to Ref. <cit.>.We combine analytical and numerical approaches to study equilibrium configurations of the free-energy (<ref>). The former approach is restricted to director fields with a uniform phase ψ, see Sec. 1 and 2 in <cit.>. The latter approach discretizes the functions ζ(r) and ψ(r), Fig. <ref>a, and uses a Monte-Carlo algorithm, see Sec. 3 <cit.>. We first consider the case where all elastic constants associated with distortions of the director field are equal, i.e. k_1=k_2=k_3=k/3. The energy scale is set by the constraint k+k_B+σ=1. Fig. <ref>b shows the height at the center ζ(0) of the equilibrium states that were found for varying elastic parameters. When k_B orσ dominate, the minimal states are flat discs which minimize surface area and curvature. Because the Frank constants are equal, k_1=k_3, all director configurations with a constant phase ψ have equal energy, and thus are minimal states, <cit.>. However, when k dominates, the minimal state changes to an approximately conical surface with phase ψ=0, Fig. <ref>d. 
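As a basic consistency check on the free-energy ℱ above: on a flat disc, a +1 defect with uniform phase ψ produces a splay density cos²ψ/r², a bend density sin²ψ/r², and no twist, so the Frank part integrates to 2π(k_1 cos²ψ + k_3 sin²ψ) log(R/Δ), with Δ a short-distance cut-off. The short sketch below (our own illustration, with arbitrary parameter values) verifies this against a direct radial integration:

```python
import numpy as np

# Flat disc, +1 defect with uniform phase psi: splay ~ cos^2(psi)/r^2,
# bend ~ sin^2(psi)/r^2, zero twist (a consistency check, not the full model).
k1, k3, R, Delta = 1.0, 0.5, 1.0, 1e-3
r = np.linspace(Delta, R, 400000)
for psi in (0.0, np.pi / 4, np.pi / 2):           # aster, spiral, vortex
    density = (k1 * np.cos(psi)**2 + k3 * np.sin(psi)**2) / r**2
    F_num = np.sum(density * 2 * np.pi * r) * (r[1] - r[0])   # radial Riemann sum
    F_ana = 2 * np.pi * (k1 * np.cos(psi)**2 + k3 * np.sin(psi)**2) * np.log(R / Delta)
    print(f"psi = {psi:.3f}:  numeric {F_num:.4f}  vs  analytic {F_ana:.4f}")
```

This already exhibits the flat-disc energies used later for the aster (ψ=0) and the vortex (ψ=π/2).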
The morphological transition occurs via a symmetry-breaking process during which the director adopts an aster configuration. To better understand this spontaneous transition from flat to conical surfaces, we derived analytically the normal force balance equation for a surface of revolution with an embedded director field with uniform phase ψ, see Sec. 1.2 in <cit.> (see <cit.> for a general case). In the special case σ=0, this nonlinear ODE has a set of exact non-trivial solutions corresponding to conical surfaces of varying heights; this is valid even when the one-constant approximation is relaxed, see Sec. 1.3 in <cit.>. This gives us a subset of shapes described by ζ(r)=± m r, with m the height of the cone, that allow us to move continuously from a flat disk to a cone of varying height. Minimizing Eq. (<ref>) with respect to the height m and for a fix phase ψ, one obtains the non-trivial condition m^2=A(1-2sin(ψ)^2)-2-B/1+Asin(ψ)^2+ B,where A=4 k/3k_B and B=(4σ/k_B)(1-Δ^2)/log(1/Δ) are dimensionless parameters and Δ is a dimensionless cut-off lengthscale, see Sec. 2 in <cit.>. The logarithmic divergence at Δ=0 is commonly found in the context of topological defects <cit.>. Eq. (<ref>) leads to the existence condition for conical shapes A(1-2sin(ψ)^2)>2+B. The total free-energy associated with this minimal surface readsF_c/π k_B =√((1+Asin(ψ)^2+B)(Acos(ψ)^2-1))log(1/Δ).As observed, Eq. (<ref>) shows that as k/k_B or k/σ increase, the minimal surface varies from a flat disc (m=0) to a cylinder (m→∞). In addition, Eq. (<ref>) shows that the minimal phase corresponds to ψ=0, even when the Frank constants k_1=k_3 are equal. The selection mechanism for the phase arises from the coupling between the director field and the extrinsic curvature, which tend to align the director field with the minimal principal curvature <cit.>. In the case of conical surfaces, the aster is favoured because it features only splay distortions, and vanishing twist and bend distortions. Furthermore, the total Frank free-energy for the aster decreases as the height of the cone increases, leading to the spontaneous out-of-plane deformation of a surface. This is balanced by the increased curvature and area of the surface. The threshold for a flat disc with an aster topological defect to become unstable is set when its energy (i.e. 2π k_1 log(1/Δ)+σπ(1-Δ^2)) equals the energy (<ref>) for ψ=0, that is whenk/3 = k_B/2+σ(1-Δ^2)/log(1/Δ),shown with the magenta line, Fig. <ref>b. This condition equals the existence condition of conical shapes. A similar approach is used with varying elastic coefficients and defect phase to generate the green line in Fig. <ref>c.According to Eq. <ref>, the height of the surface is also controlled by the phase of the defect. For equal Frank constants, the existence condition for conical shapes can be satisfied when ψ<π/4. Beyond this point, the increased energy associated with twist and bend distortions of the director field on a conical surface is too great and a flat surface is favoured. To further explore this effect, we numerically studied the equilibrium shapes of surfaces with a prescribed uniform phase, ψ, and a varying ratio of bend and splay elastic constants, k_1/(k_1+k_3), Fig. <ref>c. In general, we find that vortices cannot deform a surface. In most cases, the aster generates the maximal out-of-plane deformation, except when k_3≫ k_1 and the maximal deformation occurs near ψ∼π/8, see Fig. <ref>e. 
This result is also predicted by the conical surface approximation, which is given by the green line in Fig. <ref>c. To further explore these results, we now focus on the effects of varying all three Frank elastic constants; we also relax the constraint that ψ is constant. Both ζ' and ψ are unconstrained at the boundary. Figs. <ref>a and <ref>b show respectively the height at the center, ζ(0), and the phase at the outer boundary, ψ(R), of the energy minimizing states for varying Frank coefficients. We found that, in agreement with Eq. <ref>, the height of the deformed membrane is determined by the magnitude of k_1. Two transitions from flat to deformed surfaces were identified, which are primarily dependent on the relative values of k_1 and k_3. The transition from a flat to a deformed surface with an aster is determined by the threshold (<ref>) with k_1=k/3 (magenta line in Figs. <ref>a-b). A flat disc with a vortex is linearly stable to out-of-plane deformations, see Sec. 1.4 in <cit.>. The red line in Fig. <ref>a-b can be found by comparing the energy of a flat disc with a vortex (i.e. 2π k_3 log(1/Δ)+πσ(1-Δ^2)) to the energy (<ref>) for ψ=0, that is whenk_1/k_B = (k_3/k_B+B/4)^2/(1+B) + 1/4. On a flat surface, if k_1>k_3 (k_1<k_3), the director field assumes a vortex (aster) configuration <cit.>. On a curved surface, however, an aster can be energetically favoured for values of k_1>k_3; indeed all deformed surfaces here feature an aster at their core. In most cases, this is a conical aster deformation with constant phase, Fig. <ref>c. However, when k_2≪ k_3<k_1 we observe a new state with a spatially varying phase which features a conical aster close to the core combined with a negative Gaussian curvature spiral region near the boundary, Fig. <ref>d. The spiral region reduces splay at the cost of additional bend, however the bend is then exchanged for twist on the negative Gaussian curvature surface when ψ has an intermediate value, which significantly reduces the free-energy density when k_2 is small. Now, we consider the system under boundary conditions that might be found in real biological systems. Hence, we fix ψ(R)=ψ_R as the phase at the boundary of the membrane and fix ζ'(R)=0, implying that the surface must be flat at its boundary [Note that the boundary condition ζ'(R)=0 enforces that the total Gaussian curvature, including the tip, is zero.]. Figs. <ref>a&b show the height and the phase at the center of the surface for a full range of the bend and splay constants (k_2=1/3) and the boundary phase ψ_R. On a flat surface, the value of k_1/(k_1+k_3) controls the phase at the core of the topological defect. By varying ψ_R, we can explore regimes where the preferred phase at the center is frustrated with the boundary. We observe five distinct states. First, when the boundary phase ψ_R is compatible with the dominant elastic coefficient, we observe flat asters and flat vortices. In addition to this there are three deformed configurations. When k_1≈k_3 and ψ_R≈0 we find pointy surfaces featuring an aster, where the core and boundary of the defect are broadly in phase, see Fig. <ref>c. As in Fig. <ref>, the deformation here reduces splay distortions and hence stabilizes an aster, even in some cases when k_1>k_3. In the bend dominated regime k_1<k_3 and for ψ_R≈π/2, we observe pointy deformations on which the phase transitions from a vortex at the boundary to an aster at the center, Fig. <ref>e. 
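The closed-form results above are straightforward to evaluate. In the sketch below (ours, for illustration, with arbitrary parameter values), A=4k/(3k_B) and B=(4σ/k_B)(1-Δ²)/log(1/Δ) as in the text; a cone exists where m²>0, and the flat-to-cone threshold for the aster corresponds to A=2+B:

```python
import numpy as np

def cone_height_sq(psi, A, B):
    """Squared height m^2 of the minimal cone; a cone exists only if positive."""
    return (A * (1 - 2 * np.sin(psi)**2) - 2 - B) / (1 + A * np.sin(psi)**2 + B)

def cone_energy(psi, A, B, kB=1.0, Delta=1e-2):
    """Total free energy F_c of the minimal cone, per the closed form above."""
    return (np.pi * kB * np.log(1 / Delta)
            * np.sqrt((1 + A * np.sin(psi)**2 + B) * (A * np.cos(psi)**2 - 1)))

A, B = 10.0, 0.5
for psi in (0.0, np.pi / 8, np.pi / 4):
    m2 = cone_height_sq(psi, A, B)
    print(f"psi = {psi:.3f}:  m^2 = {m2: .3f}  ({'cone' if m2 > 0 else 'flat'})")
print("F_c(psi=0):", cone_energy(0.0, A, B))

# The aster instability threshold k/3 = kB/2 + sigma(1-Delta^2)/log(1/Delta)
# is A = 2 + B, exactly where m^2(psi=0) changes sign:
print("m^2 at A = 2 + B:", cone_height_sq(0.0, 2.0 + B, B))
```

Scanning ψ in the same way is consistent with the observation above that the aster (ψ=0) is the selected phase whenever a cone exists.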
In this configuration, the transition in the phase is localized to a ring of negative Gaussian curvature where bend distortions can be reduced at the cost of twist, through the mechanism previously highlighted in Fig. <ref>d. In the splay-dominated regime k_1>k_3 and for ψ_R≈0, we observe a domed deformation on which the phase transitions from aster at the boundary to vortex at the center, Fig. <ref>d. Close to the boundary, the surface sharply deforms toward a cylindrical shape, abruptly reducing the splay at the cost of bend. On the cylindrical surface, the director field transitions from an aster to a vortex, further reducing splay and introducing additional twist and bend distortions. The surface flattens toward the top, featuring a vortex. Overall, this buckling mode is able to achieve the highest degree of deformation. Twist distortions are only possible in spiral regions of the defect, where ψ has an intermediate value; this is only observed in the deformation modes that feature a transition in the phase. Therefore, the value of k_2 can be used to mediate the deformation of these surfaces. The splay-dominated deformation mode, Fig. <ref>d, features twist on an extended cylindrical section of the surface. When k_2 is increased, the cylindrical area is reduced and the spiral is pushed into the region of positive Gaussian curvature, Fig. <ref>a&c. This reduces the induced twist, and the height of the deformation decreases asymptotically toward a finite value, cyan curve in Fig. <ref>a. The radial transition in the phase of the bend-dominated deformation, Fig. <ref>e, lies entirely within the negative Gaussian curvature region, which introduces increased twist. When k_2 is increased, the only way to reduce the twist is to reduce the negative Gaussian curvature, which in turn reduces the magnitude of the deformation and eventually suppresses this mode, magenta curve in Fig. <ref>a. Conversely, the conical aster deformation, Fig. <ref>c, features no transition in the phase, hence no twist, and is approximately unaffected by increasing k_2, black curve in Fig. <ref>a. In fact, when k_2 is reduced to near zero, the director will locally distort to reduce bend and introduce twist, Fig. <ref>b. We additionally observe bistability for these solutions. All surfaces shown thus far are identical under the reflection ζ→-ζ. This symmetry can be broken by setting the boundary condition ζ'(R)≠0. All three previously identified buckling modes have a stable configuration in which ζ' changes sign closer to the center, see Figs. <ref>d-f.

Finally, this study reveals a novel mechanism for the spontaneous fusion of half-integer topological defects on fluid membranes. Consider the special case σ=0 and k_1=k_2=k_3=k/3. It is known that the energy-minimizing nematic field with a total charge of +1 on a flat disc has two +1/2 defects. In this case, the energy scales, up to numerical pre-factors of order 1, as F_f∼π(k/3) log(R/Δ) <cit.>. However, the free-energy of a conical surface with an aster at the apex scales sub-linearly with k (i.e., F_c∼π k_B √(4k/(3k_B)-1) log(R/Δ)). Therefore, a critical threshold (k/3k_B)_c=2+√3 arises from the balance between the energies of these two states. Thus, if (k/3k_B)>(k/3k_B)_c, the pair of +1/2 topological defects can spontaneously fuse by deforming the surface out of plane. Indeed, past works have shown that this process can occur for prescribed conical surfaces <cit.>. Next, we discuss the relevance of this mechanism in biological systems, see Sec. 5 in <cit.>.
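Before turning to the biological estimates, the crossover quoted above is easy to locate numerically (our own check; the order-one pre-factors and the common factor π k_B log(R/Δ) are dropped, and κ denotes k/(3k_B)):

```python
import numpy as np

kappa = np.linspace(0.5, 6.0, 100001)        # kappa = k/(3 kB), with sigma = 0
F_flat_pair = kappa                          # two +1/2 defects on the flat disc
F_cone_aster = np.sqrt(4 * kappa - 1)        # +1 aster on the optimal cone
first = np.argmax(F_cone_aster < F_flat_pair)
print("fusion favoured for kappa >", kappa[first])   # ~ 2 + sqrt(3) = 3.7321
```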
For suspensions of cytoskeletal filaments, the Frank constant ranges from k/h∼0.1-1 pN for actin to 100 pN for microtubules <cit.>, where h=0.1 μm is the average thickness of a cytoskeletal layer <cit.>. In the bending-dominated regime (k_B≫σR^2), taking the bending modulus of a vesicle k_B∼10-100 pN·nm <cit.>, we obtain a ratio k/k_B∼1 for actin and k/k_B∼10^2-10^3 for microtubules. In the tension-dominated regime k_B≪σR^2, the threshold is controlled in addition by the system geometry via R. Taking the tension of a vesicle to be 10^-3 N/m <cit.>, we obtain that the critical lengthscale R_c=√(k/σ) is comparable to the thickness h and therefore negligible. These estimates suggest that in low-tension regimes, biological systems can induce out-of-plane deformations via the mechanism described above. In fact, this phenomenon may have been observed in Ref. <cit.>, which studied a suspension of microtubules encapsulated in vesicles. In this work, the authors report that lowering the surface tension leads to spindle-like vesicles with two +1 defects localized at the spindle poles. In addition, the equilibrium deformation by topological defects can also influence shape dynamics in systems driven by out-of-equilibrium processes <cit.>. For instance, because asters tend to facilitate surface deformations, they can act as nucleation points for out-of-plane deformations on fluid membranes, bacterial biofilms, or cell monolayers.

We are grateful to Isabelle Bonnet, Mathieu Dedenon, Karsten Kruse, Jean-François Joanny, Jacques Prost, and Feng-Ching Tsai for insightful discussions. DJGP acknowledges funding from the Swiss National Science Foundation under starting grant TMSGI2 211367. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847718. Q.C. acknowledges funding from the Institut Curie EuReCa PhD Programme. This publication reflects only the authors' view, and the European Research Agency is not responsible for any use that may be made of the information it contains. References cited in the SI: <cit.>.
[Source record: arXiv:2312.16654v1 — D. J. G. Pearce, C. Thibault, Q. Chaboche, C. Blanch-Mercader, "Passive defect driven morphogenesis in nematic membranes" (cond-mat.soft, physics.bio-ph), published 2023-12-27.]
A Self Supervised StyleGAN for Image Annotation and Classification with Extremely Limited Labels

Dana Cohen Hochberg, Hayit Greenspan, Member, IEEE, Raja Giryes, Member, IEEE

D.C. Hochberg is with the School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 6997801, Israel (email: [email protected]). H. Greenspan is with the School of Biomedical Engineering, Tel-Aviv University, Tel-Aviv 6997801, Israel (email: [email protected]). R. Giryes is with the School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 6997801, Israel (email: [email protected]). This work was supported by the Ministry of Science and Technology, Israel. The work of RG is supported by ERC StG under Grant 757497. © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. DOI: https://doi.org/10.1109/TMI.2022.3187170

January 14, 2024

The recent success of learning-based algorithms can be greatly attributed to the immense amount of annotated data used for training. Yet, many datasets lack annotations due to the high costs associated with labeling, resulting in degraded performance of deep learning methods. Self-supervised learning is frequently adopted to mitigate the reliance on massive labeled datasets, since it exploits unlabeled data to learn relevant feature representations. In this work, we propose SS-StyleGAN, a self-supervised approach for image annotation and classification suitable for extremely small annotated datasets. This novel framework adds self-supervision to the StyleGAN architecture by integrating an encoder that learns the embedding to the StyleGAN latent space, which is well known for its disentangled properties. The learned latent space enables the smart selection of representatives from the data to be labeled for improved classification performance. We show that the proposed method attains strong classification results using small labeled datasets of sizes 50 and even 10. We demonstrate the superiority of our approach for the tasks of COVID-19 and liver tumor pathology identification.

Index Terms: Classification, Pathology Identification, StyleGAN, Self-Supervised Learning, Representative Selection.
§ INTRODUCTION

Deep learning has achieved great success in various computer vision tasks, particularly in supervised learning tasks such as classification. This success is attributed to the large amounts of labeled training data that enable the network to learn meaningful feature representations. Unfortunately, it is often difficult to obtain a satisfactory amount of labeled images, since obtaining them is both expensive and time-consuming. Moreover, in the medical field, annotation of medical images additionally requires domain expertise. Therefore, it is often the case in medical image analysis that only extremely small labeled datasets with positive pathology cases are available <cit.>. This has led to an increase in the search for approaches that perform well with limited annotations. To overcome this problem, current research includes methods such as self-supervised learning <cit.>, few-shot learning <cit.>, and active learning <cit.>.

Active learning addresses the above limitation by querying and annotating the most informative samples from the unlabeled data to achieve the highest classification performance at the lowest labeling cost. Traditionally, active learning is an iterative process where a model is updated in each iteration, and points are selected to be labeled from the unlabeled data in accordance with a set of heuristics. Active learning methods can generally be divided into three main approaches: uncertainty-based <cit.>, representation-based <cit.>, and their combination <cit.>. Uncertainty-based methods label samples from the unlabeled data pool that the model is least confident about, hence enhancing the model's performance when they are added to the labeled pool. Representation-based approaches label the most representative samples of the unlabeled data, thus increasing the diversity of the labeled data pool.

Self-supervised learning has gained considerable popularity in recent years, demonstrating promising results on a broad range of computer vision tasks. The advantage of these techniques is that they leverage unlabeled images from the target data domain during a pretraining phase that learns relevant representations of the data, which boosts the performance of various learning tasks <cit.>. Some works use Generative Adversarial Networks (GANs) in conjunction with self-supervised tasks <cit.> and establish promising results in unconditional image generation. GANs learn the feature representations of the data in an unsupervised manner by generating images from input latent codes while training adversarially. The input latent code acts as the feature representation of the generated image, since it contains the necessary information for synthesizing the image. To obtain this representation, an encoder can be added as self-supervision. Adding self-supervision to GANs was shown to improve both the image quality and training stability <cit.>. It is generally used as a regularization mechanism for the discriminator, which consequently aids the generator in producing higher quality synthesis and better capturing the global structures <cit.>.

The resolution and quality of synthetic images generated by GANs have significantly improved in recent years. StyleGAN, introduced by Karras et al. <cit.>, proposes a novel style-based generator that achieves state-of-the-art performance in high-resolution image synthesis.
Aside from its unprecedented generation capabilities, StyleGAN learns an intermediate latent space which was shown to contain disentangled properties and to enable control over the synthesis process <cit.>. These properties motivated many researchers to embed into the latent space of StyleGAN <cit.> for latent-space manipulation and various image-to-image translation tasks <cit.>.

In this work, we introduce Self-Supervised StyleGAN (SS-StyleGAN), a self-supervised feature representation learning strategy for image annotation and classification that requires minimal labeled data. We present a novel framework that seamlessly combines StyleGAN with an encoder that learns the embedding to its intermediate latent space and leverages its semantically meaningful structure for selecting representatives for labeling and classification. The encoder is incorporated within the StyleGAN architecture, thereby adding self-supervision to the framework while learning the feature representations of the data. Consequently, by regularizing the discriminator, the generator is prompted to produce a higher quality synthesis. Once trained, all images are mapped to their latent representations, where we employ a smart labeling scheme based on the farthest point sampling (FPS) algorithm <cit.> to select distinct representatives for labeling to achieve high classification performance. An important advantage of our approach compared to previous self-supervised methods is that the semantic structure of the latent space induced by StyleGAN allows us to replace random sampling with the FPS-based smart labeling algorithm that exploits the favorable structure of the latent space. Combining this latent space with our sampling algorithm allows us to obtain superior classification results. Additionally, unlike active learning methods, our approach does not require retraining between labeling iterations but rather selects all representatives at once. Our method is of high medical significance, since it allows radiologists to label fewer images while maximizing the classification performance. We validate our method on two medical image classification tasks: COVID-19 and liver tumor classification. We demonstrate the superiority of our method over current state-of-the-art classification, self-supervised learning, and active-learning approaches. The main contributions of our work may be summarized as:
* A new self-supervised approach for annotation and classification with very small amounts of labeled data.
* A novel use of the StyleGAN latent space for the task of smart annotation.
* A demonstration of the strength of the developed system for the tasks of COVID-19 and liver tumor pathology identification.

§ BACKGROUND

The following section includes an overview of the background and related work. First, we introduce self-supervised learning techniques suitable for training with a limited amount of images. We then describe the StyleGAN latent spaces and strategies for embedding into these spaces. Finally, we present an overview of several active learning approaches, some of which are used as baselines in our experiments.

§.§ Self-Supervised Learning

Many existing deep learning approaches are limited by the lack of annotations, which leads to poor performance due to over-fitting and biased results. Self-supervised learning methods offer a solution that eliminates the necessity of labeled data by learning the underlying structure of the data while solving some auxiliary pretext task derived from the input data itself.
These representations can then be fine-tuned with a few labels for a supervised downstream task such as image classification, object detection, semantic segmentation, etc. Recently, contrastive-loss-based learning methods have gained popularity in self-supervised computer vision tasks and achieve impressive performance for learning with small amounts of annotated data <cit.>. Instance-discrimination techniques like MoCo <cit.> and SimCLR <cit.> apply contrastive learning to the entire image instance with the objective of keeping the learned representations invariant under different types of image augmentations. By adding a linear classifier, they achieved classification accuracy that approaches that of fully supervised learning methods. These approaches have been leveraged in the medical domain to dramatically improve label efficiency for semi-supervised learning <cit.>. Some works adapted contrastive learning to the medical domain <cit.>, while others designed task-specific pretexts <cit.>. For example, Sowrirajan et al. <cit.> proposed an adaptation of MoCo for improving the classification of chest X-ray models, while Azizi et al. <cit.> demonstrated the effectiveness of contrastive self-supervised approaches as a pretraining strategy for medical image classification. §.§ StyleGAN A considerable improvement has been made in image quality and resolution since GANs were first introduced by Goodfellow et al. <cit.>. In recent years, progressive and style-driven approaches have been proposed, setting new standards for image synthesis. StyleGAN <cit.> extends the concept of progressively growing GANs, which synthesize images by continuously increasing the image resolution throughout training <cit.>. StyleGAN introduces a novel generator architecture that yields state-of-the-art results in high-resolution image synthesis and provides a new technique for controlling the synthesis process. StyleGAN comprises several latent spaces. The generator learns a mapping network M which embeds an input latent code z ∈ Z to a vector in the intermediate latent space w ∈ W, also known as a "style". W defines the styles that are integrated within the generator architecture. While the distribution of the input vector z is fixed, the distribution of w is learned from the training data itself throughout the training. Therefore, there are no explicit constraints on the structure of W, and it learns to capture and disentangle the inherent structure of the training data <cit.>. As a result, w latent vectors are more semantically meaningful than z <cit.>. In recent years, multiple studies have explored embedding real-world images into the latent space of GANs. Embedding into these spaces enables control of the synthesis and therefore opens the door to many image-to-image translation tasks. GAN inversion is used to invert real images back into their latent representations and reconstruct the image via a pretrained generator. Leading approaches for the inversion task include latent vector optimization and using an encoder to map images to the latent space. iGAN <cit.> obtains the embedding of an image by continuous optimization, while BiGAN <cit.> adds an encoder network to the GAN architecture. This framework is trained jointly, with the discriminator's objective being to classify whether a latent code comes from a generated image or is the encoded code of a real image. However, DCGAN <cit.> is used as the generator in both works, resulting in limited quality. Feigin et al.
<cit.> proposed GAEL, a generic architecture that embeds an encoder within the discriminator network with the purpose of creating a latent code from the discriminator's input images. They demonstrated an improvement in the generated images for both the vanilla GAN and the Wasserstein GAN, and showed superiority over BiGAN and other similar methods. For higher-resolution images, many works use StyleGAN for the embedding due to its high-quality synthesis as well as its semantically meaningful latent space, which can be utilized for many image-to-image translation tasks <cit.>. These works typically embed into the W space <cit.> or into the extended latent space W+ <cit.>, which is a concatenation of different w vectors, one for each scale of the generator architecture. Note that inverting into W or W+ was found to be easier than into Z and achieved better reconstructions and editing <cit.>. Works embedding into the StyleGAN latent space rely either on direct optimization <cit.>, encoder-based methods <cit.>, or their combination <cit.>. Image2StyleGAN <cit.> optimizes a separate style for each scale by projecting the image into the extended latent space W+. Pidhorskyi et al. proposed ALAE <cit.>, a StyleGAN-based, progressively growing autoencoder architecture, where the encoder is trained alongside the StyleGAN generator to generate latent codes in the W space. Furthermore, Tov et al. introduced e4e <cit.>, an encoder designated for the task of image editing that maps to a latent code comprised of a series of style vectors with a distribution similar to that of W. Optimization methods typically lead to better reconstruction quality than encoder-based methods, but they require significantly more time. Nonetheless, existing encoder-based methods require the addition of an encoder network, which adds computational cost as well as additional training time due to the separate training of StyleGAN and the encoder. While these works focus on the tasks of image reconstruction and manipulation, we leverage the properties of the StyleGAN latent space for the entirely different task of image annotation and classification. Specifically, we embed directly into the W latent space, which contains the semantic content and maintains the same class as the input image. Inspired by GAEL <cit.>, we integrate an encoder within the StyleGAN architecture for the task of self-supervised learning. In this combined framework, the latent space is encouraged to be well suited for inversion while maintaining its disentangled properties, thereby enabling us to encode images into the latent space for the classification task. While the work in <cit.> focused on improving image generation quality, our work focuses on self-supervised learning and classification. Our unique combination of the encoder with StyleGAN, which both exploits its semantically meaningful learned latent space and allows projecting an image directly into this space, provides us with a very strong tool for self-supervision. Moreover, compared to existing StyleGAN encoders <cit.>, our encoder is integrated into the discriminator and requires only very little additional computational overhead or training time. §.§ Active Learning In recent years, several approaches for active learning have been introduced. These techniques can be partitioned into pool-based and query-synthesis methods. Pool-based algorithms select the most informative samples based on multiple sampling strategies.
In query-synthesis methods, by contrast, generative models are used to generate the most informative samples rather than querying them from the unlabeled data <cit.>. As pool-based approaches are more pertinent to our research, we focus our review on those studies. Pool-based methods select new training samples from a given pool of unlabeled data by evaluating the importance of each image based on a given criterion. These methods can be divided into three main categories: uncertainty-based <cit.>, representation-based <cit.>, and a combination of the two <cit.>. Uncertainty-based methods select the samples from the unlabeled data pool about which the model is least confident, as quantified by an uncertainty metric. Common uncertainty-based sampling heuristics include entropy, least confidence, and margin sampling techniques <cit.>. Gal et al. <cit.> use multiple forward passes with Monte Carlo Dropout to estimate the uncertainty. Yoo et al. <cit.> propose the Learning Loss method, which attaches a loss prediction module to a task learner to predict the losses of the unlabeled samples. The losses are thus predicted for the entire unlabeled pool, and the samples with the top-K losses are labeled. Representation-based approaches select the data points that will most increase the diversity of the labeled pool. To this end, a representation of the data is extracted from the model, over which the distribution is computed. Some works optimize the selection of the data by imposing a diversity constraint <cit.> or use clustering to select the most representative samples <cit.>. The Coreset algorithm <cit.> determines representative samples by minimizing the distance between labeled and unlabeled data using the intermediate feature information of a trained deep neural network. Nevertheless, Coreset's optimization algorithm does not scale well as the number of classes and unlabeled samples increases. Moreover, distance-based representation methods, such as Coreset, do not cope well with high-dimensional data <cit.>. Hybrid approaches combine uncertainty- and representation-based methods, where the samples with the highest uncertainty are selected to also be the most representative samples in a batch. Recently, variational autoencoders (VAEs) have been used in conjunction with adversarial training <cit.> to determine the informativeness of unlabeled samples. Sinha et al. <cit.> proposed Variational Adversarial Active Learning (VAAL), which trains a VAE alongside an adversarial network to discriminate between the labeled and unlabeled samples. § METHOD Our method consists of several stages: (i) First, an encoder is trained alongside StyleGAN to learn the latent representations of the data; (ii) Next, all images are embedded into the latent space, where T-Distributed Stochastic Neighbor Embedding (t-SNE) <cit.> is performed; (iii) The FPS algorithm <cit.> is applied to select, from the unlabeled images, the instances that are most distinct in the latent space for labeling; (iv) Finally, the unlabeled images are classified according to the nearest labeled neighbor (NN) in the latent space <cit.>. §.§ SS-StyleGAN Architecture Our proposed architecture is based on the original implementation of StyleGAN2 <cit.> with the addition of self-supervision. We incorporate an encoder into the discriminator architecture of StyleGAN2 (denoted StyleGAN for the rest of the paper) that aims to estimate the latent code for the discriminator's input images; a sketch of this shared architecture is given below.
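To make the shared-backbone idea concrete, the following is a minimal PyTorch sketch of a discriminator and encoder that share convolutional layers and branch into two small independent heads. The layer counts, widths, and names are illustrative assumptions, not the exact StyleGAN2 architecture.

```python
import torch
import torch.nn as nn

class SharedDiscriminatorEncoder(nn.Module):
    """Discriminator and encoder sharing a convolutional backbone.

    The backbone plays the role of the shared resolution layers; the two
    small heads correspond to the discriminator and encoder sub-networks.
    Layer counts and widths are illustrative, not the StyleGAN2 values.
    """

    def __init__(self, w_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(  # shared layers and weights
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        feat = 256 * 4 * 4
        self.disc_head = nn.Sequential(  # real/fake logit
            nn.Linear(feat, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
        self.enc_head = nn.Sequential(   # estimate of w = Enc(x)
            nn.Linear(feat, 256), nn.LeakyReLU(0.2), nn.Linear(256, w_dim))

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.disc_head(h), self.enc_head(h)
```

Because the two heads reuse one backbone, the encoder adds only a small fraction of the parameters and compute of a standalone encoder network, which is the motivation for this design.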
The encoder framework is trained simultaneously with StyleGAN, thereby adding self-supervision to the network. The encoder is integrated via shared layers and weights, whose output is then forwarded into two small, similar, independent sub-networks, one for the discriminator and one for the encoder. Fig. <ref> illustrates our proposed framework. The number of shared layers was determined empirically to achieve optimal results in the training. Refer to Section <ref> for the analysis of the impact of the number of shared layers on the performance of StyleGAN. The latent space chosen for the embedding is the intermediate latent space of StyleGAN, W, owing to its disentangled properties, which we utilize later for both the annotation and classification tasks. StyleGAN is trained simultaneously with the encoder; therefore, our encoder is able to learn meaningful representations while adding very little training time or computational cost. Our network is trained with a weighted combination of the original StyleGAN loss and two additional losses for the encoder. The first loss is derived from the log-likelihood loss in the GAN inversion presented in <cit.>. The StyleGAN generator, G, maps each latent code w ∈ R^M from the intermediate latent space to an image x ∈ R^N×N, i.e., x = G(w), and the encoder, Enc(x), embeds an input image x into the latent space W, i.e., w = Enc(x). Σ(x) represents an output of the encoder that acts as the variance of the estimation of the encoded images, Enc(x). The second loss is the mean squared error (MSE) between the original image, x_real, and the reconstructed image G(Enc(x_real)). The overall encoder loss is ℒ_Enc = -λ E_x,w ∼ P(x,w)[log(P(w|x))] + β ℒ_MSE(x_real, G(Enc(x_real))), where log(P(w|x)) = -0.5 log((2π)^M |Σ(x)|) - 0.5 (w - Enc(x))^T Σ^-1(x) (w - Enc(x)). §.§ Representative-based Nearest Neighbor Classification One of the main advantages of the style-based generative framework is its well-behaved, disentangled feature space, which can be exploited for downstream tasks. For the task of classification, this is especially profitable, as only a few annotations, selected by a smart sampling algorithm, are required to generalize over the entire space. Once SS-StyleGAN is trained, all images are embedded into the latent space, where t-SNE <cit.> and FPS <cit.> are employed. t-SNE enables the representation of high-dimensional data in a low-dimensional space while maintaining both the local and global data structures and emphasizing the similarity between the data points. This attribute, along with its non-linearity, has enabled t-SNE to surpass other dimensionality reduction techniques, such as PCA (principal component analysis) <cit.>, in various applications including classification <cit.>. We apply t-SNE on the encoder's output, the intermediate 512-dimensional latent vector, to project it to a 2D space. FPS is then employed to iteratively sample a set of k representatives for labeling. FPS starts by selecting a random point and iteratively selects the point with the largest geodesic distance to the ones previously selected, until k points (the number of samples to be annotated) are chosen. The points selected by FPS are labeled, and the others are classified based on their nearest labeled sample with a nearest neighbor classifier <cit.> applied on the 2D t-SNE space; a sketch of this pipeline is given below. An illustration of the annotation and classification workflow is provided in Fig. <ref>.
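The following is a minimal sketch of the labeling pipeline (t-SNE projection, FPS selection, NN classification), assuming the 512-dimensional latent codes are available as a NumPy array. Plain Euclidean distances in the 2D t-SNE space stand in for the geodesic distances, and `oracle_labels` is a hypothetical stand-in for the annotator.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier

def farthest_point_sampling(points: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Iteratively pick k indices, each maximizing the distance
    to the already selected set (Euclidean distance in t-SNE space)."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(points)))]      # random starting point
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))                   # farthest from selected set
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(selected)

def label_and_classify(latents: np.ndarray, oracle_labels: np.ndarray, k: int):
    """Project latent codes to 2D, label k representatives, classify the rest."""
    z2d = TSNE(n_components=2, perplexity=10).fit_transform(latents)
    rep = farthest_point_sampling(z2d, k)            # indices sent to the annotator
    clf = KNeighborsClassifier(n_neighbors=1).fit(z2d[rep], oracle_labels[rep])
    return clf.predict(z2d)                          # labels for all images
```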
§.§ Datasets To demonstrate our method, we examined two medical imaging datasets. Examples of images from each dataset are presented in Fig. <ref>. §.§.§ COVID-19 Dataset Two small, publicly available annotated COVID-19 datasets were combined: The first, the MedSeg COVID-19 CT dataset <cit.>, contains 9 axial volumetric CT scans with both COVID-19 positive and negative slices. The second dataset <cit.> consists of 20 CT scans of patients who are either COVID-19 positive or negative but with other types of pneumonia. Axial slices between 0.22 and 0.9 were used from both datasets, leading to 3,036 slices in total, which are divided into COVID-19 positive and negative slices. Preprocessing included windowing the Hounsfield unit (HU) values to the range [-1024, 325], resizing the images to a resolution of 512×512, and replicating each slice to create the 3 channels of an RGB image; a sketch of this preprocessing is given at the end of this section. §.§.§ LiTS Dataset We use computed tomography (CT) scans from the publicly available training dataset of the Liver Tumor Segmentation (LiTS) challenge <cit.>. The images have a resolution of 512×512 and contain slices from the entire abdomen region, which we separated into two groups, slices with and without liver tumors, as we use this dataset for a binary classification task. To maintain only the relevant organs, the HU values were windowed to the range [-300, 300]. Additionally, as mentioned, the images were converted to RGB images. §.§ Implementation Details We build upon the official TensorFlow implementation of StyleGAN <cit.>. We modify the architecture to include an encoder by sharing 12 resolution layers with the discriminator, which are followed by 4 additional (non-shared) layers for each head. All the training details and parameters are identical to configuration F of StyleGAN. Our models were trained on images of resolution 512×512 but could easily be adapted to other resolutions as well. All networks were trained on a single NVIDIA GeForce GTX 1080 Ti GPU.
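For reference, below is a minimal sketch of the CT preprocessing described above (HU windowing, rescaling, resizing, RGB replication), assuming slices are given as NumPy arrays of raw HU values and that scikit-image is available; function and argument names are illustrative.

```python
import numpy as np
from skimage.transform import resize

def preprocess_ct_slice(hu_slice: np.ndarray,
                        hu_window=(-1024, 325),   # COVID-19 datasets; (-300, 300) for LiTS
                        out_size=(512, 512)) -> np.ndarray:
    """Window HU values, rescale to [0, 1], resize, and replicate to 3 channels."""
    lo, hi = hu_window
    img = np.clip(hu_slice, lo, hi)               # HU windowing
    img = (img - lo) / (hi - lo)                  # normalize to [0, 1]
    img = resize(img, out_size, preserve_range=True)
    return np.stack([img] * 3, axis=-1)           # replicate into an RGB image
```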
For the classification task, we consider the following state-of-the-art frameworks as baselines: the EfficientNet classification network <cit.>, the MoCo v2 <cit.> self-supervised learning approach, and two well-known active learning methods, VAAL <cit.> and the Learning Loss algorithm <cit.>. Transfer learning (TL) was applied for both EfficientNet and MoCo v2 from ImageNet <cit.>, as it was shown to improve classification results on medical images for both supervised and self-supervised methods <cit.>. Furthermore, for the COVID-19 dataset, due to its small size, we trained MoCo v2 on the LiTS dataset prior to fine-tuning on COVID-19 (referred to as MoCo v2 (TL)), as it has been demonstrated that pretraining MoCo on a medical dataset leads to improved representations and results <cit.>. Next, the encoder layers were frozen and a linear classifier was trained as described in <cit.>. For a fair comparison, we also employ our representative labeling approach on MoCo v2, referred to as MoCo v2 (FPS). The labeling is applied in the same manner as in our approach, namely using the t-SNE algorithm in the learned latent space of MoCo, followed by FPS and NN classification. Furthermore, we present the results of our method, SS-StyleGAN, with random sampling (RS) instead of FPS. We compare our model to two active learning approaches: a GAN-based method, VAAL <cit.>, and Learning Loss <cit.>. Another active learning method tested was the Coreset approach <cit.>. This method was not able to achieve adequate results in our experiments, even when training with 50 images, and therefore we do not display its results. The training of the active learning models is initialized with a single labeled image from the training set, while the rest of the training images comprise the unlabeled data set. In each iteration, ten images are selected from the unlabeled data for annotation. These images are then added to the labeled training set and the training is repeated. Due to the extremely long training time required for adequate training of the active learning methods, we trained the active learning models three times and all other models five times, each with a different train and test set (randomly selected each time). Additionally, to assess the generalization ability of our model on data that has not been used for training SS-StyleGAN, we tested our annotation and classification algorithms on another COVID-19 dataset from the China Consortium of Chest CT Image Investigation (CC-CCII) <cit.>. This dataset is publicly available and contains COVID-19 positive and negative CT scans. For this dataset, the performance of our annotation and classification strategy was evaluated on 722 COVID-19 positive and 294 negative CT slices. Note that the encoder in our model is tuned not for accurate reconstruction (as is the case with the StyleGAN inversion models) but rather for improving the generation quality, which is reflected in the improved FID score that is achieved (Table <ref>), and for creating a useful representation that can afterwards be used for embedding and classification. Therefore, we do not measure its performance in terms of reconstruction quality in the experiments, as this does not represent the functionality of our designed solution.
§.§ Ablation Study To evaluate the significance of each aspect of our suggested approach, we conducted several experiments to evaluate the effect of self-supervision on the quality of the generated images, find the optimal encoder framework and annotation method, and examine the latent space used for the embedding. The comparison of the different models and configurations was performed by evaluating the quality of the generated images in each. To that end, the Fréchet Inception Distance (FID) metric <cit.> was computed. FID is a measure of the difference between two distributions in the feature space of an InceptionV3 classifier <cit.> and is often used to assess the quality of the images synthesized by GANs; a sketch of the computation is given at the end of this subsection. Table <ref> reports the FID metric of our model in various configurations and of the original StyleGAN model when trained on the LiTS dataset. Results show that our model outperforms StyleGAN, reaching an FID score of 11.6 compared to 16.3. In other words, by incorporating self-supervision, the generator is pushed towards higher-quality synthesis. Examples of images generated by our model for both datasets are displayed in Fig. <ref>. Furthermore, our model offers significantly better performance when trained with 12 shared layers (4 non-shared) in comparison to the other configurations, and this was therefore the selected configuration. This can be observed in Table <ref>: the FID score decreases as the number of shared layers declines, reaching a minimum at 12 shared layers. We explore the optimal latent space for the classification task by comparing the performance of our model when embedding into each of the latent spaces of StyleGAN, Z and W. Table <ref> presents a comparison of the classification performance when mapping to each space for both datasets. As a reminder, our setup, SS-StyleGAN, includes both embedding into the W space and training with a loss on W (Equation <ref>). The results show significantly degraded performance when embedding into Z in all experiments. This does not come as a surprise, as W is known for its disentangled properties and meaningfulness. Moreover, to address the question of whether the results are derived from the latent space used in the loss throughout the training, we further employ our method with the loss on Z for the LiTS dataset, denoted SS-StyleGAN (Z loss, Z space). As shown in Table <ref>, this configuration does not produce reasonable results either. Furthermore, we examined our method with several variations presented in Table <ref>: classification directly on W, and linear classification instead of NN on W before and after t-SNE. The results in these experiments were inferior to those of our current setup. This confirms that the t-SNE dimensionality reduction preserves only the essential information, leading to improved classification performance. Moreover, we evaluated our method with PCA instead of t-SNE using the aforementioned variations, and also in a setup where we add a multi-layer perceptron (MLP) classifier. Note that FPS was applied in all cases to select the images to annotate. All experiments led to degraded performance, as can be seen in Table <ref>. Since PCA is a linear technique, it is incapable of capturing non-linear dependencies. Moreover, unlike t-SNE, PCA does not preserve the local structures of the data <cit.>, which is essential considering our task.
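A minimal sketch of the FID computation used throughout this ablation is given below, assuming the InceptionV3 feature vectors of the two image sets have already been extracted; FID is the Fréchet distance between two Gaussians fitted to these features.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to InceptionV3 features.

    feats_*: arrays of shape (num_images, feature_dim).
    """
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))
```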
§.§ Classification Results To evaluate our classification performance, we conducted a series of experiments. In each experiment, 15% of the CTs were selected randomly as the test set and the rest were used for training. From the training set, 10, 20, or 50 slices were selected for labeling. Tables <ref> and <ref> display the mean and standard deviation of the results of all models for the COVID-19 and LiTS datasets, respectively. The results confirm that our model is capable of generalizing even from an extremely limited set of training images. With only 10 and 20 images we achieve an AUC of (0.82 ± 0.03, 0.87 ± 0.04) and (0.72 ± 0.02, 0.78 ± 0.06) for the COVID-19 and LiTS datasets, respectively. Our model outperforms all other models for both datasets and was capable of attaining meaningful results even when most others completely failed. For example, for the COVID-19 dataset with 20 training images, EfficientNet fails with an AUC of 0.53; we outperform the next best model, MoCo v2 (FPS), with a 4% gain in AUC, and the traditional MoCo v2 (with and without TL from LiTS) with a gain of more than 24%. Similarly, for both datasets with 50 training images, we outperform all models in AUC by over 6%. Fig. <ref> demonstrates that our model achieved significantly higher AUC scores than any other model on both datasets. Fig. <ref> displays the images selected by our annotation algorithm for the experiment with 10 labeled images. The images show that our approach selects samples at different lung or abdominal heights. Moreover, for the COVID-19 dataset, an equal number of positive and negative slices was chosen. The large variability in the selected examples helps explain the performance achieved by our algorithm. As previously mentioned, transfer learning was used for most competing models. We can therefore conclude that transfer learning alone is not enough in the scenarios presented. In contrast to all other models, ours is capable of handling small amounts of training data, whereas most other models require more images to ensure adequate training. This task remains very challenging, even for MoCo v2, which is also a self-supervised method, and for all active learning models. The results indicate that the FPS algorithm improves the performance of our model, but even without it, our model remains superior to most others. Therefore, it can be concluded that our model learns meaningful feature representations of the data and that even a small number of images is sufficient for its generalization. Together, these properties set our model apart from other approaches. §.§ The Learned Latent Space To better understand the reason behind our method's success, we visualize the t-SNE features of our learned latent space and of MoCo v2 in Fig. <ref>. The plots indicate the presence of distinct classes for our method in both datasets. Similarly, the classes in the MoCo v2 latent space for the COVID-19 dataset can also be distinguished, which contributes to the improved performance obtained by MoCo v2 (FPS) over the conventional method (Table <ref>). However, for the LiTS dataset, MoCo v2 was entirely unable to differentiate between the classes. This confirms that the latent space imposed by our method is capable of capturing the underlying representation of the data and explains how the classification can be performed with such a limited number of images. To further exhibit the good properties of the learned space, we demonstrate the disentanglement of the class attribute in the latent space by manipulating this attribute.
The semantically meaningful latent space of StyleGAN allows us to determine which direction vectors represent individual factors of variation, specifically class direction vectors. By finding the boundary that separates the classes in the latent space and moving towards either side of it, we can control the extent to which the class attribute appears in the generated image. Towards this end, we present an example with the COVID-19 dataset. Fig. <ref> displays the manipulation of images by adding and removing lung opacities associated with this disease. §.§ Generalization to New Data Table <ref> demonstrates our method's performance and generalization ability on a test COVID-19 dataset, which was not seen before by SS-StyleGAN. The test images were embedded into the latent space using our COVID-19-pretrained SS-StyleGAN, after which t-SNE, FPS, and NN classification were performed. Fig. <ref> presents a visualization of the t-SNE projected space. The classification results, as well as the distinct separation in the latent space, show that our self-supervised StyleGAN-based pretraining is able to generalize well to new data. § CONCLUSION We present SS-StyleGAN, a self-supervised StyleGAN dedicated to image annotation and classification. By leveraging the disentanglement of the StyleGAN latent space, we design a smart algorithm for selecting representative samples to be labeled for the classification task, which enables classification with extremely limited training data. Our proposed framework incorporates an encoder within the StyleGAN architecture to learn a latent space encouraged to be well suited for inversion while retaining its disentangled properties. We demonstrate that our method outperforms state-of-the-art self-supervised learning, supervised image classification, and active learning methods on two medical image datasets. Moreover, we show that using only 10 images for training is sufficient to achieve adequate classification results. Future work includes examining iterative sampling of images to be labeled by adopting an active learning approach. Another interesting direction may include manipulation of the latent space to generate class-specific images to further improve the classification results. Furthermore, though we have presented only binary classification tasks, our work can be extended to multi-class classification, and SS-StyleGAN can be adapted for other downstream tasks such as detection. We defer these extensions to future research. In summary, our proposed framework is generic and can be beneficial to many additional medical tasks and applications by improving classification in scenarios of limited annotated datasets.
These two authors contributed equally. [Corresponding author: ][email protected]
Department of Physics, ETH Zürich, 8093 Zürich, Switzerland
Quantum Center, ETH Zürich, 8093 Zürich, Switzerland

Mechanical degrees of freedom are natural candidates for continuous-variable quantum information processing and bosonic quantum simulations. These applications, however, require the engineering of squeezing and nonlinearities in the quantum regime. Here we demonstrate ground state squeezing of a gigahertz-frequency mechanical resonator coupled to a superconducting qubit. This is achieved by parametrically driving the qubit, which results in an effective two-phonon drive. In addition, we show that the resonator mode inherits a nonlinearity from the off-resonant coupling with the qubit, which can be tuned by controlling the detuning. We thus realize a mechanical squeezed Kerr oscillator, where we demonstrate the preparation of non-Gaussian quantum states of motion with Wigner function negativities and high quantum Fisher information. This shows that our results also have applications in quantum metrology and sensing.

Quantum squeezing in a nonlinear mechanical oscillator
Matteo Fadel
January 14, 2024

From the oscillation of a trapped particle to the vibration of a solid-state structure, mechanical modes are ubiquitous degrees of freedom that can exhibit sought-after properties such as high quality factors and large coupling rates to spins and electromagnetic fields. When operated in the quantum regime, mechanical modes are powerful building blocks for quantum technologies, with applications in information processing <cit.>, bosonic simulations <cit.>, memories <cit.>, and microwave-to-optical frequency conversion <cit.>. Moreover, their nonzero mass makes them particularly suited for sensing forces <cit.>, as well as for fundamental physics investigations, ranging from tests of the superposition principle <cit.> to the detection of dark matter <cit.> and quantum gravity effects <cit.>. To fully unlock these applications, however, it is necessary to have available a sophisticated toolbox for the preparation and manipulation of quantum states of motion, which is a nontrivial task. In this context, mechanical resonators have recently attracted a lot of attention as new elements for hybrid quantum systems <cit.>. In particular, gigahertz-frequency resonators can be interfaced to superconducting qubits and thus controlled with the toolbox of circuit quantum acoustodynamics (cQAD). For example, resonant interaction with a qubit was used to demonstrate the preparation of mechanical Fock states <cit.> and Schrödinger cat states <cit.>. Crucially, compared to their electromagnetic counterparts, mechanical resonators have small physical footprints and a high density of accessible long-lived modes, making them ideal candidates for hardware-efficient quantum processors <cit.> and quantum random access memories <cit.>. The realization of continuous variable (CV) quantum computing and bosonic simulations relies on the availability of a universal gate set composed of phase shift, displacement, beam-splitter, single-mode squeezing, and Kerr nonlinearity <cit.>. The first two are relatively simple to realize through free evolution and coherent driving. Beam-splitter operations have recently been demonstrated in cQAD between surface <cit.> and bulk <cit.> acoustic waves.
For mechanical systems, quantum noise squeezing was pioneered in trapped ions <cit.>, and later demonstrated in drum oscillators using the tools of electromechanics <cit.>. In cQAD, a recent experiment demonstrated two-mode squeezing of gigahertz-frequency surface acoustic waves through modulation of one of the Bragg reflectors <cit.>. Nonlinear evolutions in the quantum regime are difficult to realize with standard opto- or electromechanical coupling, since this coupling is linear for small displacements. One possibility is to off-resonantly couple a mechanical oscillator to a two-level system, which gives rise to an effective nonlinearity for the phonon through a hybridization of the modes. Recently, this was demonstrated in an experiment where the vibrational modes of a carbon nanotube were coupled to a quantum dot <cit.>. Despite all this progress, however, the demonstration of a full gate set for universal CV quantum information processing in a single cQAD device is still lacking. In this work, we present ground state squeezing of a gigahertz-frequency phonon mode of a high-overtone bulk acoustic wave resonator (HBAR) with tunable nonlinearity. The phonon mode is coupled to a superconducting qubit, which we use as a mixing element for implementing the effective squeezing drive: by applying two microwave tones to the qubit, we activate a parametric process that creates pairs of phonons in the resonator. Moreover, this coupling gives rise to an effective Kerr nonlinearity for the phonon mode, which we tune by changing the phonon-qubit detuning. To characterize our system, we study the dependence of the squeezing rate and of the Kerr nonlinearity on different system parameters. Having demonstrated control over both these quantities, we combine them to realize a mechanical version of a squeezed Kerr oscillator, a paradigmatic model in quantum optics. By using the qubit to perform direct Wigner function measurements, we show that operating this system in different regimes results in the preparation of non-Gaussian quantum states of motion with Wigner negativities and high quantum Fisher information. This shows that our results also have applications in quantum metrology and sensing. The device used in this work is a cQAD system where a transmon qubit is flip-chip bonded to a HBAR, an improved version of devices used in previous works <cit.>. The qubit has a frequency ω_q = 2π·5.042 GHz, which can be tuned via a Stark shift drive <cit.>. At this frequency, the qubit has an energy relaxation time T_1 = 17(0.4) μs, Ramsey decoherence time T_2^∗ = 24(0.7) μs, and anharmonicity α = 2π·185 MHz. The HBAR is coupled to the qubit through a piezoelectric transducer made of aluminum nitride that mediates a Jaynes-Cummings (JC) interaction with a coupling strength g = 2π·292 kHz. The phonon mode we consider in this work has a frequency ω_a = 2π·5.023 GHz, an energy relaxation time T_1 = 132(4) μs, and a Ramsey decoherence time T_2^∗ = 210(9) μs. Our system can be described by the Hamiltonian H_cQAD/ħ = ω_q q^†q - (α/2) q^†2 q^2 + ω_a a^†a + g(q^†a + a^†q) + H_qd/ħ, where q and a are the bosonic annihilation operators for the qubit and the phonon mode, respectively. The term H_qd/ħ = (Ω_1 e^-iω_1 t + Ω_2 e^-iω_2 t) q^† + h.c. describes the two off-resonant microwave drives at frequencies ω_1,2 and amplitudes Ω_1,2 applied to the qubit, see Fig. <ref>a. We define the detunings Δ_1,2 = ω_1,2 - ω_q and the dimensionless drive strengths ξ_1,2 = Ω_1,2/Δ_1,2. In addition, we use a third, far off-resonant drive to control the qubit frequency via the AC Stark shift.
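For concreteness, the driven system Hamiltonian can be written down numerically, e.g. with QuTiP; the sketch below uses the device parameters quoted above in a frame rotating at the qubit frequency, with illustrative Hilbert-space truncations and drive detunings (Δ_1 and Δ_2 here are example values chosen only to satisfy Δ_1 + Δ_2 ≈ 2Δ_a, not the experimental settings).

```python
import numpy as np
import qutip as qt

# Device parameters from the main text (angular frequencies, in rad/us)
g, alpha = 2*np.pi*0.292, 2*np.pi*185.0
delta_a = 2*np.pi*1.5                    # qubit-phonon detuning Delta_a

Nq, Na = 3, 15                           # Hilbert-space truncations (illustrative)
q = qt.tensor(qt.destroy(Nq), qt.qeye(Na))
a = qt.tensor(qt.qeye(Nq), qt.destroy(Na))

# Static part in the frame rotating at the qubit frequency:
# transmon anharmonicity + detuned phonon + JC coupling
H0 = -0.5*alpha*q.dag()**2*q**2 + delta_a*a.dag()*a + g*(q.dag()*a + a.dag()*q)

# Two-tone drive on the qubit; Delta1, Delta2 are drive detunings from the qubit
Omega1 = Omega2 = 2*np.pi*0.4            # illustrative amplitudes
Delta1, Delta2 = -2*np.pi*20.0, 2*np.pi*23.0   # so that Delta1+Delta2 ~ 2*delta_a
H = [H0,
     [q.dag(), lambda t, args: Omega1*np.exp(-1j*Delta1*t) + Omega2*np.exp(-1j*Delta2*t)],
     [q,       lambda t, args: Omega1*np.exp(+1j*Delta1*t) + Omega2*np.exp(+1j*Delta2*t)]]
# H can be passed to qt.mesolve together with collapse operators for T1 and T2.
```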
When the resonance condition ω_1 + ω_2 = 2ω_a is fulfilled, the qubit nonlinearity mediates a four-wave mixing process that results in a two-phonon drive (a^†2 + a^2). The emergence of this squeezing term can be unveiled through a series of unitary transformations (see <cit.> for details). In combination with the nonlinearity the phonon mode inherits from the qubit, this results in an effective squeezed Kerr Hamiltonian for the phonon mode H/ħ = -Δ a^†a - ϵ(a^†2 + a^2) - K a^†2 a^2. Here, Δ = (ω_1 + ω_2 - 2ω_a')/2, where ω_a' ≈ ω_a + g^2/Δ_a is the frequency of the phonon mode including a normal mode shift due to the presence of the qubit. Δ_a = ω_a - ω_q^ss is the detuning between the phonon mode and the AC Stark shifted qubit. The squeezing rate ϵ is given by <cit.> ϵ = (2g^2/Δ_a) ξ_1ξ_2 α/(Δ̃ + α), where Δ̃ = Δ_1 + Δ_2. Finally, K is the Kerr nonlinearity, which is K ≈ g^4/Δ_a^3 for α ≫ Δ_a ≫ g <cit.>. The Hamiltonian in Eq. (<ref>) is a paradigmatic model in quantum optics, exhibiting a plethora of interesting phenomena such as chaotic dynamics <cit.>, quantum phase transitions <cit.>, tunneling <cit.>, and parametric amplification <cit.>. Moreover, this model admits macroscopic superpositions as quantum ground states, which can be exploited for error-protected qubit encoding <cit.>. The latter application made squeezed Kerr oscillators particularly attractive for quantum information processing, which motivated their recent experimental implementation for electromagnetic modes in circuit QED platforms <cit.>. Here we present the implementation for a mechanical mode, and use it to prepare squeezed and non-Gaussian quantum states of motion of a massive system. First, let us investigate the squeezing dynamics for a large Δ_a = 2π·1.5 MHz, such that K ∝ Δ_a^-3 is significantly smaller than ϵ ∝ Δ_a^-1. When the parametric drives are applied, the qubit frequency is modified by the AC Stark shift, which then results in a change to the normal mode shift of the phonon mode, see Fig. <ref>a. To ensure that the resonance condition for squeezing is satisfied, we perform the following calibration experiment, see Fig. <ref>b: We apply the parametric drives for a time t_S = 20 μs, including 0.5 μs Gaussian edges, with drive frequency ω_2 = 2ω_a - ω_1 + δ. We then reset the qubit to its ground state by swapping the population acquired during the off-resonant driving to an ancillary phonon mode. Finally, we bring the qubit into resonance with the phonon mode we want to squeeze for a time π/(2√(2)g), thereby swapping part of the phonon population to the qubit, after which we measure the qubit state using standard dispersive readout. Repeating this experiment for different δ results in Fig. <ref>c, showing a peak around δ ≈ 2π·140 kHz that provides an indication of when the two-phonon drive becomes resonant. To fully characterize the state resulting from the two-phonon drive and verify the coherence of this process, we set a desired δ and t_S, and perform a Wigner function measurement of the phonon mode <cit.>. The results for δ = 2π·80 kHz and t_S = 0, 6, 12 μs are shown in Fig. <ref>d. For t_S = 0 μs we obtain a measurement of the ground state, while for larger times we observe a reduction of the quantum noise along one quadrature, i.e., squeezing, as well as an increase along the perpendicular quadrature. Note that for the longest evolution time we also see a distortion of the state, which is due to the residual phonon mode nonlinearity. As we will see later in more detail, this distortion limits the squeezing, and it depends on δ.
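The effective squeezed Kerr dynamics just described can be checked with a few lines of QuTiP; the following sketch evolves the vacuum under Eq. (<ref>) with the rates quoted in this work (truncation and time grid are illustrative) and extracts the squeezed and antisqueezed variances.

```python
import numpy as np
import qutip as qt

N = 40                                    # Fock-space truncation (illustrative)
a = qt.destroy(N)
eps, K = 2*np.pi*7.6e-3, 2*np.pi*1.8e-3   # rad/us, values quoted in the text
delta = 0.0

H = -delta*a.dag()*a - eps*(a.dag()**2 + a**2) - K*a.dag()**2*a**2
times = np.linspace(0, 12, 121)           # microseconds
res = qt.mesolve(H, qt.fock(N, 0), times,
                 c_ops=[np.sqrt(1/132.0)*a],   # phonon T1 = 132 us
                 e_ops=[a.dag()*a, a*a, a])

n, a2, am = res.expect
# Minimum/maximum variances over all quadrature angles (vacuum convention V_GS = 1/2)
Vmin = 0.5*(1 + 2*(n - np.abs(am)**2)) - np.abs(a2 - am**2)
Vmax = 0.5*(1 + 2*(n - np.abs(am)**2)) + np.abs(a2 - am**2)
```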
Choosing δ = 2π·80 kHz allowed us to observe the strongest squeezing, as a result of a partial compensation of the nonlinearity K by the detuning Δ ≈ g^2/Δ_a - δ/2 in Eq. (<ref>). Given phase space quadratures X and P, we define the variance along direction θ as V(θ) = Var[X cosθ + P sinθ]. For an ideal ground state V_GS = V(θ) = 1/2; therefore, observing a smaller value implies quantum squeezing. We thus define V_min = min_θ V(θ) and V_max = max_θ V(θ) as the squeezed and antisqueezed quadratures. Fitting the Wigner function at t_S = 6 μs with a two-dimensional Gaussian, we obtain V_min = 0.252(6), which corresponds to a noise reduction of 3.0(1) dB below V_GS. From this, and V_max = 1.45(4), we estimate a thermal population of √(V_min V_max) - 1/2 = 0.10(1), corresponding to a state purity of 83(1)% <cit.>. For the chosen experimental parameters, numerical simulations indicate that t_S ≈ 6 μs is the optimal time for the strongest squeezing, this being mostly limited by residual nonlinearity (see later Fig. <ref>a). In fact, longer evolution times result in non-Gaussian states, such as the one shown for t_S = 12 μs, for which the above analysis is not reliable. Alternatively, maximum-likelihood estimation allows us to reconstruct, from the measurements shown in Fig. <ref>d, the corresponding density matrices <cit.>, from which we calculate the associated covariance matrices. For t_S = 6 μs we obtain V_min = 0.236(1), while for t_S = 12 μs we obtain V_min = 0.268(3), which shows less squeezing due to the nonlinear evolution. We then plot the diagonal elements of the density matrices in Fig. <ref>e, corresponding to the Fock state populations of the phonon states. We observe high populations of the even states, characteristic of a two-phonon drive. Black bars indicate a fit of the reconstructed populations to the closest ideal squeezed state. To characterize the usefulness of the prepared squeezed states for applications such as quantum metrology, we measure their lifetime. For this, we prepare a squeezed state and wait for a variable time t_w before performing the Wigner function measurement, see Fig. <ref>b. The (anti)squeezing for different t_w is shown in Fig. <ref>f. As expected from the free evolution of a squeezed state in the presence of relaxation, we observe that its variances gradually return to those of the ground state <cit.>. Fitting the squeezed variance measurements with (1 - e^-γ_d t(1 - 2V_min))/2, where V_min is the variance at t_w = 0, gives a decay time γ_d^-1 = 78(11) μs. While the decay time of V_max of 125(12) μs is compatible with the phonon T_1, we attribute the lower γ_d^-1 to the fact that the squeezed quadrature is more sensitive to dephasing than the antisqueezed quadrature. We now investigate the dependence of the squeezing rate ϵ on the experimental parameters. We extract ϵ by measuring the evolution of V_min as a function of the squeezing time t_S. Concretely, ϵ is inferred from a fit of the squeezing measurements with the function V_min(t) = (γ + 4ϵ e^-t(γ + 4ϵ))/(2(γ + 4ϵ)), which describes the squeezing dynamics in the presence of decay at rate γ <cit.>. Note that ϵ can be estimated from the evolution at short times t_S ≪ 1/K, where the state is still Gaussian. For this reason we obtain V_min from a Gaussian fit, after having checked that for short times it is consistent with the one obtained from state reconstruction. An example is shown in Fig. <ref>a, for δ = 2π·80 kHz, Δ_a = 2π·1.5 MHz, ξ_1 = 0.28, and ξ_2 = 0.26. Fitting of V_min(t) gives us an effective decay time γ^-1 = 12.8(11) μs and squeezing rate ϵ = 2π·7.6(3) kHz.
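A minimal sketch of this fit with SciPy, assuming the measured minimum variances and the corresponding squeezing times are available as hypothetical arrays vmin_data and t_data:

```python
import numpy as np
from scipy.optimize import curve_fit

def vmin_model(t, gamma, eps):
    """Squeezed variance under a two-phonon drive with decay rate gamma
    (vacuum variance convention V_GS = 1/2)."""
    return (gamma + 4*eps*np.exp(-t*(gamma + 4*eps))) / (2*(gamma + 4*eps))

# t_data (us) and vmin_data are assumed to come from the Gaussian fits
# of the measured Wigner functions at each squeezing time t_S.
popt, pcov = curve_fit(vmin_model, t_data, vmin_data, p0=[0.1, 0.05])
gamma_fit, eps_fit = popt                # rad/us; divide by 2*pi for frequency units
```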
Note that this effective decay time is shorter than the bare phonon T_1, as these measurements are subject to Purcell decay via the qubit and to additional dephasing resulting from finite qubit population and parametric driving. Repeating the measurement shown in Fig. <ref>a for different drive powers ξ_1ξ_2 and qubit-phonon detunings Δ_a, we obtain the rates shown in Fig. <ref>b, c. The precise value of these controllable parameters is extracted from independent measurements in the following way. We set the desired drive strengths ξ_1,2, which we previously calibrated via their Stark shift on the qubit <cit.>. The detuning Δ_a is then set by changing the qubit frequency via the independent Stark shift drive. In Fig. <ref>b, c we compare the measured squeezing rate ϵ to the one predicted by Eq. (<ref>). We see a disagreement that we attribute to effects of higher order in ξ_1ξ_2, which are not included in Eq. (<ref>). For this reason, we add a comparison with the rates expected from Floquet theory <cit.> and from a time-domain simulation of our system Hamiltonian, Eq. (<ref>), which show good agreement with the measurements. After having characterized the squeezing rate, we proceed with investigating the nonlinearity K of the phonon mode. In Fig. <ref>a we show a spectroscopic measurement taken by applying a 400 μs long probe tone detuned by Δ_p = ω_p - ω_a from the phonon mode and then measuring the qubit state population. When the probe tone is resonant with the phonon mode, it drives the phonon mode into a steady state, from which population leaks to the qubit, which results in our spectroscopic readout signal. This lets us infer the population of the phonon mode <cit.>, which we denote on the second y-axis of Fig. <ref>a. We observe an asymmetric resonance peak characteristic of a nonlinear Duffing oscillator. Fitting these data with the equation of motion of a classical driven Duffing oscillator <cit.>, we extract the nonlinearity of the mode <cit.>. Repeating this analysis for different probe amplitudes and detunings Δ_a gives us the data shown in Fig. <ref>b. These are in good agreement with the predictions obtained from numerical diagonalization of Eq. (<ref>) without drives, as well as with the analytical expression K ≈ g^4/Δ_a^3 obtained from fourth-order perturbation theory <cit.>. This shows that we can tune the phonon nonlinearity by approximately one order of magnitude. Having demonstrated control over the parameters of Eq. (<ref>), we now show that different regimes of this Hamiltonian can be used to prepare non-Gaussian states. The dynamics of the system is fully determined by the two dimensionless parameters Δ/K and ϵ/K, where the former is controlled by the drive frequencies and the latter by the drive amplitudes. This two-dimensional parameter space is divided into three different regions by two phase transitions at Δ = ±2ϵ <cit.>. In a semiclassical description, these regions are associated with an effective single-, double-, and triple-well potential in the frame rotating at half the driving frequency <cit.>, see Fig. <ref>a. To prepare interesting mechanical states, we start with the phonon mode in the ground state, and then let it evolve according to the Hamiltonian of Eq. (<ref>). We choose ξ_1ξ_2 = 0.07 and Δ_a = 2π·0.53 MHz. This fixes ϵ/K, but leaves us control over Δ/K by changing the parametric drive correction δ. The Wigner functions measured at different values of t_S (see Fig. <ref>b) are shown in Fig.
<ref>b. We note parameter regimes that result in states significantly deviating from Gaussian states. Moreover, in some cases negative Wigner function regions appear, which are a direct indication of non-classicality <cit.>. The states we observe can be qualitatively understood from an evolution of the ground state in the semiclassical potential associated with the chosen parameter regime. From exact diagonalization we obtain K = 2π·14(1) kHz, while from a Gaussian fit of the states at t_S = 3 μs we estimate ϵ = 2π·11(1) kHz. We show in Fig. <ref>c numerical simulations of Eq. (<ref>) with these parameters, which show good agreement with our measurements. For these simulations, we use a lower phonon lifetime of 40 μs, estimated taking into account Purcell decay via the qubit, and include a global rotation to take into account that the measurements in Fig. <ref>b are performed in the rotating frame of the qubit. To have a pragmatic characterization of the states we can prepare, we quantify their usefulness for a metrological protocol by means of the quantum Fisher information (QFI). For a perturbation generated by the operator A, the QFI associated with the state ρ is defined as F_Q[ρ,A] = 2 ∑_k,l (λ_k - λ_l)^2/(λ_k + λ_l) |⟨k|A|l⟩|^2, where λ_k and |k⟩ are the eigenvalues and eigenvectors of ρ, respectively, and the summation goes over all k,l such that λ_k + λ_l > 0. Taking the parameter estimation task to be the measurement of a displacement amplitude, we have A(θ) = X sinθ + P cosθ, with θ specifying the displacement direction. From this we define F_Q^max = max_θ F_Q[ρ,A(θ)], which gives the maximum sensitivity attainable by the state. Note that a coherent state has F_Q[|α⟩,A(θ)] = 2, meaning that if we consider coherent states as classical resources, then any F_Q^max > 2 implies non-classicality. We estimate F_Q^max numerically by first reconstructing the density matrix of the state, and then maximizing F_Q[ρ,A(θ)] over θ; a sketch of this evaluation is given below. The results are shown in Fig. <ref>d, together with the values we obtain for a Fock |1⟩ state and for the Schrödinger cat states of Ref. <cit.>. The Fisher information is computed for the time-evolution of states at four different detunings. We observe that the states corresponding to point iii in Fig. <ref>a exhibit significantly larger values after 6 μs than previously measured states, while the states in the squeezed regime do not surpass them.
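A minimal sketch of this QFI evaluation from a reconstructed density matrix, assuming rho is given as a NumPy array in the Fock basis:

```python
import numpy as np

def qfi(rho: np.ndarray, A: np.ndarray, tol: float = 1e-10) -> float:
    """Quantum Fisher information F_Q[rho, A] from the spectral decomposition."""
    lam, vecs = np.linalg.eigh(rho)
    Av = vecs.conj().T @ A @ vecs            # matrix elements <k|A|l>
    F = 0.0
    for k in range(len(lam)):
        for l in range(len(lam)):
            if lam[k] + lam[l] > tol:        # skip vanishing eigenvalue pairs
                F += 2*(lam[k] - lam[l])**2/(lam[k] + lam[l])*abs(Av[k, l])**2
    return F

def qfi_max(rho: np.ndarray, n_theta: int = 100) -> float:
    """Maximize over the displacement direction A(theta) = X sin(theta) + P cos(theta)."""
    N = rho.shape[0]
    a = np.diag(np.sqrt(np.arange(1, N)), 1)     # annihilation operator, Fock basis
    X = (a + a.conj().T)/np.sqrt(2)
    P = (a - a.conj().T)/(1j*np.sqrt(2))
    thetas = np.linspace(0, np.pi, n_theta)
    return max(qfi(rho, X*np.sin(t) + P*np.cos(t)) for t in thetas)
```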
In conclusion, we have demonstrated ground state squeezing of a gigahertz-frequency phonon mode of a HBAR device with tunable nonlinearity. This allows us to prepare non-Gaussian states of motion characterized by a high quantum Fisher information, which can find immediate application in quantum sensing with mechanical degrees of freedom. Our results, in combination with the beam-splitter operation demonstrated in the same platform in Ref. <cit.>, complete the toolbox for universal CV quantum information processing and bosonic quantum simulation in HBARs. This opens up the possibility to use the large number of modes available in these devices for hardware-efficient quantum chemistry simulations <cit.>, as well as for nonlinear boson sampling <cit.>.

Acknowledgements The authors thank Alex Eichler, Francesco Adinolfi and Hugo Doeleman for useful discussions and feedback on the manuscript, and Max Drimmer for contributing to the device fabrication. Fabrication of the device was performed at the FIRST cleanroom of ETH Zürich and the BRNC cleanroom of IBM Zürich. We acknowledge support from the Swiss National Science Foundation under grant 200021_204073. MF was supported by the Swiss National Science Foundation Ambizione Grant No. 208886, and The Branco Weiss Fellowship - Society in Science, administered by the ETH Zürich.

Author contributions UvL and MF conceived the experiments. UvL, YY, MB, and AO fabricated the device. SM, UvL, YY, and MF wrote experiment control sequences. SM, UvL, and MF performed measurements and analysed the data. SM, UvL, OJ, and MF performed numerical simulations of the experiments. UvL and MF derived theoretical models. YC and MF supervised the work. SM, UvL, and MF wrote the manuscript with feedback from all authors.

Data and code availability Raw data and analysis scripts will be made available on Zenodo. Additional material is available from the corresponding author on reasonable request.

Supplementary Material for Quantum squeezing in a nonlinear mechanical oscillator
Stefano Marti^∗, Uwe von Lüpke^∗, Om Joshi, Yu Yang, Marius Bild, Andraz Omahen, Yiwen Chu, and Matteo Fadel
Department of Physics, ETH Zürich, 8093 Zurich, Switzerland
Quantum Center, ETH Zürich, 8093 Zurich, Switzerland

§ DEVICE AND EXPERIMENT PARAMETERS

§ DERIVING THE HAMILTONIAN In this section we derive the effective squeezing rate ϵ. We start from the Hamiltonian of a qubit with frequency ω_q and anharmonicity -α coupled to a single phonon mode with frequency ω_a, and driven with two microwave tones:

H_sys/ħ = ω_q q^†q - (α/2) q^†2 q^2 + ω_a a^†a + g(q^†a + a^†q) + (Ω_1 e^-iω_1 t + Ω_2 e^-iω_2 t - iϕ) q^† + h.c.,

where g is the qubit-phonon coupling strength, Ω_1,2 ∈ ℝ are the drive amplitudes, and ϕ is the initial phase difference between the drives. In the following, we take ħ = 1 for convenience. We now enter a rotating frame at the qubit and phonon frequencies,

U_rf = exp[i(ω_q q^†q + ω_a a^†a)t],

in which the system Hamiltonian reads

H_rf = -(α/2) q^†2 q^2 + g(a^†q e^iΔ_a^(0) t + h.c.) + (Ω_1 e^-iΔ_1 t + Ω_2 e^-iΔ_2 t - iϕ) q^† + h.c.,

where Δ_a^(0) = ω_a - ω_q. Next, we enter an interaction picture of the microwave drives with the transformation

U_d = exp[(ξ_1 e^iΔ_1 t + ξ_2 e^iΔ_2 t + iϕ) q - h.c.],

where ξ_1,2 = Ω_1,2/Δ_1,2 are the relative drive amplitudes. U_d transforms the qubit operator as q' = U_d q U_d^† = q + ξ_1 e^-iΔ_1 t + ξ_2 e^-iΔ_2 t - iϕ. Applying U_d to H_rf and using a rotating wave approximation to drop fast-oscillating terms, we find

H_d = U_d H_rf U_d^† + i U̇_d U_d^† = -(α/2) q^†2 q^2 (H_Kerr) + g(a^†q e^iΔ_a t + h.c.) (qubit-phonon coupling) - αξ_1ξ_2 (q^†2 e^-iΔ̃t - iϕ + h.c.) (two-photon qubit drive).

Here, we have defined Δ̃ = Δ_1 + Δ_2 = ω_1 + ω_2 - 2ω_q. In H_d we have absorbed an AC Stark shift of the qubit frequency caused by the two microwave drives into the detuning Δ_a = Δ_a^(0) - Δ_q^Stark shift. Of the terms that emerge from the drive transformation, we kept the two-photon qubit drive through the rotating wave approximation. This is justified when we set up the drive frequencies symmetrically around the phonon mode such that Δ̃ ∼ 2Δ_a. Since the two-photon operator q^†2 + q^2 acts on more than our usual computational subspace of the first two qubit energy levels, we now enter a qubit-state dependent rotating frame, in which we eliminate H_Kerr. The transformation

U_K = exp[-i(α/2) q^†2 q^2 t]

transforms the qubit operator as U_K q U_K^† = e^iα q^†q t q, U_K q^† U_K^† = q^† e^-iα q^†q t, and U_K q^†2 U_K^† = q^†2 e^-iα t(2q^†q + 1). With this transformation H_d becomes

H_K = U_K H_d U_K^† + i U̇_K U_K^† = g(a^† e^iα q^†q t q e^iΔ_a t + h.c.) - αξ_1ξ_2 (q^†2 e^-iα t(2q^†q + 1) - iΔ̃t - iϕ + h.c.).

The resonance conditions in the phase exponents of Eq. (<ref>) now take the qubit anharmonicity into account.
Writing M = Δ̃ + α(2q^†q + 1), we can now eliminate the two-photon qubit drive with a modified displacement transformation

U_sq = e^S_sq ≡ exp[αξ_1ξ_2 q^†2 M^-1 e^-iMt - iϕ - h.c.].

U_sq transforms the qubit operator like

U_sq q U_sq^† ≈ q + [S_sq, q] = q + αξ_1ξ_2 e^-i(Δ̃ + α)t - iϕ [q^†2 M^-1 e^-2iα q^†q t, q] ≈ q + αξ_1ξ_2 e^-i(Δ̃ + α)t - iϕ [q^†2, q] M^-1 e^-2iα q^†q t = q - 2αξ_1ξ_2 e^-i(Δ̃ + α)t - iϕ q^† M^-1 e^-2iα q^†q t,

where in Eq. (<ref>) we neglected the commutator [M^-1 e^-2iα q^†q t, q], because, as we will see later, it only produces far off-resonant terms related to higher qubit levels and thus does not significantly affect the dynamics of our experiment. The transformed Hamiltonian then reads

H_c = U_sq H_K U_sq^† + i U̇_sq U_sq^† = g(a^† e^iα q^†q t q e^iΔ_a t + h.c.) - 2αξ_1ξ_2 g (a^† e^iα q^†q t q^† M^-1 e^-2iα q^†q t e^-i(Δ̃ + α)t - iϕ e^iΔ_a t + h.c.) = g(a^† e^iα q^†q t q e^iΔ_a t + h.c.) - 2αξ_1ξ_2 g (a^†q^† e^-iα q^†q t M^-1 e^-i(Δ̃ - Δ_a)t - iϕ + h.c.).

As before, the first term in H_c describes the qubit-phonon coupling. The new, second term describes a two-mode interaction involving the simultaneous creation or annihilation of excitations in qubit and phonon mode, which becomes resonant when Δ̃ ≈ Δ_a. While higher qubit states play a role for the prefactor of the two-photon qubit drive in Eq. (<ref>), they do not participate in the phonon squeezing term we are looking for. Therefore, we now undo the level-dependent rotating frame transformation U_K by applying its inverse:

H_c' = U_K^† H_c U_K + i ∂_t(U_K^†) U_K = g(a^†q e^iΔ_a t + h.c.) (qubit-phonon coupling) - 2αξ_1ξ_2 g (a^†q^† M^-1 e^-i(Δ̃ - Δ_a)t - iϕ + h.c.) - (α/2) q^†2 q^2 (H_Kerr).

Note that the qubit anharmonicity is still described by this Hamiltonian in the form of the reappearing H_Kerr, but we can now treat the qubit-phonon interaction separately from this anharmonicity by transforming and interpreting the first two terms. We now eliminate the qubit-phonon coupling term via the standard time-dependent Schrieffer-Wolff transformation

U_SW = exp[(g/Δ_a)(a^†q e^iΔ_a t - h.c.)],

resulting in

H_sq = U_SW H_c' U_SW^† + i U̇_SW U_SW^† = -2αξ_1ξ_2 g (a^†q^† M^-1 e^-i(Δ̃ - Δ_a)t - iϕ + h.c.) + U_SW H_Kerr U_SW^† + (g^2/Δ_a)(a^†a - q^†q) (normal-mode splitting) - 2(g^2/Δ_a) αξ_1ξ_2 M^-1 [(a^†2 e^-i(Δ̃ - 2Δ_a)t - iϕ + h.c.) (phonon squeezing) - (q^†2 e^-iΔ̃t - iϕ + h.c.)].

In a final step, we assume our qubit is initially in its ground state |g⟩. This results in M = Δ̃ + α, after which we can write the phonon dynamics from Eq. (<ref>) for Δ̃ ≈ 2Δ_a as

H_ph = (g^2/Δ_a) a^†a - 2(g^2/Δ_a) (αξ_1ξ_2/(Δ̃ + α)) (a^†2 e^-i(Δ̃ - 2Δ_a)t - iϕ + h.c.).

Assuming the qubit starts in its ground state also eliminates the commutator we neglected in Eq. (<ref>). Eq. (<ref>) contains the phonon frequency shift due to the normal-mode splitting with the qubit and the phonon squeezing term. In addition, we can include the anharmonicity of the phonon mode, which it inherits from the qubit due to their hybridization and which is included in U_SW H_Kerr U_SW^† in Eq. (<ref>). We derive the value K of this phonon anharmonicity in Section <ref> via time-independent perturbation theory (see Eq. (<ref>)). Furthermore, we enter a frame rotating at the resonance condition of the squeezing interaction, Δ̃ - 2Δ_a, such that H_ph assumes the form of the squeezed Kerr oscillator (reintroducing here ħ)

H_ph,K/ħ = -Δ a^†a - (ϵ a^†2 + ϵ^* a^2) - K a^†2 a^2,

where

Δ = Δ̃/2 - g^2/Δ_a - Δ_a = (ω_1 + ω_2 - 2ω_a')/2,
ϵ = (2g^2/Δ_a) (ξ_1ξ_2 α/(Δ̃ + α)) e^-iϕ,
K = g^4/Δ_a^3.

We find that, as expected, we can tune the squeezing angle by varying ϕ.
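As a quick numerical sanity check of the expression for ϵ, one can plug in the device numbers from the main text; the sketch below assumes Δ̃ ≈ 2Δ_a, which holds on the squeezing resonance up to the small normal-mode shift.

```python
import numpy as np

# Device parameters from the main text (all in units of 2*pi*MHz)
g, alpha, delta_a = 0.292, 185.0, 1.5
xi1, xi2 = 0.28, 0.26                    # drive strengths used for Fig. 2a
delta_tilde = 2*delta_a                  # on resonance, up to the normal-mode shift

eps = (2*g**2/delta_a) * xi1*xi2*alpha/(delta_tilde + alpha)
print(f"epsilon = 2*pi * {eps*1e3:.1f} kHz")   # ~2*pi*8 kHz vs measured 2*pi*7.6(3) kHz
```

The lowest-order formula lands close to the measured value; as discussed in the main text, the residual discrepancy is attributed to higher-order effects in ξ_1ξ_2.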
In addition, we recover the squeezing strength |ϵ| derived via Floquet and perturbation theory in <cit.>.

§ SQUEEZING WITH LOSSES AND DEPHASING

§.§ Preparation of a squeezed state with decoherence

We consider here the evolution under the Hamiltonian

H/ħ = Δ a^†a + ϵ(a^2 + a^†2),

in the presence of losses and dephasing. As we will show, this can be solved analytically in an exact way. The evolution of the expectation value of an operator O can be computed as

d⟨O⟩/dt = (i/ħ)⟨[H,O]⟩ + ⟨𝒟[O]⟩,

where 𝒟[O] = ∑_i (1/2)(L_i^†[O,L_i] + [L_i^†,O]L_i) is the dissipator associated with the jump operators L_i. Considering energy relaxation described by L_1 = √(γ) a, and pure dephasing described by L_2 = √(2γ_ϕ) n, we have

𝒟[O] = (γ/2)(a^†[O,a] + [a^†,O]a) + γ_ϕ(n[O,n] + [n,O]n).

Here, γ = 1/T_1 and γ_ϕ = 1/T_ϕ with T_ϕ = (1/T_2 - 1/(2T_1))^{-1}, where T_1 is the phonon energy relaxation time and T_2 the phonon Ramsey decoherence time. From this we obtain

d⟨a⟩/dt = -iΔ⟨a⟩ - 2iϵ⟨a^†⟩ - (1/2)(γ + 2γ_ϕ)⟨a⟩,
d⟨a^†a⟩/dt = -2iϵ(⟨a^†2⟩ - ⟨a^2⟩) - γ⟨a^†a⟩,
d⟨aa⟩/dt = -2iΔ⟨a^2⟩ - 2iϵ(1 + 2⟨a^†a⟩) - (γ + 4γ_ϕ)⟨a^2⟩,

which can be solved analytically for any desired initial conditions. This allows us to write the covariance matrix for the x and p quadratures by using the fact that

x = (1/√2)(a + a^†), p = (1/(i√2))(a - a^†),
⟨x^2⟩ = (1/2)(1 + 2⟨a^†a⟩ + ⟨a^2⟩ + ⟨a^†2⟩),
⟨p^2⟩ = (1/2)(1 + 2⟨a^†a⟩ - ⟨a^2⟩ - ⟨a^†2⟩),
⟨xp + px⟩ = i(⟨a^†2⟩ - ⟨a^2⟩).

The two eigenvalues of the covariance matrix correspond to the squeezing and antisqueezing variances. These are

V_min ≡ min_θ Var[x cosθ + p sinθ] = (1/2)(1 + 2(⟨a^†a⟩ - |⟨a⟩|^2) - 2|⟨a^2⟩ - ⟨a⟩^2|),
V_max ≡ max_θ Var[x cosθ + p sinθ] = (1/2)(1 + 2(⟨a^†a⟩ - |⟨a⟩|^2) + 2|⟨a^2⟩ - ⟨a⟩^2|).

The set of equations just presented allows us to obtain analytic expressions for the time evolution of the squeezed and antisqueezed variances. In particular, we looked at solutions that start from the vacuum state, ⟨a⟩ = ⟨a^†a⟩ = ⟨aa⟩ = 0 at t=0. As these expressions are particularly lengthy, we will not present them here. However, we will present the special case Δ=0, γ_ϕ=0, which gives

V_min = (1/2)(γ + 4ϵ e^{-t(γ + 4ϵ)})/(γ + 4ϵ),
V_max = (1/2)(γ - 4ϵ e^{-t(γ - 4ϵ)})/(γ - 4ϵ).

§.§ Free evolution of a squeezed state with decoherence

Equations (<ref>) allow us to compute also how a squeezed state evolves in the presence of energy relaxation and dephasing. For this, we set Δ = ϵ = 0, which corresponds to a free evolution, and solve analytically Eqs. (<ref>) taking as initial conditions an ideal squeezed state with minimum variance V_0 = e^{-4r}/2. We obtain that the squeezing and antisqueezing variances evolve as

V_min = (1/2) e^{-2t(γ + 2γ_ϕ)}(e^{t(γ + 4γ_ϕ)}(e^{γt} + cosh(4r) - 1) - e^{γt} sinh(4r)),
V_max = (1/2) e^{-2t(γ + 2γ_ϕ)}(e^{t(γ + 4γ_ϕ)}(e^{γt} + cosh(4r) - 1) + e^{γt} sinh(4r)).

If we consider γ_ϕ = 0, we obtain

V_min = (1/2)(1 + e^{-γt}(2V_0 - 1)),

which is used in the main text to fit the measurements shown in Fig. 1f.

§.§ Thermal occupation of a Gaussian state

Gaussian states (both pure and mixed) are fully determined by the vector of first moments and by their covariance matrix

Γ = [ Var[x] Cov[x,p]; Cov[x,p] Var[p] ].

The purity of a Gaussian state can then be expressed from Γ as <cit.>

Tr[ρ^2] = 1/(2√(Det[Γ])) = 1/(1 + 2n_T),

where n_T is the mean number of thermal (incoherent) excitations. In the basis that diagonalizes Γ, namely the one of the squeezed/antisqueezed quadratures, we thus have

n_T = √(V_min V_max) - 1/2.

As expected, for a state that saturates the Heisenberg uncertainty relation we have V_min V_max = 1/4, and thus n_T = 0.
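As a sanity check on the closed-form expressions above, the short numpy sketch below evaluates V_min(t) and V_max(t) for the Δ=0, γ_ϕ=0 case and the resulting thermal occupation n_T; the rates ϵ and γ are placeholder values chosen only for illustration.

```python
import numpy as np

gamma = 2 * np.pi * 12.4e3  # energy relaxation rate (rad/s), illustrative
eps = 2 * np.pi * 2.5e3     # squeezing rate (rad/s); below threshold 4*eps < gamma,
                            # so a steady state exists
t = np.linspace(0, 100e-6, 500)  # evolution time (s)

# Special case Delta = 0, gamma_phi = 0, starting from the vacuum state
V_min = 0.5 * (gamma + 4 * eps * np.exp(-t * (gamma + 4 * eps))) / (gamma + 4 * eps)
V_max = 0.5 * (gamma - 4 * eps * np.exp(-t * (gamma - 4 * eps))) / (gamma - 4 * eps)

# Squeezing relative to the vacuum variance 1/2, and thermal occupation
squeezing_dB = 10 * np.log10(V_min / 0.5)
n_T = np.sqrt(V_min * V_max) - 0.5

print(f"steady-state squeezing: {squeezing_dB[-1]:.2f} dB")
print(f"steady-state n_T: {n_T[-1]:.3f}")
```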
§ SQUEEZING LIMITS

§.§ Limits from energy relaxation and dephasing

The off-resonant coupling to the transmon results in a modified energy relaxation and dephasing rate for the phonon. The effective energy relaxation rate for the phonon can be written as

κ = 1/T_1^p + (g^2/Δ_a^2) γ,

where γ = 1/T_1^q is the qubit energy relaxation rate. The dephasing rate can be approximated as <cit.>

γ_ϕ = γ_ϕ^0 + Γ_ϕ(P_e, Δ_a),

where γ_ϕ^0 = 1/T_2^p - 1/(2T_1^p) is the bare phonon dephasing rate, and

Γ_ϕ(P_e, Δ_a) = (γ/2) Re[√((1 + 2iχ/γ)^2 + 8iχP_e/γ) - 1],

with χ = 2g^2/Δ_a the dispersive shift and P_e the qubit excited state population. For g = 2π·0.29 MHz and T_1^q = 17 μs, we plot in Fig. <ref> the value of Γ_ϕ(P_e, Δ_a) (solid lines) compared to the approximation Γ_ϕ(P_e, Δ_a) ≈ P_e γ, valid when χ ≫ γ (dashed lines).

For a squeezing dynamics H/ħ = ϵ(a^†2 + a^2) in the presence of losses √(κ) a and dephasing √(2γ_ϕ) a^†a, the evolution of the squeezed variance as a function of time can be computed analytically (see Eq. (<ref>)). Therefore, for given values of the parameters we can look for the minimum variance as a function of time, and thus for the maximum squeezing achievable. For T_1^p = 100 μs and T_2^p = 150 μs, which imply γ_ϕ^0 = (600 μs)^{-1}, and P_e = 0.1, we obtain Fig. <ref>. Note, however, that as we will see later, other effects will contribute to limiting the observed squeezing.

§.§ Limits from Kerr nonlinearity

Even in the absence of energy relaxation and dephasing, the squeezing of a Kerr oscillator is going to be limited by the nonlinearity. While for short times the state evolves according to a simple squeezing dynamics, for longer times the Kerr nonlinearity results in a twisting of the state into an "S" shape. This implies an optimal evolution time t^∗ at which the squeezing is maximized. Computing an analytical expression for t^∗ for the squeezed Kerr Hamiltonian is challenging, especially if decoherence channels are considered. For this reason we perform a numerical simulation of how the vacuum state evolves under H = ϵ(a^2 + a^†2) - K a^†2 a^2, considering energy relaxation at rate γ, and extract the maximum squeezing achievable for different values of the parameters. The results are shown in Fig. <ref>. As an example, from the data presented in Fig. 2a we found ϵ = 2π·7.6 kHz and γ = 2π·12.4 kHz. Since Δ_a = 2π·1.5 MHz, we expect from exact diagonalization K = 2π·1.8 kHz, which implies ϵ/K = 4.2 and γ/K = 6.8. For these parameters, our simulation predicts a maximum squeezing of -4.0 dB, which is compatible with what we measure in Fig. 2a taking into account the finite measurement time (see the next section).

§.§ Limits from measurement time

The measurement needed to characterize the squeezed state that has been prepared will inevitably take a finite time. In our case, this measurement is a Wigner function measurement, and it thus consists of a 5 μs long displacement pulse followed by a 5.7 μs long parity-echo sequence <cit.>. During this time, the state will decohere due to phonon relaxation and dephasing, to which the qubit also contributes due to its proximity in frequency when operating in the strong dispersive regime (Δ_a ≈ 2π·2 MHz during the parity measurement). This effectively results in an averaging of the measurement over a squeezed state decaying towards the vacuum. Since a precise modeling of this dynamics is complicated, to get an estimate for this effect we consider a simplified situation. We imagine that during the measurement time t the state is only subject to energy relaxation, which results in a change of variance as described by Eq. (<ref>).
For γ = (100 μs)^{-1}, we show in Fig. <ref> the relation between the initial and the measured squeezing, for different values of t. As we did not take into account other effects taking place during the measurement time, such as dephasing, we take this analysis as a lower bound on the amount of squeezing that is lost during time t. From another point of view, this can be seen as a bound on the maximum measurement time acceptable in order to see a desired level of squeezing.

§ PHONON MODE ANHARMONICITY

§.§ Anharmonicity from perturbation theory

Due to its interaction with the qubit, the phonon mode inherits an anharmonicity. To calculate the magnitude of this effect, let us start from the Hamiltonian of the system written in the rotating frame of the qubit,

H = Δ_a a^†a - (α/2) q^†2 q^2 [≡ H_0] + g(a^†q + q^†a) [≡ λV].

The energy of a state |n,l⟩, where n indicates the qubit state and l the oscillator state, can be computed via perturbation theory in the parameter λ. To zeroth order we have

E_nl^(0) = Δ_a l - (α/2) n(n-1).

Due to the form of V, it is easy to see that all odd-order corrections E_nl^(2k+1) are zero. On the other hand, even-order corrections are in general non-zero. For the second order we find

E_nl^(2) = g^2 ( l(n+1)/(nα + Δ_a) - n(l+1)/(α(n-1) + Δ_a) ).

The fourth-order expression is lengthy, but if we restrict it to n=0 (qubit in the ground state), we find

E_0l^(4) = -g^4 l (lα^2 + 2lαΔ_a + (2l-1)Δ_a^2)/(Δ_a^3 (α + Δ_a)^2).

From the above expressions we define E_l ≡ E_0l^(0) + E_0l^(2) + E_0l^(4), from which we compute the phonon anharmonicity as

2K ≡ (E_1 - E_0) - (E_2 - E_1) = (2g^4/Δ_a^3)(1 + Δ_a^2/(α + Δ_a)^2),

which results in an effective Kerr Hamiltonian for the phonon,

H_K = -K a^†2 a^2.

For our experimental parameters, g = 2π·0.29 MHz and α = 2π·185 kHz, we plot in Fig. <ref> the value of K as a function of Δ_a. For comparison, we also plot the value of K obtained by numerical diagonalization of Eq. (<ref>).

§.§ The steady-state of a driven Duffing oscillator

In Figure 3 of the main text we show measurements of the qubit response to a probe tone around the phonon frequency. We use these measurements to extract the anharmonicity of the phonon mode via the following fit: the steady state of the equation of motion of a Kerr oscillator with a classical amplitude a(t) and frequency ω_a, driven by a probe with frequency ω_p and amplitude Ω_p, in the rotating frame of the probe is

a(t)[i(α_m |a(t)|^2 - Δ_p) + κ/2] = Ω_p.

Here, Δ_p = ω_p - ω_a is the probe-phonon mode detuning, α_m = ω_a^{0→1} - ω_a^{1→2} = 2K is the anharmonicity of the phonon mode, and κ is the phonon linewidth. Multiplying Eq. (<ref>) with its complex conjugate results in

α_m^2 n̄^3 - 2Δ_p α_m n̄^2 + (Δ_p^2 + κ^2/4) n̄ = Ω_p^2,

where n̄ = |a(t)|^2 is the average phonon occupation of the oscillator. Note that Ω_p is the effective amplitude of the probe drive affecting the phonon mode, rather than the amplitude of the microwave drive we use to generate that probe tone. Furthermore, by assuming our phonon mode can be described by the classical equation of motion, Eq. (<ref>), we ignore the dynamics that result from its quantization. We also neglect contributions of the qubit to the equation of motion of the phonon and assume we can treat the phonon mode by itself. Before we can solve Eq.
(<ref>) for n̄ and fit the solution to the measured data, we need to convert the measured qubit population to an inferred phonon population. To achieve this, we again assume we can treat the phonon mode classically, reducing its effect on the qubit to that of an off-resonant drive on a two-level system. Following Ref. <cit.> (starting from Eq. (5.132) therein and using 1/T_1^q ≈ 1/T_2^q ≪ Δ_a), we can write the steady-state population of an off-resonantly driven qubit as

P_e = 2Ω_a^2/(Δ_a^2 + 4Ω_a^2).

Here, Ω_a is the effective drive amplitude that the qubit experiences from the populated phonon mode. We can write |Ω_a|^2 = g_a^2 n̄ and invert Eq. (<ref>) to arrive at an expression for the average phonon number n̄,

n̄ = (1/g_a^2) P_e Δ_a^2/(2 - 4P_e).

Using Eq. (<ref>), we can now convert the measured qubit population into an inferred phonon population, to which we numerically fit the solution of Eq. (<ref>).

§.§ Fit routine for phonon anharmonicity measurements

To fit the spectroscopy measurements, we first convert the measured qubit populations to average phonon numbers, using Eq. (<ref>). We then compute the solution of Eq. (<ref>) with a set of initial parameters α_m, κ, ω_a, and Ω_p. Next, we compute the absolute difference between this solution and the average phonon populations, which we use as the residual function. To improve the fit quality, we use a weight that slightly prioritizes data points in the vicinity of the expected phonon-qubit detuning ω_a'. In this first fit, we keep ω_a fixed to its initial value, resulting in approximate fitting parameters for α_m, κ, and Ω_p, which we use in a second fit as initial values, this time also fitting for ω_a. This two-stage fitting procedure lets us narrow down better initial values, making the fit more robust across a wider range of measurements. From the covariance matrix of the second fit, we also extract error bars of the fit parameters.

§.§ Onset of bistability

Equation (<ref>) is a cubic polynomial in n̄, thus admitting either one or three real solutions. For sufficiently large Ω_p, Eq. (<ref>) admits three real solutions for Δ_p^- < Δ_p < Δ_p^+, where Δ_p^± are the so-called saddle-node bifurcation points. These two are the real solutions of dΔ_p/dn̄ = 0, and can thus be found from the differential form of Eq. (<ref>),

(3α_m^2 n̄^2 - 4Δ_p α_m n̄ + Δ_p^2 + κ^2/4) dn̄ = 2(α_m n̄^2 - Δ_p n̄) dΔ_p.

The solutions of dΔ_p/dn̄ = 0 are thus

Δ_p^± = 2α_m n̄ ± (1/2)√(4α_m^2 n̄^2 - κ^2),

which are real and distinct only for a sufficiently large probe amplitude Ω_p > Ω_p^c. The critical value Ω_p^c where the two bifurcation points emerge determines the onset of bistability, and it is given by having both dΔ_p/dn̄ = 0 and d^2Δ_p/dn̄^2 = 0. At this critical point we have

n̄^c = |κ/(√3 α_m)|.

For our values κ = 2π·1.2 kHz and α_m = 2K = 2(2π·14 kHz) we obtain n̄^c = 0.025, namely |a^c| = √(n̄^c) = 0.16.

§ ERROR BARS CALCULATIONS

Errors on qubit and phonon T_1 and T_2^∗ are one standard deviation (1 STD) errors on the fit parameters, obtained from the square root of the diagonal elements of the fit covariance matrix. Errors on V_min and V_max extracted from a 2D Gaussian fit of the Wigner functions are 95% confidence intervals. Errors on V_min and V_max extracted from maximum likelihood state reconstruction are obtained by changing the Hilbert space truncation of 15 by ±2, which we note to be the dominant error source (bigger than imperfect axes calibration or Wigner background fluctuations).
Errors on the squeezing decay time, squeezing rate, and effective decay time are 1 STD errors on the fit parameters, which take into account the errors on the individual fitted points. Errors on the Kerr nonlinearity are errors on the fit parameters. Errors on the quantum Fisher information are obtained by changing the Hilbert space truncation of 15 by ±2 in the maximum likelihood state reconstruction algorithm.
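As a final numerical illustration, the sketch below reproduces the bistability analysis of the driven Duffing oscillator from the previous section: it solves the steady-state cubic in n̄ with numpy and evaluates the critical occupation n̄^c for the quoted device values; the probe amplitude Ω_p is an assumed illustrative value, not a calibrated one.

```python
import numpy as np

twopi = 2 * np.pi
kappa = twopi * 1.2e3       # phonon linewidth (rad/s)
alpha_m = 2 * twopi * 14e3  # phonon anharmonicity alpha_m = 2K (rad/s)
Omega_p = twopi * 0.5e3     # assumed probe amplitude (rad/s)

# Critical occupation at the onset of bistability
n_c = abs(kappa / (np.sqrt(3) * alpha_m))
print(f"n_c = {n_c:.3f}, |a_c| = {np.sqrt(n_c):.2f}")  # ~0.025 and ~0.16

def nbar_solutions(Delta_p):
    """Real, non-negative roots of the steady-state cubic in nbar."""
    coeffs = [alpha_m**2, -2 * Delta_p * alpha_m,
              Delta_p**2 + kappa**2 / 4, -Omega_p**2]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)

for Dp in twopi * np.array([0.0, 2e3, 4e3]):  # probe detunings to scan
    sols = ", ".join(f"{n:.4f}" for n in nbar_solutions(Dp))
    print(f"Delta_p/2pi = {Dp / twopi / 1e3:4.1f} kHz -> nbar = [{sols}]")
```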
Aligning Large Language Models with Human Preferences through Representation Engineering

Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness. Existing methods for achieving this alignment often involve employing reinforcement learning from human feedback (RLHF) to fine-tune LLMs based on human labels assessing the relative quality of model responses. Nevertheless, RLHF is susceptible to instability during fine-tuning and presents challenges in implementation. Drawing inspiration from the emerging field of representation engineering (RepE), this study aims to identify relevant representations for high-level human preferences embedded in patterns of activity within an LLM, and to achieve precise control of model behavior by transforming its representations. This novel approach, denoted as Representation Alignment from Human Feedback (RAHF), proves to be effective, computationally efficient, and easy to implement. Extensive experiments demonstrate the efficacy of RAHF in not only capturing but also manipulating representations to align with a broad spectrum of human preferences or values, rather than being confined to a singular concept or function (e.g., honesty or bias). RAHF's versatility in accommodating diverse human preferences shows its potential for advancing LLM performance.

§ INTRODUCTION

While large language models (LLMs) learn broad-ranging world knowledge and a degree of reasoning proficiency, precise control over their behavior proves challenging due to the unsupervised nature of their pre-training <cit.>. For each query, instruction-tuned LLMs <cit.> can generate multiple responses that are both semantically and syntactically coherent under common sampling techniques. While this ability enables the models to provide the diversity that is essential for chat agents, some responses may contain harmful, unethical, socially biased, negative, or even illegal content <cit.>. Even among responses that remain within acceptable bounds, preferences may exist, leading to a desire to prioritize specific outputs over others.

Existing methods most often steer LLMs to align with human preferences using reinforcement learning (RL), with reinforcement learning from human feedback (RLHF) emerging as the most successful approach <cit.>. However, the underlying learning algorithms exhibit considerable complexity, sensitivity to hyperparameters, and instability during training, and they necessitate additional training of a reward model and value network, leading to substantial computational costs <cit.>.

To address the aforementioned challenges posed by RL-based methods, several computationally lightweight alternatives have been proposed to simplify the human preference matching process. Two prominent paradigms among these alternatives are contrastive learning <cit.> and Hindsight instruction relabeling (HIR) <cit.>. Contrastive learning-based methods optimize a language model policy by increasing the relative probability of preferred responses over dispreferred ones, while HIR methods transform human feedback into instructions by relabeling the original ones, indicating the relative quality of the provided responses.
A common characteristic shared by these two paradigms is their capability to align language models with human preferences through reward-free fine-tuning. However, reward-free fine-tuning is vulnerable to the presence of noisy data or incorrect labels in a training set comprising a collection of preference-annotated response pairs. Instances of dull sentences or very brief responses may appear repeatedly in such a training set, potentially introducing bias into the models. Yet excluding such instances from the training set makes it impossible for LLMs to glean the insights into human preferences expressed in them. In contrast, RL-based methods adopt a different strategy, wherein a reward function is first extracted from a dataset of response rankings, and then this reward function is applied to train an LLM, effectively mitigating the model's direct exposure to noisy data or incorrect labels within the dataset.

In this study, we seek a computationally lighter and reward-free algorithm that can effectively harness the human preferences expressed in datasets while safeguarding LLMs from the influence of noisy data. Inspired by recent advances in representation engineering <cit.>, we initially locate relevant representations and activity patterns associated with high-level human preferences within an LLM, and subsequently gain precise control over its behavior by manipulating its internal representations. In a neural architecture, network weights determine neural activity, neural activity determines the network's output, and the network's output determines the network's behavior. Instead of focusing on neurons and their connections, we view aligning LLMs with human feedback as an outcome of representational spaces, implemented by patterns of activity across populations of neurons. We first identify the differences in model activities between preferred and dispreferred stimuli, and then control the model's behavior by leveraging the identified differences in representations (see Figure <ref>). We introduce two methods for controlling representations and demonstrate the efficacy of these representation engineering (RepE) approaches in aligning LLMs with a broad spectrum of human preferences through a collection of response pairs.

To validate the effectiveness of our approach in aligning with human preferences, we conducted extensive comparative experiments on the generated results. Our method outperformed other RL-free approaches in human evaluations and automated metrics such as reward model scores and GPT-4 evaluations, and also achieved results comparable to RLHF. Notably, the underlying algorithms are simple to implement and straightforward to train.

§ RELATED WORK

Tuning large language models to elicit desired responses and behavior from their extensive knowledge and capabilities is essential in the development of chat agents, such as ChatGPT <cit.>, LLaMA <cit.>, and GPT-4 <cit.>, that are characterized by safety, performance, and controllability. Merely enlarging language models does not inherently enhance their ability to follow a user's intent. For example, LLMs may still generate outputs that are untruthful, toxic, or simply not helpful to the user. Existing human preference alignment methods can be broadly classified into three major categories: reinforcement learning <cit.>, contrastive learning <cit.>, and Hindsight instruction relabeling <cit.>.
Extensive research has been devoted to the exploration of RL from human feedback through ratings or rankings, spanning tasks from NL-to-SQL conversion <cit.>, machine translation <cit.>, task-oriented dialogue systems <cit.>, summarization <cit.>, and story-telling <cit.> to instruction-following <cit.>. Typically, these methods involve fitting a reward model to a dataset of human preferences, followed by optimizing an LLM policy to generate responses with high reward, using RL algorithms such as REINFORCE <cit.> or proximal policy optimization <cit.>. Despite the attractiveness of leveraging human preferences, which are easier to collect than expert demonstrations, training LLMs with RL poses significant practical challenges, attributable to the sensitivity of RL to hyperparameters and its inherent instability during training.

The solutions based on Hindsight instruction relabeling <cit.> and contrastive learning <cit.> have emerged as computationally efficient alternatives to RL-based methods without explicit reward modeling. However, these reward-free fine-tuning solutions are susceptible to noisy data or incorrect labels within a training set. They exhibit performance lags compared to their RL-tuned counterparts (see Section <ref>). Furthermore, the question of whether LLMs trained with such fine-tuning methods can generalize well to out-of-distribution queries remains unresolved when contrasted with models incorporating an explicit reward model. The RLHF method <cit.> offers a potential avenue for improvement by leveraging additional unlabeled examples through labeling LLM generations with the learned reward model.

To enhance the transparency and controllability of neural networks, <cit.> introduced representation engineering (RepE) as a methodology, drawing an analogy between understanding deep neural networks through representation tomography and studying brains via neuroimaging techniques. Their work demonstrated the efficacy of RepE in addressing diverse safety-related challenges such as truthfulness, honesty, and hallucination. This study falls in line with these recent research findings and extends their application to aligning LLMs with a wide spectrum of human preferences. Our study introduces two novel methods that first instruct LLMs on human preferences and then extract differences in model activities between preferred and dispreferred stimuli. These differences in activity patterns serve as a foundation for manipulating the model's behavior, leading to the generation of responses that better align with human preferences. Owing to their lightweight computational advantages, parameter-efficient fine-tuning techniques <cit.> are utilized to fit the disparity in activity patterns. In contrast to the approach adopted by <cit.>, which relies on unlabeled or self-generated stimuli limited to singular concepts or functions whose meaning the models already "know", our methods provide a more comprehensive alignment with diverse human preferences.
§ METHOD

We begin by instructing LLMs on human preferences with a set of preference-annotated response pairs. Second, we collect the activity patterns of LLMs when exposed to stimuli that are preferred or dispreferred. The differences in these patterns serve as the foundation for manipulating LLMs, enabling them to generate responses more closely aligned with human values. We introduce two novel methods for instructing LLMs on human preferences and extracting their activity patterns: one involving a single LLM (trained to discern the relative quality of responses) and the other employing dual LLMs ("a good guy and a bad guy"). Finally, we construct the final model by training a low-rank adapter <cit.> to fit the disparity in activity patterns.

§.§ Instructing LLMs on Human Preferences

<cit.> validated the effectiveness of activity patterns extracted from alignment fine-tuned models, such as LLaMA-2-chat, in capturing concepts like truthfulness and honesty. However, for non-aligned models, such as pre-trained large language models or LLMs subjected to simple fine-tuning, explicit indications about human preferences must be provided to elicit and capture discernible activity patterns arising from preference-laden stimuli. This enables the accumulation of diverse activities that can subsequently be employed to calibrate LLMs in alignment with human preferences. To attain this objective, we explored two different methods. The first involves instructing a single LLM using preference-annotated response pairs, employing contrasting instructions as in Hindsight relabeling <cit.>. The second fine-tunes one LLM (denoted the preferred model) on preferred responses and another LLM (denoted the dispreferred model) on dispreferred responses.

§.§.§ Instruction with a Single Model

Within the proposed framework, the single-LLM method focuses on fine-tuning a Single large language model through Contrastive Instruction Tuning (SCIT). The primary objective is to train the model to effectively discriminate between preferred and dispreferred responses, thereby optimizing its alignment with human preferences. In this approach, the training dataset is curated to include pairs of both preferred and dispreferred instructions, along with associated queries and their corresponding responses. Inspired by HIR <cit.>, for instructions linked to positive preferences, the goal is to heighten the probability of generating preferred responses while concurrently diminishing the probability of generating dispreferred responses. Conversely, for instructions associated with negative preferences, the objective is to elevate the probability of generating dispreferred responses and reduce the probability of generating preferred responses.

Formally, let D represent the training dataset, with q_i denoting the query, r_i the response, and p_i the instruction (positive or negative). Fine-tuning the LLM involves maximizing the following objective:

θ^* = argmax_θ ∑_(p_i,q_i,r_i)∈D log [ exp(P^+) / (exp(P^+) + exp(P^-)) ],

where P^+ = π(r_i | p_i, q_i; θ), P^- = π(r_i | p_i^*, q_i; θ), and p_i^* denotes the opposite instruction, ensuring a contrast between preferred and dispreferred cases. Throughout the fine-tuning process, the LLM learns to differentiate between preferred and dispreferred responses, unveiling distinct activity patterns linked to human preferences. The internal representations acquired through exposure to diverse stimulus types facilitate the subsequent analysis.
This discriminative training within a single model enables the discernment and prioritization of preferred responses, thereby achieving the overarching objective of aligning with a broad spectrum of human preferences. Following HIR, we introduce KL-divergence constraints in both preferred and dispreferred instruction tuning. This ensures that the model, after fine-tuning, does not deviate significantly from its original state, and it contributes to the stability of utilizing the fine-tuned representations to manipulate the original model.

§.§.§ Preference Instruction with Dual Models

In the Dual LLMs method, our aim is to train two LLMs with distinct tendencies: one model is inclined to generate preferred responses, while the other tends to produce dispreferred responses. To achieve this, we employ paired preference data to conduct supervised fine-tuning of the LLMs. Specifically, we use the preferred data from the preference pairs to train the "good" model and the dispreferred data to train the "bad" model.

Formally, consider the dataset D, which consists of input queries q and corresponding pairs of preferential responses: a preferred response r_h and a dispreferred response r_l. We divide D into a preferred dataset D_h = {q, r_h}_i and a dispreferred dataset D_l = {q, r_l}_i. Utilizing these data, we employ supervised maximum-likelihood learning to fine-tune the LLMs, thereby obtaining two models expressing preferences, denoted π_h and π_l respectively. The fine-tuning of the two LLMs maximizes the following objectives:

θ_h^* = argmax_θ ∑_(q_i,r_i)∈D_h log π(r_i | q_i; θ),
θ_l^* = argmax_θ ∑_(q_i,r_i)∈D_l log π(r_i | q_i; θ).

Through this training process, the "good" model and the "bad" model respectively learn the activity patterns associated with human-preferred and dispreferred responses.

§.§ Collecting Activity Patterns

In the process of extracting activity patterns, we utilize stimulus pairs <p^+, q, r> and <p^-, q, r> to elicit representations from the intermediate layers of the model. For SCIT, these pairs are fed separately into the same model to capture distinct activation patterns for preferred and dispreferred responses. In the case of the Dual LLMs method, the inputs are fed into their corresponding "good" and "bad" models, enabling the extraction of activation patterns independently from each model. For both methodologies, we calculate the discrepancy in the activation patterns of stimulus pairs, yielding a difference vector indicative of the direction of preferred activity patterns. Note that differing responses would render the discrepancies in representations at corresponding token positions uninterpretable. Consequently, our representation pairs are generated by varying the instructions over the same response. Specifically, for preferred and dispreferred instructions, we calculate the differences in hidden states for each token at corresponding positions in the response.

The difference vector provides insights into the factors that impact the alignment of the LLM with human preferences. Subsequently, we perturb the model's original representation by incorporating the difference vectors. This perturbation serves to guide the model's representation in the direction aligned with human preferences.
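To make the training signal and the difference-vector extraction concrete, the following PyTorch-style sketch shows one possible implementation of the SCIT contrastive objective of Eq. (<ref>) and of the layer-wise activity difference; the function and tensor names are our own illustrative choices rather than the authors' released code, and batching, masking, and the KL regularizer are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def response_logprob(model, prompt_ids, response_ids):
    """Sum of log p(response token | prompt, previous tokens)."""
    input_ids = torch.cat([prompt_ids, response_ids], dim=-1).unsqueeze(0)
    logits = model(input_ids).logits[0, :-1]          # next-token logits
    logp = F.log_softmax(logits, dim=-1)
    start = prompt_ids.size(-1) - 1                   # logits predicting response
    rows = logp[start:start + response_ids.size(-1)]
    return rows.gather(-1, response_ids.unsqueeze(-1)).sum()

def scit_loss(model, pos_prompt_ids, neg_prompt_ids, response_ids):
    """Contrastive instruction tuning: P+ is the response log-likelihood under
    the matching instruction, P- under the opposite one."""
    p_pos = response_logprob(model, pos_prompt_ids, response_ids)
    p_neg = response_logprob(model, neg_prompt_ids, response_ids)
    # -log[ exp(P+) / (exp(P+) + exp(P-)) ] == -logsigmoid(P+ - P-)
    return -F.logsigmoid(p_pos - p_neg)

def difference_vectors(model_pos, model_neg, input_pos, input_neg):
    """Layer-wise hidden-state differences v_l between preferred and dispreferred
    stimuli (same model twice for SCIT; two models for Dual LLMs)."""
    h_pos = model_pos(input_pos, output_hidden_states=True).hidden_states
    h_neg = model_neg(input_neg, output_hidden_states=True).hidden_states
    return [hp.detach() - hn.detach() for hp, hn in zip(h_pos, h_neg)]
```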
§.§ Constructing Final Models

In this step, we leverage the collected activation patterns to train a target model that is expected to align with human preferences. To achieve this, we draw inspiration from the approach of <cit.> by employing a specialized loss function and fine-tuning with Low-Rank Adapters (LoRA), enabling the efficient incorporation of activation patterns into the model. We consider the output of the LoRA matrix as a perturbation of the original hidden layer states, aligning it with the difference vector. Specifically, we employ the Mean Squared Error (MSE) loss as the objective function:

ℒ_Align = ‖ A_p,π_LoRA,L^t - (A_p,π_base,L^t + α v_l) ‖_2^2.

Here, α serves as a hyperparameter controlling the extent to which the difference vector v_l intervenes in the model integration process. A_p,π_LoRA,L^t and A_p,π_base,L^t represent the activity patterns of the target model equipped with and without LoRA, respectively. v_l is the extracted difference vector as outlined in Section <ref>. In the case of SCIT, v_l results from contrasting activity patterns induced by stimulus pairs input to the "discriminative" model, while for the Dual LLMs method, it is obtained by contrasting patterns resulting from stimulus pairs fed into the models playing "good guy" and "bad guy" respectively.

§ EXPERIMENT

Following <cit.>, we mainly conducted experiments on single-turn dialogue tasks. We extensively compared various RL-free alignment approaches and RLHF, evaluating the results through human evaluation and automated assessment. Additionally, we conducted comparative experiments with the representation engineering method proposed by <cit.>, serving as an ablation study to demonstrate the impact of our approach in capturing human preferences.

§.§ Experimental Setups

Task: For single-turn dialogue, we use Anthropic's Helpful and Harmless dataset[https://huggingface.co/datasets/Dahoas/full-hh-rlhf] <cit.>, which labels human preference over responses according to helpfulness and harmlessness. Each example in the dataset contains a pair of dialogues between a human and a language model, providing a preferred and a dispreferred response for each query. The training set was employed for the first step, instructing LLMs on human preferences, as well as for the construction of the final model. The test set was utilized to evaluate the performance of the methods. Specifically, prompts were used to instruct model generation, with the preferred response serving as the reference for comparisons, as detailed in Appendix <ref>.

Base Model: <cit.> and <cit.> utilized supervised fine-tuned models as initial models in their application of Proximal Policy Optimization (PPO). For a fair comparison, we fine-tuned the LLaMA2-7B model <cit.> on the Alpaca dataset <cit.>. We denote the resulting model after fine-tuning as Alpaca. In our experiments, all the models were initialized with Alpaca and further trained by the baseline methods and RAHF.

§.§ Baselines

To evaluate our proposed approach, we conduct extensive comparisons with existing alignment methods, including Reinforcement Learning from Human Feedback (RLHF) and other alternative methods for preference alignment. These experiments were specifically designed to assess the efficacy of our method in aligning with human preferences.

SFT (on preferred responses): This baseline involves fine-tuning the language model directly using the preferred responses from the dataset.
The model is trained to generate responses that align with the labeled preferred responses.

HIR: Hindsight Instruction Relabeling (HIR), proposed by <cit.>, converts feedback into instructions by relabeling the original ones and employs supervised training for enhanced alignment with human preferences. We use HIR as a baseline to evaluate the advantages of RAHF over supervised fine-tuning.

LORRA: In order to assess the impact of learning from human feedback on the model's ability to capture activity patterns, we conduct ablation experiments. Specifically, we compare our approach, which involves learning from both human feedback and activity patterns, against a baseline that relies solely on representation engineering <cit.>. This baseline omits the explicit preference learning step and evaluates the model's performance based on the effectiveness of representation alignment alone. This ablation study allows us to isolate and measure the contribution of learning human preferences to the overall effectiveness of our proposed method.

DPO: Direct Preference Optimization <cit.> directly optimizes a language model to adhere to human preferences without explicit reward modeling or reinforcement learning. It has been proven to be an efficient and straightforward alternative to RLHF.

RLHF-PPO: For the RLHF baseline, we follow the common practice, as outlined by <cit.>. We use human preference data to train a reward model and then employ Proximal Policy Optimization (PPO) to optimize the model obtained from supervised fine-tuning.

We performed comprehensive evaluations, employing both automated metrics and human assessments, on a subset of 200 examples randomly extracted from the test dataset to assess the efficacy of our methods and compare them with other competing approaches. In the case of the RLHF-PPO baseline, the training set was split into two parts, one dedicated to training the reward model and the other to running the PPO algorithm. For the other baselines, the entire training set was used for training. Further elaboration and details regarding the re-implementation of the baseline methods can be found in Appendix <ref>.

§.§ Automatic Evaluation

To carry out evaluation automatically, we employed a reward model as a proxy for human preferences. Furthermore, we utilize GPT-4 <cit.> to assess the quality of pairs of responses, as recent research has indicated that GPT-4 exhibits exceptional performance in evaluating responses from chat assistants, displaying a close correlation with human preferences.

§.§.§ Evaluation with Reward Model

We employed two reward models for evaluation: one is the reward model trained for the RLHF baseline, and the other is a publicly available reward model[https://huggingface.co/OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1] trained on the same dataset. In Table <ref>, we report the rewards achieved by the baselines and our methods. Even though RLHF-PPO achieves higher rewards, this accomplishment comes at the cost of training complexity and instability. Therefore, our attention shifts towards the examination of alternative, reward-free alignment methods. RAHF-DualLLMs surpasses all baseline models, with the exception of RLHF-PPO, across both reward models. This outcome suggests that representation alignment with human feedback maintains greater consistency with the reward models. Despite DPO achieving higher rewards at temperatures of 0.75 and 1, we favor greedy decoding and prioritize results obtained at lower temperatures.
This preference stems from concerns about the instability associated with high-temperature sampling.

§.§.§ Evaluation with GPT-4

When evaluating with GPT-4, we conducted comparisons from two perspectives. First, we took the preferred response from the dataset as the reference, assessing the win rates of all methods against it. The win rates, as depicted in Table <ref>, reveal that both of our proposed methods outperformed all the competitors except RLHF-PPO. Notably, RAHF-DualLLMs achieved results comparable to RLHF-PPO, exhibiting only a marginal performance decrease. Moreover, we utilize GPT-4 to evaluate the quality of the responses generated by our methods and the other competitors. This assessment is carried out via pairwise comparisons, and the outcome for the k-th baseline is denoted as w_k. In particular, when evaluating all results with our method as the reference, we apply the following formula to establish a standardized measure that facilitates the interpretation of the results:

w' = ((1-w)/w) / ( ∑_{k=1}^{4} (1-w_k)/w_k + 1 ),

where w represents a method's win rate against RAHF, and we set w = 0.5 when RAHF is compared to itself. This standardizes the comparison results to the same scale, allowing us to more effectively evaluate and compare the performance of all methods. As depicted in Figure <ref>, both of our proposed methods exhibit higher win rates than the RL-free methods. Despite SCIT's subpar performance on the proxy reward model, it outperforms the baselines when evaluated using GPT-4. In contrast to the reward model, we consider GPT-4's evaluation more reliable, relying on its robust performance and the reasoning it provides during evaluation. Furthermore, while RAHF-DualLLMs performs well, RAHF-SCIT has lower hardware requirements and costs, especially in terms of GPU utilization.

It has been reported that the assessment outcomes are susceptible to the order in which the compared responses are presented to GPT-4 during the evaluation process <cit.>. In order to mitigate the potential impact of this positional bias, we conducted two distinct experiments for each pairwise comparison, averaging the outcomes to establish the final results. The specific prompts used for evaluation with GPT-4 are presented in the appendix.

§.§ Human Evaluation

For the human evaluation, we assigned evaluators the task of comparing two randomly selected responses and providing judgments on their relative performance, categorizing them into three outcomes: win, lose, or tie. Table <ref> presents the comparative results of RAHF against the RL-free methods and RLHF in human evaluation. The results indicate that both of our proposed methods outperformed the other RL-free methods, which suggests that RAHF aligns better with human preferences. The human evaluation results align broadly with the GPT-4 evaluation results, with the only difference being that, compared to GPT-4, humans tend to provide more tie judgments.

§ CONCLUSION

In this study, we explored representation engineering, an approach to aligning large language models with human preferences that draws upon insights from cognitive neuroscience. We have introduced RAHF (Representation Alignment from Human Feedback), a straightforward paradigm designed for training language models based on human preferences, eliminating the need for reinforcement learning and reward models. RAHF effectively identifies disparities in the activity patterns of LLMs between preferred and dispreferred stimuli, and harnesses these distinctions to improve the controllability of LLMs.
We hope this study can inspire future research toward developing more controllable AI and designing more efficient and scalable algorithms that could substantially reduce the costs associated with training LLMs with human feedback through the lens of representation engineering.

§ PROMPTS

§.§ Preference Instructions

Two instructions used in this study for preferred and dispreferred responses:

Preferred instruction: You are a good, respectful and honest assistant. Your responses should be helpful, meaningful, truthful, harmless, safe, ethical, socially unbiased, positive, interesting, well-founded, agreeable, specific, reasonable, and role consistent. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. Please do not share false information under any circumstances.

Dispreferred instruction: You are a bad, disrespectful and dishonest assistant. Your responses might be useless, meaningless, untruthful, harmful, dangerous, unethical, socially biased, negative, boring, fabricated, offensive, vague, unreasonable, or role inconsistent. If a question does not make any sense, or is not factually coherent, still answer the question and share false information when necessary.

§.§ GPT-4 prompt for computing dialogue win rates

Referring to the prompt used in <cit.> to query GPT-4, we required GPT-4 to assess the responses generated by the models along three dimensions: helpfulness, truthfulness, and harmlessness. Table <ref> presents the specific prompt.

§ IMPLEMENTATION DETAILS

In this section, we present the experimental details and hyperparameters of the baselines we compare with and of our proposed methods. For all trained models, we set the maximum query length to 256 and the maximum sentence length to 512. We filter out samples in the dataset with queries exceeding 256 tokens and truncate sentences to the maximum sentence length. All baselines and our models were trained using the Alpaca fine-tuned model as the base model. During supervised training with Alpaca, we calculated the loss for both prompts and responses. Specifically, we performed full-parameter fine-tuning for three epochs with a learning rate of 2e-5.

RLHF-PPO: During the training of RLHF-PPO, we utilized Microsoft's DeepSpeed-Chat training framework, making adaptive modifications to the hyperparameters. We performed full-parameter fine-tuning for both the reward model training and PPO. Table <ref> presents the hyperparameters for reward model training, while Table <ref> presents the key parameters for PPO.

HIR: For the HIR baseline, we also conducted full-parameter fine-tuning. Table <ref> displays the hyperparameters used for HIR.

DPO: We employed the trl framework from Hugging Face to train the DPO model. We utilized the "good" model from RAHF-DualLLMs as the reference model for DPO, and employed LoRA for fine-tuning. The hyperparameters used in the DPO training are detailed in Table <ref>.

RAHF: For RAHF-SCIT, we used the same hyperparameters as HIR during the first-step training but omitted the supervised training loss. For RAHF-DualLLMs, the hyperparameters used are shown in Table <ref>. When constructing the final model, we followed the hyperparameter selection in RepE <cit.>. We manipulated layers (10, 20, 2) and set the perturbation coefficient α to 5. We used a batch size of 8 and trained with a learning rate of 3e-4.

§ HUMAN EVALUATION DETAILS

We recruited six volunteers for the assessment, with each evaluator comparing 200 dialogues.
Figure <ref> shows a screenshot of the rating interface used by all evaluators.

§ QUALITATIVE EXAMPLES

Table <ref> and Table <ref> present qualitative examples comparing RAHF with the baselines on dialogue tasks.
Weijun Chen, Heyuan Wang (corresponding author), Ye Tian, Shijie Guan, Ning Liu
School of Computer Science, Peking University, Beijing, China

Multivariate time-series (MTS) forecasting is a challenging task in many real-world non-stationary dynamic scenarios. In addition to intra-series temporal signals, the inter-series dependency also plays a crucial role in shaping future trends. How to enable the model's awareness of dependency information has raised substantial research attention. Previous approaches have either presupposed dependency constraints based on domain knowledge or imposed them using real-time feature similarity. However, MTS data often exhibit both enduring long-term static relationships and transient short-term interactions, which mutually influence their evolving states. It is necessary to recognize and incorporate these complementary dependencies for more accurate MTS prediction. The frequency information in time series reflects the evolutionary rules behind complex temporal dynamics, and different frequency components can be used to construct long-term and short-term interactive dependency structures between variables. To this end, we propose FCDNet, a concise yet effective framework for multivariate time-series forecasting. Specifically, FCDNet overcomes the above limitations by applying two lightweight dependency constructors to help extract long- and short-term dependency information adaptively from multi-level frequency patterns. As the number of input variables grows, the number of trainable parameters in FCDNet increases only linearly, which is conducive to the model's scalability and avoids over-fitting. Additionally, adopting a frequency-based perspective can effectively mitigate the influence of noise within MTS data, which helps capture more genuine dependencies. The experimental results on six real-world datasets from multiple fields show that FCDNet significantly exceeds strong baselines, with an average improvement of 6.82% on MAE, 4.98% on RMSE, and 4.91% on MAPE. In addition, the ability of FCDNet to jointly learn high-quality static and dynamic graph structures is demonstrated empirically. Our source codes are publicly available at https://github.com/onceCWJ/FCDNet.

Keywords: Frequency, Complementary Dependency Modeling, Multivariate Time-Series Forecasting

§ INTRODUCTION

Multivariate time-series forecasting is an important task in the data mining field and has been widely explored in many dynamic scenarios such as financial investment, transportation, and disease prevention <cit.>. For example, profitable investment strategies can be formulated by predicting future stock price movements, and the government can issue early warnings to save lives by analyzing epidemic trends. Traditional algorithms such as ARIMA <cit.> and state space models <cit.> are not suitable for many situations due to their strict requirements on the stationarity of a time-series. With the development of deep learning, some research has been devoted to designing advanced neural networks to extract the nonlinear and implicit temporal patterns shared by MTS, such as FC-LSTM <cit.> and LSTNet <cit.>. However, they treat each time-series in isolation while ignoring an important property of MTS: variables interact and co-evolve with each other.
For example, stocks of listed companies from the same sector tend to exhibit synchronous trends <cit.>, and records of traffic sensors in the same road network have strong correlations <cit.>. To perceive the relational dependencies between variables, recent studies <cit.> introduce graph neural networks (GNNs) <cit.> into MTS forecasting. However, these GNNs require pre-defined adjacency graphs, which are difficult to obtain in the absence of domain expert knowledge. Meanwhile, static graphs cannot reflect the short-term dynamic dependency in practical complex systems. In this regard, adaptive graph structure learning in a data-driven manner raises great attention. In most current models such as Graph WaveNet (GWN) <cit.>, Adaptive Graph Convolutional Recurrent Network (AGCRN) <cit.>, and MTGNN <cit.>, the graph structure is randomly initialized and tuned end-to-end during model training. Once the training of the model finishes, the dependency matrix is fixed. Therefore, these models still use a static modeling process for dynamic input MTS. However, predicting with static graphs causes significant bias because the correlations between variables are time-varying in the real world. Another paradigm to tackle such a problem is dynamic graph learning <cit.>. However, despite achieving appreciable improvements, they often depend on highly complex architectures to capture subtle trends and interdependent information contained in MTS, which incurs significant time costs and quadratic memory consumption, thus limiting the model's scalability.Existing methods have made significant strides in modeling dependencies within MTS data. However, they often overlook the fact that MTS possess both long-term stable dependencies and short-term, immediate interactions. In various domains, such as the securities market or traffic scenarios, these time series exhibit long-term patterns, like sector rotation in stocks or constant distance relationships between road points. Yet, they also experience short-term variations caused by events like emergencies in the stock market or temporary changes in traffic flow during holidays and peak hours. To enhance the accuracy of predictions in MTS data, models must strike a balance between two complementary aspects: accurately learning long-term dependencies and sensibly capturing short-term changes in relationships. This balance allows the model to effectively capture both stable, long-term patterns and dynamic, short-term fluctuations, leading to more precise and robust predictions.Besides, real-world MTS is usually collected from hybrid dynamic systems that blend various signal frequencies and have a low signal-to-noise ratio. Therefore, time-frequency mining is a fundamentally effective solution to help reduce the impact of noise in MTS and discover more robust volatility patterns. For example, in stock price prediction, SFM <cit.> proposes a state frequency memory recurrent network to capture the multi-frequency trading patterns from past market data to make regression over time. In long-term forecasting <cit.>, time-frequency analysis is applied for robust multiscale time-series feature extraction. FEDFormer <cit.> develops a frequency-enhanced transformer to reduce input and output distribution differences and enhance robustness to noise. In terms of model lightweight, FiLM <cit.> proposes a frequency-improved Legendre memory model for preserving historical information in neural networks while avoiding overfitting to noise presented in history. 
In general, these methods illustrate the role of frequency in reflecting the periodicity and trend of time series in the temporal domain, but how to combine the frequency method with improving the modeling of dependency structure among multivariate is still a challenge.Time series data typically exhibit multiple fluctuation components, encompassing high-frequency elements that undergo rapid fluctuations within short time spans, and low-frequency components that change gradually over extended periods. Decomposing time series enables us to discern these fluctuations at different scales, offering a deeper insight into the underlying data structure. The low-frequency component often signifies the long-term trend embedded within the time series. By extracting these low-frequency components, we gain a clearer perspective on trend variations in the data, aiding in the prediction of future long-term trends. Utilizing these low-frequency components facilitates the understanding of stable, long-term correlations between variables. On the other hand, high-frequency components typically encapsulate random fluctuations and noise present in time series data, along with more intricate details. Leveraging the stable correlations derived from low-frequency components allows for adaptive noise reduction and comprehensive exploration of detailed information within high-frequency components. This approach ensures effective and reliable complementary relational analysis.To this end, in this paper, we propose a concise yet practical dependency modeling framework for MTS forecasting. Unlike previous methods, our model focuses on capturing multi-level frequency temporal information to guide the automatic construction of dependency graphs. In our paper, we show the powerful ability to combine time-frequency mining with complementary dependency modeling for MTS forecasting. The main contributions of our work are as follows:* To combine time-frequency mining with structure modeling, we propose a model called FCDNet, which follows a novel route with the aim of learning the complementary effects of long-term dependencies and short-term interactions from multi-level frequency information. * Our model develops the Long-Term Frequency Extractor (LTFE) with wavelet transform and the Short-Term Frequency Extractor (STFE) with Fast Fourier Transform to emphasize the span of MTS when extracting multi-level frequency information. LTFE learns stable static correlations from long-term historical MTS, and STFE learns dynamic evolving correlations from short-term input MTS. LTFE and STFE are complementary roles that help capture a more comprehensive variable interaction pattern. * We conduct extensive experiments on six real-world multivariate time-series datasets, including Ashare, Solar-Energy, PEMS03, PEMS07, PEMS04, and PEMS08. Comparison results with many strong baselines demonstrate the effectiveness and efficiency of our model. § RELATED WORKThe joint modeling of inter-series correlations and intra-series dynamics are critical for MTS forecasting, which means that a variable's future information depends not only on its historical information but also on the historical information of other variables. Spatial-temporal graph neural networks have successfully utilized this characteristic and achieved success in MTS forecasting. The input of spatial-temporal graph networks are usually an MTS and an additionally given adjacency matrix. They aim to predict future values or labels of multivariate time-series. 
DCRNN <cit.>, STGCN <cit.>, and GWN <cit.> are three representative works of early research on spatial-temporal modeling. Following these three works, a series of spatial-temporal models were proposed and achieved success at that time <cit.>. Despite the progress made by the above models, they heavily rely on heuristic rules or a prior-fixed graph structure, which can only store limited correlation information due to the lack of evolving time-series properties, and this inevitably prevents the models from being applied flexibly. To get rid of the dependence on prior knowledge, recent works tend to automatically model correlations in a data-driven way. For example, MTGNN <cit.> extends GWN by proposing a more delicate structure learning module. AGCRN <cit.> proposes two adaptive modules for enhancing graph convolutional networks to infer dynamic spatial dependencies among different traffic series. Another line of research deems it necessary to imitate the dynamics of inter-series relationships more intensively. These works propose a variety of architectures <cit.> to formulate a fine topology of dynamic graphs at each time step. DSTAGNN <cit.> proposes a dynamic spatial-temporal aware graph based on a data-driven strategy to replace the pre-defined static graph. ST-WA <cit.> turns spatio-temporal agnostic models into spatio-temporal aware models by generating location-specific and time-varying model parameters. Though bringing improvement, these methods are not cost-effective since the design of the model architectures is extremely sophisticated. This causes a bottleneck when they are applied in scenarios that demand scalability.

However, the above methods focus too much on the dynamic interaction between variables and ignore the complementary role of stable correlations between variables. It is non-trivial to simultaneously learn high-quality long-range stable correlations and accurate short-term interactions along with their complementary roles. In addition, the above models have not focused on the particularity of structure recognition and modeling in MTS, which differs from general graph modeling problems. ST-Norm <cit.> shares a new insight that spatial-temporal indistinguishability is the key difficulty of MTS forecasting and emphasizes that the inherent frequency information is vital to solving the problem. The existing frequency-based methods, such as FEDformer <cit.> and FiLM <cit.>, are all aimed at the temporal evolution of time series, whereas in our paper we combine time-frequency information with dependency modeling for MTS forecasting.

§ METHODOLOGY

§.§ Problem Formulation

Let x_t ∈ℝ^N×D denote the value of a multivariate series with N variables and feature dimension D at time step t, where x_t[i] ∈ℝ^D denotes the i-th variable at time step t. A ∈ℝ^N×N denotes the degree of correlation between variables, with A_i,j ∈ [0,1] (i ∈ [1,N], j ∈ [1,N]). Given the multivariate observation sequence of M historical time steps X = {x_t_1, x_t_2, ⋯, x_t_M}, our goal is to predict the future E-step numerical sequence Y = {x_t_M+1, x_t_M+2, ⋯, x_t_M+E}.
We aim to learn a forecasting model that achieves accurate prediction. Specifically, let X_train and X_valid denote the training and validation sets of the MTS, respectively, let A ∈ℝ_+^N× N be the adjacency matrix of the graph structure representing the proximity between the N time-series, let ω denote the parameters of the model, and let ℒ and ℱ denote the loss functions used during training and validation, respectively. The use of graph structure learning for MTS forecasting then naturally has a bi-level optimization architecture:

min_A ℱ(ω_A, A, X_valid), s.t. ω_A ∈argmin_ω ℒ(ω, A, X_train).

Intuitively, the hierarchical relationship results from the fact that the mathematical program related to the parameters of structure modeling is part of the constraints of the temporal forecasting module. Being generically non-convex and non-differentiable, bi-level programs are intrinsically hard <cit.>. Inspired by <cit.>, we approximate the bi-level program by a uni-level program:

min_ω ℒ(ω, A(ω), X_train).

This approach offers the freedom to design the parameterization and allows better control of the number of parameters compared to an inner optimization of ω_A. The overall forecasting function can be written as:

ℱ: [𝐗_t_1:t_M, A] →𝐘_t_M+1:t_M+E.

The overall framework of FCDNet is presented in Figure <ref>.

§.§ Frequency-Guided Dependency Modeling

§.§.§ Long-Term Frequency Extractor (LTFE)

Due to the strong volatility of MTS, it is difficult to capture stable and genuine correlations adaptively from historical MTS without prior domain knowledge. However, the low-frequency part of the time-series reflects the general characteristics of the variables and constitutes relative invariants implicit in the complex dynamics. We therefore capture such low-frequency information from long-term historical time-series to model static and stable inter-series structures. Besides, long-term historical MTS are beneficial for resisting noise, which facilitates obtaining more robust and accurate dependencies. To this end, FCDNet adopts the training set of the MTS data as the long-term historical MTS. Moreover, cross-time changes in the values of different variables better reflect the relationships between variables. In the transportation domain, for example, the numerical changes of the sensors over time offer insights into how traffic dynamics propagate along the network. Therefore, we first apply a difference operation to the training MTS in order to reveal more moderate correlations:

𝒟iff(𝐗_1,⋯,𝐗_T)={0, 𝐗_2-𝐗_1,⋯,𝐗_T-𝐗_T-1}≜{𝐗̂_1,𝐗̂_2, ...,𝐗̂_T}=𝐗̂∈ℝ^T_train× N× D.

Then, in light of the periodicity of the MTS, we set a hyper-parameter period P to segment 𝐗̂ into S = ⌊ T_train/P ⌋ segments, each containing a time-series 𝐗̂_i ∈ℝ^P × N × D, i = 1,2,...,S. From a long-term perspective, the patterns in different time intervals can be captured in parallel by splitting the multivariate time-series into segments, reducing the number of parameters required for subsequent operations. In addition, it enables the model to better capture semantic information within and between periods. After obtaining the time-series segments, we concatenate them to obtain a four-dimensional tensor 𝒪:

𝒪=[𝐗̂_1 || 𝐗̂_2 || ... || 𝐗̂_S]∈ℝ^S× P × N × D.

Moreover, real-world MTS have a low signal-to-noise ratio. Ignoring this noise in joint graph learning and forecasting fails to recover the genuine dependency graph and leads to over-fitting in the forecasting.
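For concreteness, the differencing and segmentation just described can be sketched as follows. This is a minimal numpy illustration of our own; the array shapes and the period value are placeholders rather than settings prescribed by the paper.

import numpy as np

def difference_and_segment(X, P):
    """X: long-term MTS of shape (T_train, N, D); returns O of shape (S, P, N, D)."""
    # Prepend a zero so that the differenced series keeps the original length.
    X_hat = np.concatenate([np.zeros_like(X[:1]), np.diff(X, axis=0)], axis=0)
    S = X_hat.shape[0] // P                      # S = floor(T_train / P) segments
    return X_hat[:S * P].reshape(S, P, *X.shape[1:])

O = difference_and_segment(np.random.randn(2880, 358, 1), P=288)
print(O.shape)  # (10, 288, 358, 1)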
By performing the difference and segmentation operations mentioned above, noise points within a period P can be more easily identified. To this end, we apply a Daubechies wavelet decomposition <cit.> along the temporal dimension of 𝒪, which extracts multilevel time-frequency features by decomposing the temporal dimension of 𝒪 into low- and high-frequency sub-series level by level <cit.>. Let L-1 be the number of decomposition levels, Dec the wavelet decomposition, Rec the wavelet reconstruction, and Db the Daubechies wavelet; we then obtain the coefficients of each level:

coeffs[i]=Dec(𝒪[s,:,n,d], wavelet=Db, level=i), i∈[1,L], s∈[1,S], n∈[1,N], d∈[1,D].

By controlling the coefficients, we filter out the high-frequency part of the time-series and keep the low-frequency part:

C[i]= λ_i ⊙ coeffs[i], i∈[1,L].

Here, ⊙ denotes an element-wise multiplication operator, and λ_i, i∈[1,L] are the control factors. High-frequency components can be effectively filtered out by taking small values for the corresponding λ_i. Finally, the reorganized low-frequency information of the original tensor 𝒪 can be reconstructed from C[i]:

𝒵[s,:,n,d,i]=Rec(C[i], wavelet=Db, level=i), 𝒵∈ℝ^S× P × N × D × L.

We combine the frequency dimension and the feature dimension to obtain a four-dimensional tensor 𝒵̂∈ℝ^S× P × N × D · L. To further emphasize the uniqueness of different time segments and the particular dynamic characteristics of the variables, we apply ST-Norm across the dimensions S and P to obtain two four-dimensional tensors:

Ẑ^(1)=𝐒𝐓𝐍(𝒵̂) ∈ℝ^S× P × N × D · L,
Ẑ^(2)=𝐒𝐓𝐍(𝒵̂^⊤) ∈ℝ^P× S × N × D · L.

For the final parameterization of the low-frequency structure A_LF, we use a feature extractor to yield a feature representation for A_LF. We opt for a concise architecture to implement this. First of all, to make the dimensions match the subsequent 1D convolutional operations and to perceive the historical MTS from two different temporal perspectives, we reshape Ẑ^(1) into 𝒬^(1)∈ℝ^N × S · D · L × P and Ẑ^(2) into 𝒬^(2)∈ℝ^N × P · D · L × S. Then, we use a 1D convolution Conv along the transformed dimension and three fully connected layers to transform the three-dimensional tensors 𝒬^(i) into two graphs:

A^(i) = χ(FC_1^(i)(δ(FC_2^(i)(δ(FC_3^(i)(δ(Conv^(i)(𝒬^(i))))))))), A^(i)∈ℝ^N× N, (i=1,2)

where δ represents the ReLU activation function, χ represents the smooth sparse unit <cit.> that normalizes the values of the graph matrix so that A^(i)_ij∈[0,1], and FC_j^(i) (j=1,2,3, i=1,2) are six fully connected layers. A^(1) and A^(2) harvest dependency information from different time perspectives. The two graphs are fused by a weighted sum to obtain the low-frequency graph A_LF, where β is a learnable parameter:

A_LF = β A^(1) + (1-β) A^(2).

§.§.§ Short-Term Frequency Extractor (STFE)

LTFE extracts the low-frequency information implicit in long-term historical time-series to construct a long-term stable graph structure. In addition to these genuine long-term stable correlations, the correlations between variables show different dynamics over different short time spans. These rapidly changing correlations are essential for uncovering the evolution of dynamic systems and cannot be neglected in dependency modeling. For example, in the transportation domain, in addition to the inherent road network topology, different sensors will have different correlations in different periods.
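Before turning to the details of STFE, we note that the wavelet filtering at the core of LTFE can be sketched concisely. The snippet below is a minimal illustration using the PyWavelets package; the wavelet family, the decomposition level, and the damping value stand in for Db, L, and the control factors λ_i, and are illustrative assumptions rather than the exact settings of our implementation.

import numpy as np
import pywt

def low_frequency_part(signal, wavelet="db4", level=4, damping=0.1):
    """Damp the detail (high-frequency) coefficients and reconstruct the signal."""
    # wavedec returns [cA_level, cD_level, ..., cD_1]; cA is the coarse approximation.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    filtered = [coeffs[0]] + [damping * c for c in coeffs[1:]]
    return pywt.waverec(filtered, wavelet)

trend = low_frequency_part(np.random.randn(288))  # one period of a single series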
As a complement to the Long-Term Frequency Extractor (LTFE), we employ the Short-Term Frequency Extractor (STFE) to generate a representation of the high-frequency graph, denoted A_HF, based on the dynamic input MTS. For each input 𝐗_in∈ℝ^B× T_in× N × D, where B denotes the batch size and T_in denotes the length of the input sequence, we first reshape 𝐗_in into 𝐗̂_in∈ℝ^N × T_in× B · D for the subsequent transform operations. Flattening the samples along the batch and feature dimensions before the Fast Fourier Transform (FFT) is essential because STFE uses a single, consistent graph structure when processing a batch; generating a distinct graph structure for each sample in the batch would significantly escalate the computational demands of the subsequent calculations. By flattening along the batch and feature dimensions, STFE therefore optimizes computational efficiency and ensures a streamlined and effective analysis. Specifically, STFE first utilizes the FFT to project 𝐗̂_in into the frequency domain:

𝐗̂_e=FFT(𝐗̂_in)∈ℂ^N × T_in× B · D.

The FFT helps STFE learn representations of the input time-series on the trigonometric basis in the frequency domain, which captures the repeated patterns in periodic data and the auto-correlation features among different timestamps. Specifically, the output of the FFT has a real part (𝐗̂_e^r) and an imaginary part (𝐗̂_e^i), which are processed in parallel by linear operators with different parameters. The operations can be formulated as:

V^*_u= (𝐗̂_e^*)(W_u)∈ℝ^N × T_in× F, *∈{r,i}

where W_u∈ℝ^B· D × F is a learnable weight matrix for dimension reduction and for capturing short-term frequency information across the batch and feature dimensions, and F is a hyperparameter denoting the transformed dimension. It is well known that complex numbers can be uniquely represented by their amplitude and phase. To better encode the high-frequency graph A_HF, we represent the amplitude A_m and the phase S as:

A_m = √((V_u^r)^2+(V_u^i)^2)∈ℝ^N× T_in× F,
S = arctan(V_u^r/V_u^i) ∈ [-π/2,π/2]^N× T_in× F,

where arctan(·) is the element-wise inverse tangent function. Later, we apply the IFFT to get the output:

𝐆=IFFT(V^r_u+iV^i_u)∈ℝ^T_in× N × F.

The final A_HF is obtained by applying fully connected layers to fuse the embedding 𝐆, A_m, and S:

A_HF=χ(W_T(𝐆W_G+A_m W_m + S W_S))∈ℝ^N × N,

where W_G,W_m,W_S∈ℝ^F× N and W_T∈ℝ^T_in are learnable weight matrices for dimension transformation and feature extraction.

§.§.§ Graph Sparsity Mechanism

Controlling graph sparsity can improve the graph learning ability of the model <cit.>. However, the current mainstream methods directly conduct numerical truncation (e.g., thresholding or Top-K operations), blocking the gradient updates that are most important for end-to-end learning. The SSU addresses this issue by applying an element-wise function φ(x) <cit.> on the generated structure and redefining the gradient at both ends to accelerate convergence:

f(x)= { e^-1/x (x>0); 0 (x ≤ 0) },
φ(x)= α f(x) / (α f(x)+f(1-x)), (α∈ℝ_+),
∇φ(x)=g, (φ(x)≤ϵ or φ(x)≥ 1-ϵ),

where the parameter α is the sparsification coefficient, ϵ is a small constant, and g is the redefined gradient, which retains the sign of the original gradient. The SSU assumes that the redefined gradient becomes g at both ends to accelerate convergence. However, with the same redefined gradient at both ends, this method can only make the generated structure discrete rather than sparse.
Therefore, in our SSU, we redefine the gradient magnitudes at the two ends differently, so that the model has a greater probability of generating sparse matrices. The mathematical formulation of the SSU is:

∇φ(x) = g_1, (φ(x)≥ 1-ϵ)
∇φ(x) = g_2, (φ(x)≤ϵ)
g_2 = ξ g_1, (ξ>1).

§.§ Downstream Temporal Forecasting

In addition to the representation learning of graph structures, designing an appropriate method to integrate the dependency information into downstream temporal forecasting also plays a critical role in describing dynamic systems. Specifically, we couple different downstream forecasting units to the extracted A_LF and A_HF structures, according to the characteristics of low- and high-frequency information.

§.§.§ Frequency Adaptation Graph Convolutional Gated Recurrent Unit

Traditional graph neural networks such as GCN <cit.> and APPNP <cit.> are low-pass filters, which mainly retain the commonality of variables and neglect the delicate learning of differences. To address this, we adopt the graph filters of FAGCN <cit.> to adaptively aggregate low-frequency and high-frequency graph signals during time-series message passing. FAGCN designs a low-pass filter ℱ_L and a high-pass filter ℱ_H to decompose the low-frequency and high-frequency parts of the graph adjacency matrix:

ℱ_L = ε I + D^-1/2 A_LF D^-1/2,
ℱ_H = ε I - D^-1/2 A_LF D^-1/2,
A_LF = γℱ_L + (1-γ) ℱ_H,

where γ is a learnable parameter. Inspired by <cit.>, we combine FAGCN with Gated Recurrent Units (GRU) <cit.> and propose the Frequency Adaptation Graph Convolutional Gated Recurrent Unit (FAGRU). Specifically, we use the two-step graph convolution ⋆, which is defined as:

W^Q_𝒢⋆𝐗_in=∑_k=0^2 (A_LF)^k 𝐗_in W^Q_k,

where W^Q_k (k=0,1,2) are trainable parameters. Then, we combine the graph convolution with the GRU to model the temporal dependency:

R_t = σ(W_𝒢^R⋆(𝐗_t||H_t-1)+b_R),
C_t = tanh(W_𝒢^C⋆(𝐗_t||R_t ⊙ H_t-1)+b_C),
U_t = σ(W_𝒢^U⋆(𝐗_t||H_t-1)+b_U),
H_t = U_t ⊙ H_t-1+(1-U_t) ⊙ C_t,

where 𝐗_t and H_t denote the input and output at time t, and R_t and U_t are the reset gate and update gate at time t, responsible for deciding which irrelevant information to forget and which part of the past state to carry forward, respectively. || is concatenation along the feature dimension, ⊙ represents the element-wise product, and b_R, b_C, b_U are model parameters. The time-series output of FAGRU is denoted as 𝐗_out^(1):

𝐗_out^(1) = [H_1||H_2||...||H_T_out].

§.§.§ Frequency Adaptation Graph WaveNet

Inspired by <cit.>, to capture the volatility patterns and the underlying dependency implicit in short time spans, we augment fine-grained convolutions with A_HF for joint modeling. Unlike standard 1D convolutions, the dilated causal convolution skips uniform intervals when sliding over an input sequence. The Temporal Convolution Network (TCN) flexibly utilizes this property and stacks convolution layers with different dilation rates; therefore, the receptive field of a TCN can increase exponentially with a linear growth in parameters. The Gated TCN consists of two TCNs and applies gating mechanisms to control the information flow. Specifically, FAGWN receives the linear transform FC_in of the input 𝐗_in and forms an information bottleneck as:

𝐗̂_in=tanh(Θ_1 ∗ FC_in(𝐗_in)) ⊙ σ(Θ_2 ∗ FC_in(𝐗_in)),

where σ denotes the sigmoid function, ∗ denotes the dilated convolution operation, and Θ_1, Θ_2 denote the learnable parameters of the convolution filters.
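A minimal PyTorch sketch of this gated bottleneck follows; the channel count, kernel size, and dilation are hypothetical choices for illustration, not the exact configuration of FAGWN.

import torch
import torch.nn as nn

class GatedTCN(nn.Module):
    """Gated dilated convolution: tanh(filter branch) * sigmoid(gate branch)."""
    def __init__(self, channels=32, kernel_size=2, dilation=1):
        super().__init__()
        self.filter_conv = nn.Conv2d(channels, channels, (1, kernel_size),
                                     dilation=(1, dilation))
        self.gate_conv = nn.Conv2d(channels, channels, (1, kernel_size),
                                   dilation=(1, dilation))

    def forward(self, x):  # x: (batch, channels, num_nodes, time)
        return torch.tanh(self.filter_conv(x)) * torch.sigmoid(self.gate_conv(x))

out = GatedTCN()(torch.randn(4, 32, 358, 12))  # time dim shrinks by (kernel_size-1)*dilation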
Later, we employ the graph filters from FAGCN to encode different graph signals from the high-frequency graph structure A_HF and integrate them into the time-series representation 𝐙:

𝒜_L = ε I + D^-1/2 A_HF D^-1/2,
𝒜_H = ε I - D^-1/2 A_HF D^-1/2,
𝐙 = ∑_k=0^K_p (𝒜_L^k 𝐗̂_in W_k1 + 𝒜_H^k 𝐗̂_in W_k2)+FC_in(𝐗_in),

where 𝒜_ν^k, ν∈{L,H} represent the different powers of the graph matrices, and W_k1, W_k2 (k=0,...,K_p) are learnable weight matrices. We stack multiple layers of Gated TCN and FAGCN and perform the computation iteratively, where the input of the next layer is the output of the current layer. The compact representation 𝐗_out^(2) is obtained by a linear transform of the multiple outputs:

𝐗_out^(2) = δ(FC_1^(3)(δ(FC_2^(3)[𝐙_1||𝐙_2||...||𝐙_K]))),

where FC_1^(3) and FC_2^(3) are two fully connected layers and K is the number of stacked layers. The final output of FCDNet is:

𝐗_out = (𝐗_out^(1)+η𝐗_out^(2))/(1+η) ∈ℝ^B× T_out× N × D_out,

where η≥ 0 is a learnable parameter, T_out denotes the output length, and D_out denotes the output feature dimension.

§ EXPERIMENTS

We verify FCDNet on six MTS datasets: PEMS03, PEMS04, PEMS07, and PEMS08, which were collected by the Caltrans Performance Measurement System (PEMS) <cit.>; Solar-Energy, which contains the solar power output of 137 PV plants in Alabama in 2007; and Ashare, which contains the price indicators of stocks in the Shanghai and Shenzhen exchange markets from January 2019 to November 2022, with each stock's closing price as the forecasting target. Detailed data statistics are provided in Table <ref>. Z-score normalization is applied to the inputs. To be consistent with most modern methods, we split the PEMS datasets into training, validation, and test sets in a 6:2:2 ratio. For the Solar-Energy and Ashare datasets, the split ratio is 7:1:2. All of the methods are evaluated with three metrics: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
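For reference, the three metrics can be computed as in the following numpy sketch; the small eps guarding the division in MAPE is our addition.

import numpy as np

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mape(y, y_hat, eps=1e-8):
    return np.mean(np.abs((y - y_hat) / (np.abs(y) + eps))) * 100.0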
Due to the non-uniform distribution of the solar power output of PV plants in the spatial and temporal domains, there are many zeros in Solar-Energy; hence, we adopt only MAE and RMSE for this dataset. We use the Ashare dataset to demonstrate the significant shortcomings of current forecasting models in dealing with low signal-to-noise-ratio MTS.

§.§ Baselines

We compare FCDNet with the following models:
(1) VAR, vector auto-regression <cit.>;
(2) FC-LSTM <cit.>, a recurrent neural network with fully connected LSTM hidden units;
(3) DCRNN <cit.>, which integrates graph convolution into a gated recurrent unit;
(4) GWN <cit.>, which integrates diffusion graph convolutions with 1D dilated convolutions;
(5) MTGNN <cit.>, which uses external features to automatically generate graphs coupled with dilated inception layers for MTS forecasting;
(6) ASTGCN <cit.>, which introduces a spatial-temporal attention mechanism into the model;
(7) STG-NCDE <cit.>, which combines two neural controlled differential equations for spatial-temporal processing;
(8) STGODE <cit.>, which applies a continuous graph neural network with ordinary differential equations to multivariate time-series forecasting;
(9) Z-GCNETs <cit.>, which introduces the concept of zigzag persistence into a time-aware graph convolutional network for time-series prediction;
(10) AGCRN <cit.>, which exploits a learnable embedding factorization of nodes in graph convolution;
(11) DSTAGNN <cit.>, which represents the dynamic spatial relevance among nodes with an improved multi-head attention mechanism and acquires the range of dynamic temporal dependency via multi-scale gated convolution;
(12) FEDformer <cit.>, which combines the transformer with the seasonal-trend decomposition method;
(13) FiLM <cit.>, which applies Legendre polynomial projections to approximate historical information and uses Fourier projection to remove noise;
(14) ST-WA <cit.>, which turns spatial-temporal agnostic models into spatial-temporal aware models.

§.§ Experimental Setups

We implement our experiments in PyTorch 1.10.1 on one NVIDIA GeForce RTX 3090 GPU and tune all hyperparameters of FCDNet on the validation data by grid search. The period P is searched within {10, 20, 30, 40} for Ashare and {180, 216, 252, 288, 324} for the other datasets. The decomposition level L is searched within {2, 3, 4, 5}, the initial learning rate is searched within {1e^-3, 2e^-3, 3e^-3, 4e^-3, 5e^-3}, and the feature size F in (<ref>) is searched within {5, 10, 15, 20}. In the final implementation, the period P is set to 20 for Ashare and 288 for the other datasets (i.e., segmentation in days). The hyperparameter L is set to 5, which is equivalent to using a four-level Daubechies decomposition. We follow the design in <cit.> and set ε to 0.3, K_p to 2, and K to 4. The β in (<ref>), η in (<ref>), and γ in (<ref>) are initialized as 0.9, 0.1, and 0.8, respectively. The mean absolute error (MAE) is taken as the training objective of FCDNet. The F in (<ref>) is set to 5 for Ashare and 10 for the other datasets. Missing values are excluded from both training and testing. The initial learning rate is 3e^-3 with a decay rate of 0.1 per 10 epochs, and the minimum learning rate is 3e^-5. The batch size is set to 4 for Ashare and 64 for the other datasets. We adopt the Adam optimizer, and the number of training epochs is set to 250.
Note that some baselines require a predefined graph structure between the forecast variables, which is not available for Solar-Energy and Ashare. In this regard, we construct a kNN graph (k∈{5, 10, 20}) as the predefined graph structure, following previous methods <cit.>. We repeat all experiments five times and report the average results.

§.§ Experimental Results and Analysis

Table <ref> shows the average results of FCDNet and the fourteen baseline methods, from which we have several findings. (1) Our model significantly outperforms the other baselines across all metrics on the six datasets, illustrating its effectiveness. The improvement of FCDNet over the baseline models lies in the mining and integration of long- and short-term time-frequency signals with complementary dependency modeling. (2) On the Ashare dataset, many complex models perform poorly, while simple models such as FC-LSTM and GWN perform well. This is probably because the stock time-series shows more substantial volatility and a lower signal-to-noise ratio; the inability to distinguish robust features from noisy data can easily lead to serious over-fitting of complex models. The performance of FiLM and FEDformer on the Ashare dataset is also impressive; both focus on extracting time-series fluctuation patterns in the frequency domain. (3) To further compare with two representative frequency-based models, we also conducted visualization experiments on the PEMS03 test set, as shown in Figure <ref>. FiLM performs poorly on the traffic datasets, possibly due to its weak ability to capture short-term temporal information. Our model also demonstrates a stronger capability to capture multivariate temporal dynamics. (4) Some models, such as STGODE and ST-WA, are highly dependent on the prior graph and cannot perform well in forecasting tasks without prior domain correlation knowledge. This demonstrates the importance of investigating adaptive structure modeling. (5) Different from previous models, FCDNet extracts correlations from multi-frequency information to discover more essential and complementary static-dynamic relationships between MTS. It can also be seen from Table 3 that the number of model parameters of FCDNet increases linearly with the number of forecast variables, with a small factor, exhibiting better scalability and robustness.

§.§ Ablation Study

To further verify the effectiveness of the LTFE and STFE proposed in FCDNet, we replace LTFE or STFE with a randomly initialized matrix using a low-rank approximation and optimize it during training in an end-to-end manner. It is worth noting that many models, such as MTGNN, GWN, and AGCRN, take this form in structural modeling. The ablation results are shown in Figure <ref> and Table <ref>. LTFE assists the model in grasping the low-frequency information implicit in the long historical MTS, and STFE helps the model capture the rapidly changing information in the new input MTS. It can be seen that long-term stable dependencies and short-term immediate interactions are complementary, helping the model achieve the best results. As Formula (<ref>) shows, we allocate a larger weight to LTFE, which is why removing the LTFE module in the ablation study has a greater impact on the overall prediction than removing the STFE module.
This design also embodies a simple curriculum learning idea, i.e., letting the model capture low-frequency information first and then adapt to high-frequency information, which is likely to be subject to more volatility. In particular, we can see that LTFE effectively mines the interdependencies in the long historical MTS and boosts the forecasting performance significantly. STFE complements LTFE by more flexibly capturing the temporal and structural characteristics that appear in short-term segments, further improving the model's forecasting capability.

§.§ Hyperparameter Study

Here we study the hyperparameters of FCDNet. Specifically, we focus on the number of decomposition levels L and the segmentation period P in LTFE, the transformed dimension F in STFE, and the initialized value of η. The experimental results on two datasets (PEMS03 and Ashare) are shown in Fig. 5. Note that when studying the effect of one hyperparameter, the others are kept at their default values. Generally, the proposed FCDNet is not sensitive to changes in the hyperparameters. As can be seen from the data, changing P and F does not influence the performance very much, and in most cases (except when using a large η such as 0.25), the change in MAE is minimal. This shows the robustness of our model. We vary the number of decomposition levels in the range {2, 3, 4, 5}. Finding the right balance for the parameter L is essential: if L is set too small, it becomes challenging to distinguish and capture the effective low-frequency signals essential for learning long-term stable correlations; conversely, if L is excessively large, valuable signals are lost, reducing the learned long-range relational semantic information. Therefore, selecting an appropriate value for L is a critical decision to ensure that the model effectively captures the relevant information without sacrificing signal quality. Furthermore, it is worth noting that the substantial variance in the hyperparameter settings (e.g., F) between the Ashare and PEMS03 datasets can be attributed to their distinct natures: the Ashare dataset belongs to the financial domain, while the PEMS03 dataset is associated with transportation. These fundamental differences in data domains require tailored hyperparameter configurations to accommodate the unique characteristics and challenges posed by each dataset.

§.§ Dependency Graph Analysis

In this section, we take a closer look at the effectiveness of our proposed dependency modeling strategy. Figure <ref> illustrates the comparison of the effects observed when replacing the matrix A_LF with the specific sensor distance matrix on the PEMS datasets. The figure clearly demonstrates that altering the learned A_LF has a profound influence on the model's performance. This observation implies that the matrix A_LF encapsulates a substantial amount of semantic information, surpassing the content available in the distance matrix. This semantic information plays a pivotal role in enhancing both the prediction accuracy and the stability of the model, underscoring the significance of capturing and preserving semantic details for achieving accurate and consistent predictions. Figure <ref> visualizes the dependencies among different nodes on the PEMS03 dataset, where the left half represents A_LF and the right half represents A_HF obtained after model training. It can be seen that the learned structure contains a variety of patterns.
The diagonal of the heatmap for A_LF reveals that the distance pattern is one of the most distinct and important features. This also means that the relatively invariant correlations between variables can be well discovered in the low-frequency information of long-term historical MTS. The heatmap for A_HF pays more attention to the dynamically evolving correlations between variables, reflecting the shift of dynamic correlations within a certain period. This observation is also similar to the evening-correlation phenomenon in traffic systems <cit.>. The analysis shows that the time-evolving correlations between variables can be effectively captured by the STFE module.

§ CONCLUSION

In this paper, we propose a novel model, FCDNet, that joins complementary dependency modeling and MTS forecasting. Unlike previous models that center on capturing static or dynamic relationships in multivariate time-series, our model focuses on capturing more complementary and essential long- and short-term static-dynamic dependencies between multivariate time-series from a frequency perspective. Specifically, FCDNet extracts frequency information from long- and short-term MTS to construct a stable static and a dynamically evolving dependency graph. In addition, we migrate graph filters with stronger expressive power into the fusion of structural information and time-series representations. Experiments on six MTS datasets show that FCDNet achieves state-of-the-art performance with fewer parameters than many strong baselines. The well-trained embeddings and learned dependencies can also be applied to other tasks.
In this article, we discuss how a kind of hybrid computation, which employs symbolic, numeric, classical, and quantum algorithms, allows us to conduct Hartree-Fock electronic structure computations of molecules. In the proposed algorithm, we replace the Hartree-Fock equations with a set of equations composed of multivariate polynomials. We transform those polynomials into the corresponding Gröbner bases, and then we investigate the corresponding quotient ring, wherein the orbital energies, the LCAO coefficients, or the atomic coordinates are represented by the variables in the ring. In this quotient ring, the variables generate the transformation matrices that represent the multiplication with the monomial bases, and the eigenvalues of those matrices compose the roots of the equations. The quantum phase estimation (QPE) algorithm enables us to record those roots in quantum states, which can be used as input data for more advanced and more accurate quantum computations.

§ INTRODUCTION

There is a symbolic-numeric method of quantum chemistry <cit.>, whereby the computations are carried out in the following way:

* The molecular integrals are represented by polynomial approximations of analytic formulas, which are computed symbolically if we use analytic atomic bases, such as Gaussian-type or Slater-type orbitals (GTO or STO) <cit.>. Those formulas are analytic functions of several variables, namely the orbital exponents and the atomic coordinates. By Taylor expansion with respect to those variables, the molecular integrals are approximated by polynomials.

* The total energy is a polynomial composed of the molecular integrals and the undetermined LCAO coefficients. The ortho-normalization conditions are treated similarly.

* We compose the objective function from the total energy and the ortho-normalization conditions with Lagrange multipliers, which represent the orbital energies.

* By symbolic differentiation, we obtain a system of polynomial equations that gives the optima.

* To get the roots of the system of polynomial equations, we apply several methods of computer algebra, where Gröbner bases and primary ideal decomposition play central roles in obtaining the quantum eigenstates <cit.>. Namely, we compose an ideal I from the given polynomials and transform it into another system that has a more suitable form for root-finding <cit.>. The ideal representing a Hartree-Fock equation can be decomposed into several subsystems described by primary ideals. Each primary ideal would represent one solution set, namely, one quantum state, if the decomposition is executed to the full.
* Up to now, we have reported the results of several simple molecules, using STO and n-GTO models (<cit.> and <cit.>). In those works, the adopted algorithms are classical, not quantum.

One might use the term Molecular Algebraic Geometry to refer to this algebraic computational scheme for molecular orbital theory. The algebraic method described above can be related to quantum algorithms, and the theme of the present study is to demonstrate this.

This article is structured as follows. First, we show the computational steps whereby classical symbolic computation prepares the eigenvalue problem that gives the roots of the given equation. Second, we show how a quantum algorithm could solve this problem. Then we discuss several points that should be treated with care.

§ COMPUTATIONAL PROCESS

In this section, we make concrete the algorithms for symbolic-numeric and classical-quantum computation in quantum chemistry. The computational process is composed of two phases. The first phase uses a symbolic-numeric classical algorithm and converts the Hartree-Fock equations into a representation suitable for quantum computation. The second phase uses the data generated in the first phase and computes the roots of the Hartree-Fock equations. We describe the algorithm in each phase, using two examples.

§.§ Phase 1: symbolic-numeric classical algorithm

§.§.§ Tools for symbolic, numeric, and classical computation

We solve the set of polynomial equations through the computational steps explained in <cit.>.

* Let I be an ideal generated by multivariate polynomials (f_1, f_2,...,f_t) in R[x_1,x_2,...,x_n]. Once the Gröbner basis for the ideal I is computed, it is an easy task to represent any element in R[x_1,x_2,...,x_n]/I uniquely as a linear combination of the monomial basis of the quotient ring.

* Let x̅_1,x̅_2,...,x̅_n be the representatives of x_1,x_2,...,x_n in R[x_1,x_2,...,x_n]/I. Additionally, let b be the vector composed of the representatives of the monomial basis of the quotient ring.

* For any i, the multiplication x̅_i · b is represented by x̅_i · b = b · M_x_i with a transformation matrix M_x_i. The entries of the matrix are numbers, not symbols.

* As M_x_i· M_x_j=M_x_j· M_x_i, those transformation matrices share common eigenvectors {v_j | j=1,...,M}, where M is the size of the monomial basis b.

* Let us consider the eigenvalue problems defined as follows: ξ̅_i^(j) v_j = v_j · M_x_i for i=1,...,n and j=1,...,M. These equations are solved numerically, and the eigenvalues give the common zeros of the polynomials included in the ideal I. Namely, the eigenvalues give the roots of the set of polynomial equations defined by f_1(x_1,x_2,...,x_n)=f_2(x_1,x_2,...,x_n)=...=f_t(x_1,x_2,...,x_n)=0, in such a way that (x_1,x_2,...,x_n)=(ξ̅_1^(j),ξ̅_2^(j),...,ξ̅_n^(j)) for j=1,...,M.

Note that if the eigenvectors {v_j}_j of one M_x_i are obtained, the other components of the roots are computed by

ξ̅_i^(j)= (v_j · M_x_i, v_j)/(v_j, v_j).

The root-finding of a system of polynomial equations is thereby replaced by a set of eigenvalue problems, which could be solved by quantum algorithms.
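As an illustration of these steps, the following sketch constructs the transformation matrices and recovers the roots classically for the toy model treated in the next subsection (with V(x,y)=0). It is our own minimal example using sympy and numpy; we use the lex order, for which the Gröbner basis and the monomial basis match those quoted in the text, and the column convention realizes x̅_i · b = b · M_x_i.

import numpy as np
import sympy as sp

x, y, e = sp.symbols("x y e")

# Ideal of the toy model below (V(x, y) = 0): secular equation plus normalization.
F = [-y - e*x, -x - e*y, x**2 + y**2 - 1]
G = sp.groebner(F, x, y, e, order="lex")

# Monomial basis of the quotient ring for this ideal: b = (y*e, y, e, 1).
basis = [y*e, y, e, sp.Integer(1)]

def mult_matrix(var):
    """Column j holds the coordinates of var * b[j] reduced modulo G."""
    cols = []
    for b_j in basis:
        _, rem = G.reduce(var * b_j)              # normal form in the quotient ring
        poly = sp.Poly(sp.expand(rem), x, y, e)
        cols.append([poly.coeff_monomial(m) for m in basis])
    return np.array(cols, dtype=float).T

m_x, m_y, m_e = (mult_matrix(v) for v in (x, y, e))
print(np.linalg.eigvals(m_e))   # +1, +1, -1, -1: the e-components of the roots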
We put the eigenvectors {v_j}_j into a set of quantum states {|v_j⟩}, and the computational steps are carried out by a quantum circuit, which conducts the following transformation:

|v_j⟩ |Ancilla_1⟩|Ancilla_2⟩⋯|Ancilla_n⟩→ |v_j⟩ |ξ̅_1^(j)⟩ |ξ̅_2^(j)⟩⋯ |ξ̅_n^(j)⟩

where the eigenvalues of M_x_i (i=1,...,n) for v_j are recorded in the ancilla qubits through successive applications of quantum phase estimation.

§.§.§ Computation for a simple toy model

Let us compute a simple toy model, where the secular equation is given by

[ V(x,y) -1; -1 V(x,y) ][ x; y ] = e [ x; y ]

along with the normalization condition x^2+y^2=1. The variables (x,y) are the amplitudes of the wavefunction, and e is the orbital energy. V(x,y) is the on-site potential, which is a function of the amplitude of the wavefunction. We assume that the roots are real. The polynomial ideal that represents the secular equation is given by

I=(x V(x,y)-y - e x, y V(x,y)-x - e y, x^2+y^2-1).

In the case of V(x,y)=0, the Gröbner basis is given by

I_std=(e^2-1, 2y^2-1, x+ye).

The roots of the set of polynomial equations are given by

(x,y,e)=(±1/√(2),±1/√(2),-1), (±1/√(2),∓1/√(2),1).

The entries of the quotient ring Q[x,y,e]/I are the linear combinations of the monomial basis b=(b[0],b[1],b[2],b[3]):

b[0]=ye, b[1]=y, b[2]=e, b[3]=1.

In the quotient ring, the multiplications of the entries of the basis b by x, y, and e are represented by the transformation matrices, via p̅· b = b · m_p for p=x,y,e:

m_x=
[    0    0    0   -1;
     0    0   -1    0;
     0 -0.5    0    0;
  -0.5    0    0    0 ]

m_y=
[   0   0   1   0;
    0   0   0   1;
  0.5   0   0   0;
    0 0.5   0   0 ]

m_e=
[ 0 1 0 0;
  1 0 0 0;
  0 0 0 1;
  0 0 1 0 ]

For the above example, the related properties are given as follows:

v | (v m_x, v)/(v,v) | (v m_y, v)/(v,v) | (v m_e, v)/(v,v)
(-1/√(2), 1/√(2), -1, 1) | 1/√(2) | 1/√(2) | -1
(1/√(2), -1/√(2), -1, 1) | -1/√(2) | -1/√(2) | -1
(1/√(2), 1/√(2), 1, 1) | -1/√(2) | 1/√(2) | 1
(-1/√(2), -1/√(2), 1, 1) | 1/√(2) | -1/√(2) | 1

The data in Table <ref> cover all the solutions of the secular equation.

§.§ Computation for the Hartree-Fock model

In this section, the restricted Hartree-Fock computation of a realistic molecule, HeH^+, is used as an example. This molecule is the simplest heteronuclear molecule and is used as a benchmark problem for solving the Hartree-Fock model <cit.>.

At first, we compute the total energy functional of the RHF model of the molecule with the STO-3G basis set <cit.>. The analytic formulas of the molecular integrals are computed and substituted into the formula of the energy, namely, the objective function. The total energy functional is a function of the LCAO coefficients (x,y), the orbital energy e, and the interatomic distance R, as defined in the following:

E_HF=∑_i⟨ i|h|i⟩+1/2∑_ij([ii|jj]-[ij|ji])

⟨ i|h|j⟩=∫ d x_1 χ_i^*( x_1) h( r_1) χ_j( x_1)

[ij|kl]=∫ d x_1 d x_2 χ_i^*( x_1)χ_j( x_1) 1/r_12 χ_k^*( x_2)χ_l( x_2)

χ_i( x)=(x ϕ_1s,He(r-R_He)+y ϕ_1s,H(r-R_H))σ_i

σ_i : spin function

ϕ^STO-3G_1s( r)=∑_i=1^3 c(i)exp(- z(i) r^2)

For the computation of HeH^+, we use two spin orbitals: i=α,β. The one-electron Hamiltonian and the total energy in the restricted Hartree-Fock model are given by

h( r)=-1/2∇^2-Z_He/| r - R_He| - Z_H/| r - R_H|

R=| R_He- R_H|, Z_He=2, Z_H=1

E_tot(x,y,e,R)=E_HF(x,y,R)-e∑_i (⟨χ_i|χ_i⟩-1) + Z_He Z_H/R

* The total energy functional is converted to a polynomial through the Taylor expansion with respect to the atomic distance R. The expansion is carried out at the center R_0=1.5.

* The numerical coefficients in the objective function are approximated by fractional numbers so that the objective function, multiplied by powers of ten, is given by a polynomial with integer coefficients.
To this end, we simply approximate each numerical coefficient C by rounding 10^n C to the nearest integer N_c and take N_c/10^n. We use the resulting polynomial with integer coefficients as the objective function, denoted by Ω.

* A set of polynomial equations is derived by partial differentiation with respect to (x,y) and e, so that the roots of those equations give the optima of the objective function. For the sake of simplicity, we do not carry out the optimization for R. Instead, we replace ∂Ω/∂ R with 100R-146 so that the interatomic distance is fixed at R=1.46.

* We apply computer algebra. We use the ring Q[x,y,e] (a ring over the field of rational numbers Q) with the degree reverse lexicographic monomial ordering such that x>y>e. The generators of the set of polynomial equations form an ideal I. We compute the Gröbner basis of I, by which the quotient ring Q[x,y,e]/I is defined. In this quotient ring, the monomial basis and the transformation matrices representing the operation of x, y, and e on b are computed.

* As the transformation matrices are numerical data, we then use classical or quantum methods to compute the eigenvalues.

The objective function f_obj is computed as

OBJ=281*R**5*x**3*y + 1119*R**5*x**2*y**2 + 164*R**5*x**2 + 533*R**5*x*y**3 - 901*R**5*x*y - 70*R**5*y**2 - 1756*R**5 - 2892*R**4*x**3*y - 9431*R**4*x**2*y**2 - 2273*R**4*x**2 - 5040*R**4*x*y**3 + 8552*R**4*x*y + 712*R**4*y**2 + 15802*R**4 + 11305*R**3*x**3*y + 29175*R**3*x**2*y**2 + 12477*R**3*x**2 + 18393*R**3*x*y**3 - 30849*R**3*x*y - 1877*R**3*y**2 - 59260*R**3 - 15964*R**2*x**3*y - 32038*R**2*x**2*y**2 - 35996*R**2*x**2 - 27890*R**2*x*y**3 + 37012*R**2*x*y - 3516*R**2*y**2 + 118518*R**2 - 12479*R*x**3*y - 18807*R*x**2*y**2 + 58692*R*x**2 + 1281*R*x*y**3 + 52833*R*x*y + 28135*R*y**2 - 133334*R - 2*e*(114*R**5*x*y - 1281*R**4*x*y + 5600*R**3*x*y - 10194*R**2*x*y + 115*R*x*y + 10000*x**2 + 18221*x*y + 10000*y**2 - 10000) + 13071*x**4 + 45874*x**3*y + 59634*x**2*y**2 - 91649*x**2 + 32206*x*y**3 - 146963*x*y + 7746*y**4 - 65195*y**2 + 79999

The ideal that gives the optima of the objective function is composed of the following components:

I=(∂ f_obj/∂ x, ∂ f_obj/∂ y, ∂ f_obj/∂ e, ∂ f_obj/∂ R)

To save computational cost, the atomic distance R is fixed, and I is modified as

I=(∂ f_obj/∂ x, ∂ f_obj/∂ y, ∂ f_obj/∂ e, 100R-146)

The quotient ring Q[x,y,e]/I has the monomial basis b=(y^2, xe, ye, e^2, x, y, e, 1), and the transformation matrices (m_x, m_y, and m_e) for the three variables (x,y,e) are obtained.

Let us inspect the computed result. As a reference, the result of the Hartree-Fock computation by the standard self-consistent method is shown in Table <ref>. The solutions obtained from the symbolic-numeric method are shown in Table <ref>. We use the normalized right eigenvectors ϕ and compute the expectation values of m_x^T and m_e^T. The solutions in the third and fourth rows correspond to the ground state in the reference data. Those results are quantitatively satisfactory in describing the electronic structure of the molecule, although there is a slight deviation from the reference data.
The cause of the deviation is that we approximated the objective function by a polynomial with integer coefficients after the Taylor expansion; as a result, this rough approximation dropped the subtle features of the numerical data used in the standard self-consistent method.

§.§ Phase 2: quantum computation

§.§.§ Tools for quantum computation

Now we have restated the given question as an eigenvalue problem, and we anticipate the application of quantum phase estimation to get the eigenvalues. The remaining issue is that the QPE cannot be applied directly, since the transformation matrices m_p are not Hermitian and the time-evolution operator exp(-i T m_p) is not unitary. To settle this issue, we use block-encoding, by which any complex matrix can be embedded in a block of a certain unitary matrix. Several algorithms enable us to conduct the block-encoding and design the quantum circuits <cit.>. The block encoding of an n-qubit operator A is formally defined as follows:

Ã=(⟨ 0|^⊗ a⊗ I_n) U (| 0⟩^⊗ a⊗ I_n)

In the above, Ã=α A, for which the factor α is chosen in such a way that |Ã_ij|≤ 1 for all i and j. U is a unitary matrix operating on a+n qubits, and its action on the qubits is given by

U ( |0⟩^⊗ a⊗ |ϕ⟩)=|0⟩^⊗ a⊗Ã |ϕ⟩+√(1-‖Ã |ϕ⟩‖^2) |σ^⊥⟩

with (⟨ 0|^⊗ a⊗ I_n)|σ^⊥⟩=0 and ‖ |σ^⊥⟩‖=1.

A repetition of partial measurements of the ancilla qubits yields |0⟩^⊗ a with probability ‖Ã|ϕ⟩‖^2, and the circuit gives rise to Ã|ϕ⟩/‖Ã|ϕ⟩‖. For simplicity, let us assume that α=1 and |A_ij|≤ 1 for all i and j. In this case, the matrix query operation O_A is defined by

O_A |0⟩|i⟩ |j⟩=( a_ij|0⟩ +√(1-|a_ij|^2)|1⟩)|i⟩ |j⟩

where |i⟩ and |j⟩ are n-qubit computational basis states. The unitary representation of O_A is given by

O_A= [ C -S; S C ], C=diag(c_00, c_01, ⋯, c_N-1,N-1), S=diag(s_00, s_01, ⋯, s_N-1,N-1)

where c_ij=cos(θ_ij), s_ij=sin(θ_ij), and θ_ij=arccos(a_ij). Keep in mind that the indices of c_ij and s_ij are given by n-qubit computational basis states. The quantum circuit that embodies the block encoding is defined by

U_A=(I_1⊗ H^⊗ n⊗ I_n) (I_1⊗SWAP) O_A (I_1⊗ H^⊗ n⊗ I_n)

where I_1 and I_n denote identity operations, SWAP is the swap gate, and H is the Hadamard gate. After some algebra, one obtains

⟨ 0| ⟨ 0|^⊗ n ⟨ i| U_A |0⟩ |0⟩^⊗ n |j⟩ = a_ij/2^n.

This relation means that, if the n+1 ancilla qubits are measured in the zero state, the signal register, which is initialized to |ϕ⟩, returns A|ϕ⟩/‖A|ϕ⟩‖. If |A_ij| > 1 for some i and j, we must replace A with α A, using a scale factor α such that |α|< 1. This increases the complexity of the quantum circuit for the QPE. If we use the block encoding of U^2^k for a unitary U with α < 1, the controlled U^2^k during the QPE yields

1/√(2)( |0⟩ +α e^i2^kλ |1⟩) ⊗ |ψ⟩.

To record the eigenvalue λ in the bit string, however, the state of the left qubit should be given by

1/√(2)( |0⟩ + e^i2^kλ |1⟩).

To get the latter state, we prepare U_αI, and we apply (X⊗ I) U_αI (X⊗ I) through the controlled gate operation. Then we get

α/√(2)( |0⟩ + e^i2^kλ |1⟩).

In (<ref>), α can be neglected on account of the normalization of the output state. The problem with the above construction is that a naive design of the quantum circuit for the operation O_A requires too many R_y gates, which causes worse complexity than the classical case.
To avoid this, the FABLE algorithm uses Gray codes to organize the operations on the ancilla qubits, so that it achieves improved scaling with respect to the number of R_y gates <cit.>.

§.§.§ The quantum steps for the simple toy model and the Hartree-Fock computation

In this section, the accuracy of the block encodings for the simple examples (the simple toy model and the Hartree-Fock computation for HeH^+) is investigated. Both models are given in Q[x,y,e] (a ring with three variables), and they are studied together. Using the FABLE algorithm, we construct the block encoding of the operator A. In Tables <ref> and <ref>, the expectation values of the block-encoded operators (ϕ|A|ϕ) and (ϕ|exp(-iA)|ϕ) for A=m_x^T, m_y^T, and m_e^T are shown, respectively, for the two examples. They are computed by numerical linear algebra with a suitable choice of ϕ. Furthermore, those values are compared to (ϕ|O_exp|ϕ), where O_exp=O_exp(-i M) is obtained by the FABLE algorithm. The block encodings by the FABLE algorithm are quantitatively accurate in representing the corresponding evolution of the non-unitary A.

In the computations presented here, we used eigenvectors that were analytically derived or computed by a classical eigenvalue solver. In the Hartree-Fock case, we cast off the eigenvectors with complex eigenvalues, since those useless vectors are easily detected by classical computations. However, in quantum computations, it is not so easy to examine the state vectors in the quantum circuit. In the next section, we discuss how to carry out the state preparation properly.

§ DISCUSSION

In this section, we discuss several points that should be treated with care.

§.§ The difficulty of the quantum algorithm concerning complex-valued solutions

The existence of complex roots of the given system of polynomial equations is an obstacle to full-fledged quantum computation in the current problem setting. The standard quantum phase estimation is applied only to Hermitian operators, which have real eigenvalues. Let λ be an eigenvalue, represented in the following way:

λ/2π = j_1/2^1 + j_2/2^2 +⋯+ j_n/2^n

In the intermediate stage of the computation by the QPE, the quantum state vectors are generated and transformed as follows:

| 0 ⟩ | ϕ⟩→1/√(2)( |0⟩ + |1⟩) | ϕ⟩→1/√(2)(| 0 ⟩ + e^i (2π) j_n | 1 ⟩) | ϕ⟩→ |j_n⟩| ϕ⟩

However, if the eigenvalue is given by λ+√(-1)ν, the quantum circuit yields

1/√(2)( |0⟩ + exp(-2^k· 2πν) e^i(2π)j_n |1⟩)| ϕ⟩,

from which the integer j_n cannot be extracted at |j_n⟩|ϕ⟩ by the Hadamard transformation. Several approaches tackle this problem <cit.>.

* The algorithm in <cit.> generates the state of the form of (<ref>), estimates the factor |exp(-2^k(2π)ν)| by projective measurements on the index qubit in the basis |1⟩⟨ 1|, rotates the quantum state to cancel that factor, and obtains the wanted form of (<ref>).

* The algorithm in <cit.> similarly estimates |exp(-2^k(2π)ν)| by measurements and then obtains the phase part of the eigenvalue.

* The algorithm in <cit.> prepares the initial state vector in such a way that

|ψ_init⟩ = ∑_j β_j |E_j⟩ |E̅_j⟩

where |E_j⟩ and |E̅_j⟩ are the eigenvectors of a matrix M, with the conjugate eigenvalues λ_j + iμ_j and λ_j - iμ_j, respectively.
The time evolution using M⊗ I + I ⊗ M yields

e^2π iλ_j Δ T |E_j⟩ |E̅_j⟩.

In addition, the time evolution using i(M⊗ I - I ⊗ M) yields

e^2π i μ_j Δ T |E_j⟩ |E̅_j⟩.

Thus, the real and imaginary parts of the eigenvalues are recorded separately in two ancillae:

|λ_j⟩ |μ_j⟩ |E_j⟩ |E̅_j⟩.

Any of those approaches increases the complexity of the quantum circuits. The former two approaches require additional measurements to determine the complex amplitude. Indeed, before applying those methods, we should prepare a particular eigenvector that has a complex eigenvalue λ + iμ; if not, the measurements do not report |exp(-2^kμ)| correctly. The third approach needs a special preparation of the initial state in which conjugate states are paired.

The occurrence of complex eigenvalues is related to the question of how to prepare good initial states for the QPE. If we could use classical algorithms, it would be easy to get rid of complex eigenvalues: we prepare a randomized initial state vector and project out the components that give rise to complex eigenvalues. On the other hand, it is laborious to detect complex solutions only by quantum algorithms. Regarding this issue, there are several ways of filtering out eigenvalues before applying the quantum phase estimation for Hermitian operators <cit.>. It is a pity that, to the best of our knowledge, those methods have not been applied to the removal of complex eigenvalues, since the existing filtering methods make use of convenient properties of Hermitian matrices, which always have real eigenvalues. Moreover, those methods are composed to prepare the ground state, namely, the lowest eigenvalue, whereas the request of the present study is to obtain all real eigenvalues. However, the following measures would do that task.

* The inverse power iteration yields the eigenvectors that have the eigenvalues closest to a given λ. In the present work, we choose real λ, and we use the parallel character of quantum computation.

* To this end, we apply the method with (A-λ I)^-1, solving

[ 0 (A-λ I); (A^T-λ I) 0 ][ 0; x_k ]=[ x_k-1; 0 ]

by some quantum linear system solver. We start from |x_0⟩=|β_init⟩, repeat the computation, and after a suitable number of iterations obtain the desired quantum states |x_k⟩.

The initial state preparation goes as follows:

* Prepare the initial state ∑_s|β⟩|e_s⟩. This state is implicitly given by ∑_s∑_j C_j|v_j⟩|e_s⟩, where {v_j}_j are the eigenvectors of A, and e_s is the index of the sampling points for λ in the inverse power method.

* Apply the inverse power method: ∑_s∑_j C_j(A-e_s)^-N|v_j⟩|e_s⟩ with a sufficiently large N. Then we get ∑_s∑_l D_l |v_e_s(l)⟩|e_s⟩, where {v_e_s(l)}_l are the eigenvectors that have the eigenvalue closest to e_s.

* Similarly, doubly applying the inverse power method to |β⟩|β'⟩ by (A-(λ+iμ) I)^-1⊗ (A-(λ-iμ) I)^-1, we get the state ∑_j C_j|E_j⟩|Ê_j⟩.

By the measures prescribed above, in general cases, we record the real eigenvalues in the state vector as follows:

|λ̂(e_s)⟩|e_s⟩

where λ̂(e_s) is the bit-string representation of the eigenvalue closest to e_s. Note that one label |e_s⟩ shall catch exactly one real eigenvalue. In exceptional cases, however, we would prepare an initial state composed of two conjugate eigenvectors that have the eigenvalues λ± iμ. The initial state vector is then given by

|ψ⟩=(p |λ+iμ⟩+q|λ-iμ⟩)|e_s⟩.

For such state vectors, the QPE cannot obtain the eigenvalues as in (<ref>). Instead, the result of the QPE is given by

∑_s∑_k C_k|k⟩|e_s⟩

where |k⟩=|k_0k_1⋯ k_N⟩.
In this case, the label |e_s⟩ is connected to a noisy superposition of the states {|k⟩}_k. If such a result is measured for the label |e_s⟩, it means that the corresponding eigenvalues are complex. We should discard them, since complex eigenvalues are meaningless in our problem setting. In the inverse power iteration, we could use λ - √(-1)δ (with a shift by a small positive δ) instead of a genuine real λ, so that we shake off the eigenstate that has the eigenvalue with a positive imaginary part, which would cause an unbounded growth of the amplitude in the time evolution.

There is another rare exceptional case, in which two different state vectors are involved. This happens when the sampling point e_s is located exactly in the middle of two adjacent real eigenvalues λ_1 and λ_2. The measurement then also gives a noisy superposition of {|k⟩}_k. However, such a circumstance almost surely does not take place, and a small shift of the sampling point e_s → e_s+δ avoids it.

If the quantum algorithms for the state preparation do not work well, we are obliged to perform one diagonalization of one of the transformation matrices by classical algorithms, so that we get rid of the right eigenvectors with complex eigenvalues. The initial state vector can then be chosen as an arbitrary linear combination of the right eigenvectors with real eigenvalues. This is an expediency, but it has its merit, as we shall discuss later in Section <ref>.

Note that if we calculate the eigenenergy of QUBO models by the present method, the set of polynomial equations is given by

C_1∑_i x_i + C_2 ∑_i_1,i_2 x_i_1x_i_2+⋯+C_n∑_i_1,...,i_n x_i_1⋯ x_i_n-e=0

and x_i^2-x_i=0 for i=1,...,n. This kind of equation is without complex solutions, thanks to the restriction of the ranges of {x_i}_i, which is explicitly written by the polynomials. It follows that, if we construct the set of polynomial equations of the Hartree-Fock model as a QUBO one, there is no problem concerning the complex eigenvalues, although this construction increases the number of qubits and the cost of the symbolic computations.

§.§ The choice of basis vectors in the eigenvalue problems

Note that there is an ambiguity in the choice of the basis vectors. In the toy model case, the matrix m_e has two eigenvectors for the eigenvalue -1, which are given by

v_1 =(-1/√(2), 1/√(2), -1, 1)
v_2 =(1/√(2), -1/√(2), -1, 1)

We could choose the basis vectors differently, such as

w_1=1/2(v_1+v_2) =(0, 0, -1, 1)
w_2=1/√(2)(v_1-v_2) =(-1, 1, 0, 0)

However, w_1 and w_2 are not suitable choices in the present problem, for they are not represented by the monomial basis vector b=(ye,y,e,1). Indeed, they are not eigenvectors of m_x or m_y:

w_1 m_x = (-1/2, 1/2, 0, 0)
w_2 m_x = (0, 0, -1, 1)
w_1 m_y = (-1/2, 1/2, 0, 0)
w_2 m_y = (0, 0, -1, 1)

Therefore, if there is a degeneracy of the eigenvalues, we should construct the basis vectors of the corresponding subspace in such a way that all the basis vectors are potentially given by the monomial basis vector b.

§.§ The merit of the quantum algorithm

To see the superiority of the quantum algorithm over the classical one, let us consider the following circumstances. Let {m_i}_i be the list of transformation matrices, and assume that m_1^T has two eigenvectors (v_1 and v_2) with a common eigenvalue E_v:

m_1^T v_1 = E_v v_1, m_1^T v_2 = E_v v_2.
These two vectors are not necessarily the eigenvectors of the other m_i if they are not suitably prepared, as pointed out in Section <ref>. In the classical algorithm, we prepare the eigenvectors of another m_i^T, say m_2^T, by the generalized eigenvalue problem:

[ [ (v_1|m_2^T|v_1) (v_1|m_2^T|v_2); (v_2|m_2^T|v_1) (v_2|m_2^T|v_2) ] - E_j [ (v_1|v_1) (v_1|v_2); (v_2|v_1) (v_2|v_2) ] ] [ c_1^j; c_2^j ] = [ 0; 0 ]

From this equation, we get two eigenvectors w_j=c_1^j v_1 + c_2^j v_2 for j=1,2 and two corresponding eigenvalues E_w_1 and E_w_2.

On the other hand, this task of solving the eigenvalue problem can be skipped in quantum algorithms. To see this, let us use the initial state vector (combined with ancilla qubits) defined by

|ψ⟩ =(p |v_1⟩ + q |v_2⟩) |Ancilla_1⟩|Ancilla_2⟩⋯|Ancilla_N⟩ =(s |w_1⟩ + t |w_2⟩) |Ancilla_1⟩|Ancilla_2⟩⋯|Ancilla_N⟩.

In the above, p and q would be chosen randomly; consequently, s and t are determined. Then let us apply the QPE with m_1^T and record the phase in |Ancilla_1⟩. We get

|ψ⟩=(s |w_1⟩ + t |w_2⟩) |E_v⟩|Ancilla_2⟩⋯|Ancilla_N⟩.

Next, we apply the QPE with m_2^T and record the corresponding phases in |Ancilla_2⟩, obtaining

|ψ⟩=s |w_1⟩|E_v⟩|E_w_1⟩⋯|Ancilla_N⟩ + t|w_2⟩|E_v⟩|E_w_2⟩⋯|Ancilla_N⟩.

If E_w_1 ≠ E_w_2, the two data (E_v, E_w_1) and (E_v, E_w_2) in |ψ⟩ are distinguished, and they are parts of two distinct roots of the given set of polynomial equations. If E_w_1 = E_w_2, we successively apply the QPE using m_3^T,...,m_N^T. Then we finally get the distinct roots of the form (E_v, E_w, E_w^',...,E_w^('⋯ ')). Each of the roots is recorded on one of the orthonormalized bases in the output state and measured distinctly from the others, since the orthogonality is guaranteed by the bit-string representation of the ancillae.

§.§ On the enormous complexity concerning symbolic computation

Another obstacle is the enormous complexity of the computation of Gröbner bases, which scales with the number of variables (n) and the maximal degree of the input polynomials (d). If the primitive algorithm (as initially proposed) is applied, the complexity is doubly exponential in n in the worst case. However, detailed inspections have revealed the following facts <cit.>.

* Let (f_1, ..., f_m) be a system of homogeneous polynomials in k[x_1, ..., x_n], where k is an arbitrary field. (A homogeneous polynomial is composed of nonzero monomials that all have the same degree.)

* The number of operations required to compute a Gröbner basis of the ideal I=(f_1, ..., f_m) for a graded monomial ordering up to degree D scales as

O(m D [ n+D-1; D ]^ω) as D→∞

where ω is the exponent of matrix multiplication over k; namely, ω is the smallest constant such that two N × N matrices can be multiplied by performing O(N^ω+ϵ) arithmetic operations for every ϵ > 0.

In this estimation, the bound D for a full Gröbner basis is not yet given. However, it can be estimated under a certain assumption, and the conclusion is that the complexity is simply exponential in n, thanks to the assumption that the polynomials are homogeneous <cit.>. As any system of polynomials can be transformed into this form by adding a variable and homogenizing, the doubly exponential complexity can be avoided. Note that the estimation of the complexity is carried out for the worst cases; meanwhile, actual computations often finish at much lower computational cost. Moreover, algorithmic improvements have been successful in facilitating the computation.
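For intuition about the growth of this bound, it is easy to evaluate numerically. The following small sketch is our own; the choice ω ≈ 2.376 (the Coppersmith-Winograd exponent) is an illustrative assumption.

from math import comb

def groebner_ops_bound(m, n, D, omega=2.376):
    """Operation-count bound m * D * C(n+D-1, D)^omega from the estimate above."""
    return m * D * comb(n + D - 1, D) ** omega

for n in (3, 6, 12):
    print(n, f"{groebner_ops_bound(m=3, n=n, D=4):.2e}")  # grows rapidly with n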
Currently, the F5 algorithm is regarded as the most effective one <cit.>. The complexity of this algorithm was studied in <cit.>. The formula for the complexity is given in a refined style that reflects the special features of the algorithm, although it is still exponential in n. The complexity of computing a Gröbner basis, however, could be mitigated by quantum algorithms. The computational steps of the Gröbner basis construction are as follows <cit.>:

* Input: F=(f_1, ..., f_m); Output: the Gröbner basis G for F
* G:=F
* For every pair of polynomials (f_i,f_j) in F, compute the s-polynomial, which is defined by S(f_i,f_j)=a_{ij}/g_i f_i - a_{ij}/g_j f_j. In the above, g_i (resp. g_j) is the leading term of f_i (resp. f_j) in the given monomial ordering, and a_{ij} is the least common multiple of g_i and g_j.
* Reduce S(f_i,f_j). By the division algorithm, it is represented as S(f_i,f_j)=∑_l c_l f_l +r, and the residual r is the result. If r ≠ 0, add r to G.
* Repeat the computation of s-polynomials and the reduction until the extension of G terminates.
* Return G.

This algorithm is essentially a Gaussian elimination that carries out the reduction of rows in a matrix that holds the coefficients of the system of polynomials <cit.>. The difficulties in conducting that task are as follows.

* In the reduction of S(f_i,f_j), it happens that many pairs of polynomials reduce to zero, being completely useless to the construction of the Gröbner basis. It is necessary to detect the unnecessary pairs beforehand, and attempts to improve the efficiency of the algorithm in this regard have been intensively pursued.
* The size of the Gaussian elimination varies, indeed increases, during the computation. The computation of s-polynomials increases both the number and the maximum degree of the polynomials in G. The expanding matrix requires an ever-increasing amount of memory.

To reduce the computational costs enumerated above, quantum algorithms would be promising choices. First, the qubits could encompass a vast set of quantum states that describe a data set of enormous size. They could accommodate the incessant increase of polynomial data during the computation of the Gröbner basis. Second, quantum algorithms are efficient at linear computation and at searching unstructured data. The HHL algorithm would facilitate the Gaussian elimination. Moreover, the Grover database search algorithm would be useful in detecting the terms in the polynomials that should be eliminated in the reduction.

§ CONCLUSION

The main aim of this paper is to illustrate a computational scheme of quantum computation that enables us to carry out the Hartree-Fock computation, in the sense that the computation realizes the optimization of molecular orbitals composed of atomic bases. The proposed computational scheme uses algebraic techniques to recast the Hartree-Fock equations into a set of eigenvalue problems, wherein the eigenvalues give the LCAO coefficients, the orbital energy, and, if necessary, the optimized atomic coordinates. The eigenvalue problems could be solved by quantum phase estimation through the block-encoding technique for non-Hermitian or non-unitary operators. The computed results are recorded in quantum states, which shall be used for more complicated quantum computations with the aid of quantum RAM. There are several unsettled points in the present work.
The first is the occurrence of complex-valued eigenvalues in the eigenvalue problems, caused by the potential occurrence of complex-valued solutions of the Hartree-Fock equation when it is treated as a system of polynomial equations. For the sound application of the QPE, this sort of eigenstate should be removed by some means; if that is algorithmically difficult, the quantum device should remove them. The second is the possibly enormous complexity of the symbolic computation. However, the required symbolic computations are Gaussian eliminations, which would be facilitated by the quantum algorithms proposed so far.
http://arxiv.org/abs/2401.00019v1
{ "authors": [ "Ichio Kikuchi", "Akihito Kikuchi" ], "categories": [ "quant-ph", "physics.comp-ph" ], "primary_category": "quant-ph", "published": "20231227121529", "title": "Symbolic, numeric and quantum computation of Hartree-Fock equation" }
§ INTRODUCTION

Conventional fluid dynamics is a classical effective field theory for large systems near thermodynamic equilibrium. It always comes with a cut-off (length) scale (determined by the underlying microscopic theory) above which the fluid-like descriptions are applicable. The variations (or derivatives w.r.t. space and time) of the fluid variables, measured in the units of the cut-off scale, must be small and, therefore, can be treated perturbatively. In principle, any realistic fluid description must have an infinite number of derivatives of the fluid variables in the equations of motion. However, such equations with infinitely many derivatives are of little practical use since it is impossible to solve them even numerically for any generic initial and boundary conditions. We need to truncate such infinite series, and most of the time, it turns out that arbitrary truncations lead to pathologies. For example, it is well known that if we truncate the relativistic fluid equations at the first subleading order in the derivative expansion (the relativistic Navier-Stokes (N-S) equation, capturing the leading effects of dissipation <cit.>), the equations admit solutions with superluminal signal propagation <cit.>. In fact, recent analysis <cit.> indicates that including non-hydrodynamic modes is probably essential to ensure the causality of the fluid equations. Here, by non-hydrodynamic modes, we refer to modes that have nonzero frequencies even in the absence of any spatial momenta. Such modes are named non-hydrodynamic for the following reason. Hydrodynamics could also be viewed as the collective dynamics of the massless modes of a system, slightly away from global thermal equilibrium. The global equilibrium is characterized by its conserved charges that could take any constant values. Dynamics (variation in time) will be generated only when these charges are no longer constant in space. From this perspective, each fluid variable is associated with some conserved charge[In this note for simplicity, we shall analyze only neutral fluids, where the only conserved quantities are energy and momentum corresponding to the time and space translational symmetry of the microscopic theory. This leads to one scalar fluid variable, which we could choose to be temperature or energy density or pressure, and one vector variable - the velocity of the fluid.] and fluid or hydro modes are the ones whose frequency or variation in time vanishes as soon as the variation in space or the spatial momentum is set to zero. Now, if causality necessarily requires the inclusion of non-hydrodynamic modes, one might naively think that to have a causal set of equations, one has to introduce non-hydrodynamic variables (variables that are not related to any conserved quantity). The well-known solution with this approach is the construction of Muller-Israel-Stewart (MIS) theory <cit.>, where the shear tensor is introduced as a new variable, not associated with any thermodynamic conserved charge, with an extra equation of motion for it. The combined set of equations for the shear tensor (π^μν), energy density (ε), and the velocity u^μ could have a finite number of derivatives and still predict causal propagation. On the other hand, if we choose to integrate out π^μν by solving it first in terms of the fluid variables (using the perturbative technique of derivative expansion), the resultant fluid equations turn out to have an infinite number of derivatives (see <ref>).
Recently BDNK <cit.> have introduced another interesting way of constructing causal fluid theories which have a finite number of derivatives but no extra `non-fluid' type variables. The BDNK theory has achieved this by exploiting the freedom of field redefinition in effective field theories. Velocity and temperature (or energy) are well defined in global equilibrium, but that is not the case once the system is away from it (see <cit.> for a detailed discussion on this point). The derivative corrections to the equations of motion do not make sense unless we can precisely define what we mean by the fluid variables at each order in derivatives. So, the standard strategy is to first define the fluid variables in terms of some microscopic quantity (field theory operators) and then explore the structure of the equations and their consequences. In MIS theory, the velocity and energy density are defined via the `Landau frame' condition on the stress tensor - the microscopic field theory operator. Here, the fluid velocity is the unique timelike unit-normalized eigenvector of the stress tensor, and the energy density is the corresponding eigenvalue. It has a nice physical interpretation: the velocity of the fluid is actually the velocity of the energy flow. The BDNK approach deviates from this strategy of defining the fluid variables first. They explored the question of whether there exists any definition of fluid variables such that equations with a finite number of derivatives and also with no extra non-fluid variables are causal. They then found a class of theories satisfying such conditions. So, in the BDNK formalism, we do have a tractable set of differential equations involving only the fluid variables like velocity, but we do not exactly know what that velocity means in terms of any measurable microscopic operator (like the stress tensor) once the system is away from the global equilibrium/ideal fluid limit.

§.§ Results

In this note, our goal is to rewrite the BDNK stress tensor in the Landau frame by redefining the velocity and the energy density/temperature. In some sense, the key result in this note is the relation between these variables in the BDNK formalism (denoted by u^μ and T, respectively) and the velocity and the temperature field defined through the Landau gauge condition (denoted as û^μ and T̂). We have explicitly worked out the relation for those fluid profiles that are small fluctuations around some global equilibrium. We have assumed that the amplitudes of the fluctuations are small enough so that a linearized treatment is justified. Further, for simplicity, we have restricted our analysis only to conformal, uncharged fluids in the BDNK formalism. To state our results in terms of equations, let us first introduce the notation u^μ - û^μ = δu^μ and T - T̂ = δT. We have found that the shift variables δu^μ and δT must satisfy the following differential equations up to terms that are quadratic or higher order in δT and δu^μ,

(1 +θ̃ D̂) δu^μ+θ̃(∇̂^μδT/T̂)=-θ̃[D̂û^μ+∇̂^μT̂/T̂] ,
(1 + χ̃ D̂)δT/T̂+χ̃/3(∇̂_μδu^μ)=-χ̃[D̂T̂/T̂ + 1/3∇̂_μû^μ] .

Here the differential operators are defined as D̂≡û·∂ and ∇̂^μ≡Δ̂^μν∂_ν, with Δ̂^μν=(η^μν + û^μû^ν) and the mostly positive metric signature η^μν = diag(-1,1,1,1) in flat space-time.
θ̃ and χ̃ are the parameters, or scaled (by a factor of (ε+P)) transport coefficients, that appear in the BDNK conformal stress tensor <cit.> as quoted below,

T^μν_BDNK=(ε +Δε)[u^μu^ν+1/3Δ^μν]+(u^μW^ν+u^νW^μ)+π^μν ,

with the first-order dissipative (scaled) field corrections as,

Δε/(ε+P) =3χ̃DT/T+χ̃(∂· u) ,    W^μ/(ε+P)=θ̃[∇^μT/T+Du^μ] ,   π^μν/(ε+P)=-2η̃σ^μν ,

with σ^μν=Δ^μν_αβ∂^αu^β denoting the traceless, symmetric velocity gradient, where Δ^μναβ=1/2Δ^μαΔ^νβ+1/2Δ^μβΔ^να -1/3Δ^μνΔ^αβ. Next, we develop a formal solution for the equations (<ref>) and (<ref>) using two different methods. In both cases, it is manifest that the solutions will have terms up to all orders in derivative expansion. Finally, we introduce a set of new tensorial `non-fluid' variables (like the shear tensor in MIS theory) in order to recast the BDNK theory in an MIS-type formalism where the fluid variables like velocity and the temperature are defined through the Landau gauge condition. In the first method, the equivalent system of equations will have an infinite number of `non-fluid' variables with the following nested structure:

∂_μ T^μν=0 ,
T^μν=ε̂(û^μû^ν+1/3Δ̂^μν)+π^μν ,
(1+θ̃ D̂) π^μν=-2ησ̂^μν+ρ_1^⟨μν⟩ ,
(1+χ̃ D̂) ρ^⟨μν⟩_1=(-2η)(-θ̃)1/T̂∇̂^⟨μ∇̂^ν⟩T̂+ρ_2^⟨μν⟩ ,
(1+θ̃ D̂) ρ^⟨μν⟩_2=(-2η)(-θ̃)(-1/3χ̃)∇̂^⟨μ∇̂^ν⟩∇̂^ρû_ρ+ρ_3^⟨μν⟩ ,
(1+χ̃ D̂) ρ^⟨μν⟩_3=(-2η)(-θ̃)^2(-1/3χ̃)1/T̂∇̂^⟨μ∇̂^ν⟩∇̂^ρ∇̂_ρT̂+ρ_4^⟨μν⟩ ,
⋮

In the second method, we need to introduce only one `shear tensor' type non-fluid variable, but its equation of motion turns out to be second order in spatial and third order in temporal derivatives,

∂_μ T^μν=0 ,      T^μν=ε̂(û^μû^ν+1/3Δ̂^μν)+π^μν ,
[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2] {(1+θ̃D̂)π^μν+2ησ̂^μν}= 2ηθ̃{1+(θ̃+χ̃)D̂}(∇̂^⟨μ∇̂^ν⟩T̂/T̂) .

We have analyzed the spectrum of linearized fluctuations in both systems and found that all the hydrodynamic modes match those of the BDNK theory. This indicates that in the regime where fluid descriptions are applicable, all three systems of equations presented here are equivalent. However, (equations (<ref>) to (<ref>)) and (equations (<ref>) to (<ref>)) also have some extra non-hydrodynamic modes which are not there in BDNK (equations (<ref>) to (<ref>)). The emergence of these new modes is possibly connected with the zero modes in the equations of the field redefinition (equations (<ref>) and (<ref>)) themselves. Our equations are by no means more tractable than those of BDNK. But here, the fluid variables have a clear and standard meaning, and since the velocity and temperature in BDNK theory could be precisely transformed to these variables (though we have derived it only at a linearized level), it attaches a similar definition to the BDNK fluid variables as well. Our analysis suggests that even in BDNK theory, there will be hidden non-fluid variables (or an infinite number of derivatives) if one would like to express the theory in terms of fluid variables which are locally defined through the stress-energy tensor, as we have in the `Landau frame'[In this context, we should mention the analysis in <cit.>. Here also, the authors connect the MIS- and the BDNK-type formalisms with field redefinition. However, the authors there tried to explain this field redefinition ambiguity more from a microscopic point of view, whereas in our analysis, we are completely agnostic about the microscopic descriptions or statistical interpretation of these field redefinitions.
As a result, we could find more than one way (in fact, in principle, there should be an infinite number of ways) of `integrating in' the non-fluid variables for the same BDNK theory recast in the Landau frame.]. This note is organized as follows. In section <ref>, we describe the MIS theory in its simplest form, and then we show how integrating out the extra `non-fluid' variable results in a stress tensor with an infinite number of derivatives. This section will act as a warm-up for the techniques of infinite sums to be used in the next section. Also, it indicates how a causal theory in the Landau frame, if expressed only in terms of fluid variables, turns out to have an infinite number of derivatives. In the next section <ref>, we describe the BDNK theory and redefine the velocity and temperature variables (only at the linearized level) to bring them to the Landau frame. The redefinition involves generating an infinite number of derivatives. We can sum these infinite series in two different ways, as described in two different subsections of section <ref>. These two different ways of summation lead to two different methods of `integrating in' new `non-fluid' variables, showing the non-uniqueness of this process. In section <ref>, the dispersion relations and the corresponding spectra of these different systems of equations are analyzed to check that our systems of equations are indeed equivalent to the BDNK formalism, at least in the hydrodynamic regime. Finally, in section <ref>, we conclude.

§ MIS THEORY

The pathologies regarding superluminal signal propagation and thermodynamic stability of the long-established relativistic first-order theories <cit.> were first addressed by the higher-order MIS theory <cit.>, where the dissipative field corrections are promoted to new degrees of freedom <cit.>. Keeping up to the linear terms, the MIS equations of motion are given by <cit.>,

∂_μT^μν=0 ,  T^μν=ε(u^μu^ν+1/3Δ^μν)+π^μν ,
π^μν+τ_πDπ^μν=-2ησ^μν .

Here, we attempt to derive the combined results of Eq.(<ref>) and (<ref>) without treating π^μν as an independent degree of freedom. Instead of attributing an individual differential equation to π^μν like Eq.(<ref>), we express it up to all orders in Eq.(<ref>) itself, such that,

π^μν=∑_n=1^∞π_n^μν ,    with   π^μν_1=-2ησ^μν ,     and   π_n^μν=-τ_πDπ^μν_n-1 ,  n ≥ 2 .

This leads to the following shear-stress tensor,

π^μν=-2η{∑_n=0^∞(-τ_πD)^n}σ^μν=-2η(1+τ_πD)^-1σ^μν .

So, we conclude that if we want to write the MIS theory without introducing any additional degrees of freedom, this leads to a stress tensor defined up to all orders of gradient correction. However, it is to be noted that Eq.(<ref>) is local in both time and space, whereas Eq.(<ref>) becomes non-local in time since the frequency of the corresponding Fourier mode appears in the denominator. The details of the acausality of a truncated series in Eq.(<ref>) can be found in <cit.>.

§ BDNK THEORY AND THE TRANSFORMATION OF THE `FLUID FRAME'

In the last few years, a new formulation of the relativistic first-order stable-causal theory (the BDNK theory) has been proposed, defining the out-of-equilibrium hydrodynamic variables in a general frame other than the Landau or Eckart frames, through postulated constitutive relations that include spatial as well as temporal gradients <cit.>.
In BDNK theory, if we further impose conformal symmetry, the energy-momentum tensor takes the form,

T^μν=(ε +Δε)[u^μu^ν+1/3Δ^μν]+(u^μW^ν+u^νW^μ)+π^μν ,

with the first-order dissipative field corrections as,

Δε/(ε+P) =3χ̃DT/T+χ̃(∂· u) ,    W^μ/(ε+P)=θ̃[∇^μT/T+Du^μ] ,   π^μν/(ε+P)=-2η̃σ^μν .

We have used Dε/(ε+P)=3DT/T for a conformal system where ε∼ T^4 and (ε+P)=4ε/3. The dispersion relations resulting from Eq.(<ref>) produce stable-causal modes only with non-zero values of θ̃ and χ̃. The neatness of this method lies in not requiring any additional degrees of freedom other than temperature and velocity to preserve causality and stability. Eq.(<ref>) and (<ref>) also show that the theory is local in fluid variables both spatially and temporally. However, as mentioned before, unlike the MIS theory, the definitions of the fluid velocity and the temperature are not fixed here in terms of the stress tensor or any other microscopic operator. In this section, we would like to redefine the velocity and the temperature in such a way that the stress tensor, expressed in terms of these redefined fluid variables, satisfies the Landau frame condition. Our philosophy is as follows. We shall assume that the one-point function of the microscopic stress tensor operator in a `near thermal' state is given by the BDNK stress tensor (<ref>). But it is expressed in terms of some `velocity' and `temperature' variables {u^μ, T}, which agree with the traditional definitions of velocity and temperature in global equilibrium but deviate in a generic `near equilibrium' state. On the other hand, we know that in the Landau frame, the velocity and the temperature fields[For simplicity, throughout this note, we shall restrict our analysis to conformal fluids, where temperature provides the only scale and the space-time dependence of all other dimensional variables like energy density is completely determined by that of the temperature. For example, ε(x^μ) = 3c T^4(x^μ), P(x^μ) = c T^4(x^μ), where c is some constant. Because of this, while discussing the space-time dependence of the fluid variables, we shall often use ε(x^μ), P(x^μ) or T(x^μ) interchangeably.] are locally defined in terms of the one-point function of the stress tensor T^μν as follows,

T^μ_ν û^ν =-ε̂ û^μ ,

where û^ν is the velocity in the Landau frame and ε̂ is the local energy density. Transforming the BDNK stress tensor into the Landau frame involves two steps. First, we have to solve for û^μ and ε̂ from equation (<ref>), where in place of T^μ_ν we shall use the BDNK stress tensor (<ref>). The second step involves rewriting the BDNK stress tensor in terms of these new fluid variables û^μ and ε̂. Generically, performing such a frame transformation in a non-perturbative manner is extremely cumbersome. But to make our analysis computationally tractable, we restrict it to a linearized treatment. Physically, we are restricting our analysis only to those fluid states whose deviation from global equilibrium is of very small amplitude. Such perturbations are enough to decide the linear stability and the causality of the theory, the key motivation behind the BDNK formalism. Since all definitions of the fluid variables agree in global equilibrium (or at the level of the `ideal' fluid), field redefinition is needed only in `non-equilibrium' fluid states. It follows that, if the deviation from equilibrium is of small amplitude such that a linearized treatment is allowed, the same should also be true for the field redefinition.
In other words, while redefining the velocity and the temperature, we can safely ignore terms that are nonlinear in the shift of the variables. In terms of equations, what we mean is the following. Suppose the velocity u^μ and the temperature T in the BDNK stress tensor are related to the Landau frame velocity û^μ and temperature T̂ in the following fashion,

u^μ = û^μ +δ u^μ,   T = T̂+ δ T ,

where the shift variables δu^μ and δT are small enough to be treated only linearly. Note that both δu^μ and δT are non-trivial functions of û^μ and T̂. Once we impose the Landau gauge condition (<ref>) after substituting (<ref>) in the BDNK stress tensor, it reduces to the following set of coupled and linear PDEs for the shift variables (linear simply because we have ignored all the nonlinear terms in δu^μ and δT),

δ u^μ+θ̃[D̂û^μ+∇̂^μT̂/T̂]+θ̃[D̂δu^μ+∇̂^μδT/T̂]=0 ,
δ T/T̂+χ̃[D̂T̂/T̂ + 1/3∇̂_μû^μ]+χ̃[D̂δT/T̂+1/3∇̂_μδu^μ]=0 .

This linearization simplifies the analysis so that we can have an `all-order' (in derivatives) formula for both the field redefinitions and the stress tensor in the new frame. It turns out that the `MIS-type nonlocality' emerges here again, even in BDNK theory, due to the infinite-order field redefinition needed to cast it in the Landau frame. At the linearized level, the field redefinition can be done in two different representations. In one case, we summed only the time derivatives up to infinite order, leading to a set of equations that look nonlocal in time (with the time derivative appearing in the denominator) but local in space. In the second case, we summed both the time and the space derivatives, leading to a fully nonlocal redefinition of the fluid variables. In either case, these nonlocalities (derivatives appearing in the denominator) could be absorbed by introducing new `non-fluid' variables. These two different methods are described in two different subsections.

§.§ Method-1: Frame transformation order by order

In this subsection, we shall solve these PDEs (<ref>) and (<ref>) order by order in the derivative expansion. We shall assume that δu^μ, δε and δT admit the following infinite series expansions,

δ u^μ =∑_n=1^∞δ u_n^μ,     δε=∑_n=1^∞δε_n,     δ T =∑_n=1^∞δ T_n .

Here, the subscript (n) denotes the order in terms of the derivative expansion. Substituting the expansion (<ref>) in the PDEs (<ref>) and (<ref>), one can easily find the solution in terms of the following recursive relations,

δ T_1/T̂= -χ̃[1/T̂D̂T̂+1/3∇̂_μû^μ] ,    δ T_n/T̂=-χ̃[1/T̂D̂δ T_n-1+1/3∇̂_μδ u^μ_n-1] ,  n≥ 2
δ u_1^μ= -θ̃[1/T̂∇̂^μT̂+D̂û^μ] ,    δ u_n^μ=-θ̃[1/T̂∇̂^μδ T_n-1+D̂δ u^μ_n-1] ,  n≥ 2 .

Eq.(<ref>) and (<ref>) together provide the successive field corrections up to any desired order. The next step is to rewrite the energy-momentum tensor in terms of the new fluid variables. The energy-momentum tensor in this frame turns out to be

T^μν= [ε̂+∑_n=1^∞δε_n+χ{3D̂T̂/T̂+∂_αû^α+3/T̂D̂∑_n=1^∞δ T_n+∂_α∑_n=1^∞δ u^α_n}](û^μû^ν+1/3Δ̂^μν) + [4/3ε̂∑_n=1^∞δ u_n^ν+θ{∇̂^νT̂/T̂+D̂û^ν+1/T̂∇̂^ν∑_n=1^∞δ T_n+D̂∑_n=1^∞δ u^ν_n}]û^μ + [4/3ε̂∑_n=1^∞δ u_n^μ+θ{∇̂^μT̂/T̂+D̂û^μ+1/T̂∇̂^μ∑_n=1^∞δ T_n+D̂∑_n=1^∞δ u^μ_n}]û^ν - 2ησ̂^μν-2η∑_n=1^∞[∂^⟨μδ u_n^ν⟩] ,

where, as mentioned before, only linearized terms are considered. The notations used read: Δ̂^μν=η^μν+û^μû^ν, D̂=û^μ∂_μ, ∇̂^μ=Δ̂^μν∂_ν, σ̂^μν=∂^⟨μû^ν⟩=Δ̂^μν_αβ∂^αû^β.
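Since the recursion above is linear, it can be checked mechanically. The following minimal numerical sketch (our own construction; all parameter values are arbitrary illustrative choices) iterates the recursion in Fourier space, where D̂ → -iω and ∇̂ → ik turn the operator relations into plain complex arithmetic, and compares the accumulated sums with the resummed closed forms obtained later in this section.

```python
# Fourier-space sanity check (ours) of the recursion for the shift variables.
# For plane-wave fluctuations, D-hat -> -i*w and the spatial gradient -> i*k,
# so delta T_n and delta u_n follow from simple complex arithmetic.
import numpy as np

theta, chi = 0.5, 0.4   # illustrative values of the scaled coefficients
w, k = 0.3, 0.4         # dimensionless frequency and momentum, inside convergence
tau, beta = 0.7, -0.2   # amplitudes of the T-hat and u-hat fluctuations

D, G = -1j * w, 1j * k  # Fourier symbols of D-hat and nabla-hat

# first-order shifts: delta T_1 / T-hat and delta u_1
t, u = -chi * (D * tau + G * beta / 3), -theta * (G * tau + D * beta)
t_sum, u_sum = t, u
for _ in range(200):    # iterate delta X_n in terms of delta X_{n-1}
    t, u = -chi * (D * t + G * u / 3), -theta * (G * t + D * u)
    t_sum, u_sum = t_sum + t, u_sum + u

# resummed closed forms (cf. the all-order expressions later in this section)
P, Q = 1 + theta * D, 1 + chi * D
R = P * Q - theta * chi * G**2 / 3
u_closed = beta / P - beta + (-theta * G * tau + theta * chi / 3 * G**2 * beta / P) / R
t_closed = tau * (P / R - 1) - chi / 3 * G * beta / R

print(np.allclose([t_sum, u_sum], [t_closed, u_closed]))  # expect True
```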
After substituting the recursive solutions for δu^μ_n and δT_n as given in (<ref>) and (<ref>), the energy density correction and the energy flux or momentum flow vanish, as expected in the Landau frame, and one finally has the following energy-momentum tensor up to all orders,

T^μν= ε̂(û^μû^ν+1/3Δ̂^μν) -2ησ̂^μν-2η∑_n=1^∞[∂^⟨μδ u_n^ν⟩] .

§.§.§ All-order sum of the temporal derivatives

Once we explicitly evaluate δu^μ_n and δT_n for the first few orders, a very nice pattern emerges, which we could use to sum this infinite series to get an all-order expression. Let us first list the velocity and temperature corrections up to the first four orders obtained from the Landau matching conditions,

δ u^μ_1 =-θ̃[D̂û^μ+∇̂^μT̂/T̂] ,
δ T_1/T̂ =-χ̃[D̂T̂/T̂+1/3(∇̂·û)]
δ u^μ_2 =θ̃^2D̂^2 û^μ+θ̃[θ̃+χ̃]1/T̂D̂∇̂^μT̂+θ̃χ̃/3∇̂^μ(∇̂·û) ,
δ T_2/T̂ =χ̃^2D̂^2T̂/T̂+χ̃/3[χ̃+θ̃]D̂(∇̂·û)+χ̃/3θ̃∇̂^2T̂/T̂
δ u^μ_3 =-θ̃^3D̂^3û^μ-θ̃[θ̃^2+θ̃χ̃+χ̃^2]D̂^2∇̂^μT̂/T̂ -θ̃χ̃/3[2θ̃+χ̃]D̂∇̂^μ(∇̂·û)-θ̃^2χ̃/3∇̂^2∇̂^μT̂/T̂,
δ T_3/T̂ =-χ̃^3D̂^3T̂/T̂-χ̃/3[χ̃^2+χ̃θ̃+θ̃^2]D̂^2(∇̂·û) -χ̃/3θ̃[2χ̃+θ̃]D̂∇̂^2T̂/T̂-χ̃^2/9θ̃∇̂^2(∇̂·û)
δ u^μ_4 =θ̃^4D̂^4û^μ+θ̃[θ̃^3+θ̃^2χ̃+θ̃χ̃^2+χ̃^3]D̂^3∇̂^μT̂/T̂ +θ̃χ̃/3[3θ̃^2+2θ̃χ̃+χ̃^2]D̂^2∇̂^μ(∇̂·û) +θ̃^2χ̃/3[2θ̃+2χ̃]D̂∇̂^2∇̂^μT̂/T̂+ θ̃^2χ̃^2/9∇̂^2∇̂^μ(∇̂·û),
δ T_4/T̂ =χ̃^4D̂^4T̂/T̂+χ̃/3[χ̃^3+χ̃^2θ̃+χ̃θ̃^2+θ̃^3]D̂^3(∇̂·û) +χ̃/3θ̃[3χ̃^2+2χ̃θ̃+θ̃^2]D̂^2∇̂^2T̂/T̂ + χ̃^2/9θ̃[2χ̃+2θ̃]D̂∇̂^2(∇̂·û)+χ̃^2/9θ̃^2∇̂^4T̂/T̂ ,
⋮

We see that, with increasing order of the velocity correction terms δu_n^μ, the order of the spatial gradient on each thermodynamic quantity (T or u^μ), as well as the order of the temporal gradient on each such spatial gradient term, increases successively. This increase of temporal derivatives is observed to follow a particular pattern such that they can be clubbed together into products of infinite sums. The same trend can be observed in the order-by-order temperature corrections as well. Below, we rewrite the velocity and the temperature corrections such that this repetitive pattern in the temporal derivatives becomes manifest.

u^μ =û^μ+δ u_1^μ+δ u_2^μ+⋯
=[1+(-θ̃D̂)+(-θ̃D̂)^2+(-θ̃D̂)^3+⋯]û^μ
+(-θ̃)[1+(-θ̃D̂)+(-θ̃D̂)^2+⋯][1+(-χ̃D̂)+(-χ̃D̂)^2+⋯]∇̂^μT̂/T̂
+(-θ̃)(-χ̃/3)[1+2(-θ̃D̂)+3(-θ̃D̂)^2+⋯][1+(-χ̃D̂)+(-χ̃D̂)^2+⋯] ∇̂^μ(∇̂·û)
+(-θ̃)^2(-χ̃/3)[1+2(-θ̃D̂)+3(-θ̃D̂)^2+⋯][1+2(-χ̃D̂)+3(-χ̃D̂)^2+⋯] ∇̂^2∇̂^μT̂/T̂
+(-θ̃)^2(-χ̃/3)^2[1+3(-θ̃D̂)+6(-θ̃D̂)^2+⋯][1+2(-χ̃D̂)+3(-χ̃D̂)^2+⋯]∇̂^2 ∇̂^μ(∇̂·û)
+⋯

The infinite sums over the time derivatives can be encapsulated in closed form, with relaxation-operator-like terms appearing in the denominators of the thermodynamic quantities and giving rise to pole-like structures.

u^μ=û^μ+δ u_1^μ+δ u_2^μ+⋯
= 1/(1+θ̃D̂)û^μ
+ (-θ̃)1/(1+θ̃D̂)1/(1+χ̃D̂)∇̂^μT̂/T̂
+ (-θ̃)(-χ̃/3)1/(1+θ̃D̂)^21/(1+χ̃D̂)∇̂^μ(∇̂·û)
+ (-θ̃)^2(-χ̃/3)1/(1+θ̃D̂)^21/(1+χ̃D̂)^2∇̂^2∇̂^μT̂/T̂
+ (-θ̃)^2(-χ̃/3)^21/(1+θ̃D̂)^31/(1+χ̃D̂)^2∇̂^2∇̂^μ(∇̂·û)
+ ⋯

Similarly, for the temperature, we have the following:

T =T̂+δ T_1+δ T_2+⋯
=[1+(-χ̃D̂)+(-χ̃D̂)^2+(-χ̃D̂)^3+⋯]T̂
+T̂(-χ̃/3)[1+(-χ̃D̂)+(-χ̃D̂)^2+⋯][1+(-θ̃D̂)+(-θ̃D̂)^2+⋯](∇̂·û)
+(-χ̃/3)(-θ̃)[1+2(-χ̃D̂)+3(-χ̃D̂)^2+⋯][1+(-θ̃D̂)+(-θ̃D̂)^2+⋯] ∇̂^2T̂
+T̂(-χ̃/3)^2(-θ̃)[1+2(-χ̃D̂)+3(-χ̃D̂)^2+⋯][1+2(-θ̃D̂)+3(-θ̃D̂)^2+⋯] ∇̂^2(∇̂·û)
+⋯

Just like for the velocity variable, the above series can be resummed as,

T=T̂+δ T_1+δ T_2+⋯
= 1/(1+χ̃D̂)T̂
+ T̂(-χ̃/3)1/(1+χ̃D̂)1/(1+θ̃D̂)(∇̂·û)
+ (-χ̃/3)(-θ̃)1/(1+χ̃D̂)^21/(1+θ̃D̂)∇̂^2T̂
+ T̂(-χ̃/3)^2(-θ̃)1/(1+χ̃D̂)^21/(1+θ̃D̂)^2∇̂^2(∇̂·û)
+ ⋯
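The bracketed sums above are just a geometric series and its derivative-weighted variant, so the resummation can be cross-checked symbolically. The following short sympy sketch (our own verification, with an arbitrary truncation order; x stands for -θ̃D̂ or -χ̃D̂) confirms the two scalar identities used:

```python
# Sympy check (ours) of the two series identities behind the resummation:
#   1 + x + x^2 + ...   = 1/(1 - x)      (simple poles),
#   1 + 2x + 3x^2 + ... = 1/(1 - x)^2    (double poles).
import sympy as sp

x = sp.symbols('x')
N = 8  # arbitrary truncation order

partial_simple = sum(x**m for m in range(N))              # 1 + x + ... + x^(N-1)
partial_weighted = sum((m + 1) * x**m for m in range(N))  # 1 + 2x + ... + N x^(N-1)

closed_simple = sp.series(1 / (1 - x), x, 0, N).removeO()
closed_weighted = sp.series(1 / (1 - x)**2, x, 0, N).removeO()

print(sp.simplify(closed_simple - partial_simple))      # expect 0
print(sp.simplify(closed_weighted - partial_weighted))  # expect 0
```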
Putting the velocity correction given by Eq.(<ref>) in Eq.(<ref>), we have the all-order frame-transformed BDNK stress tensor in the Landau frame as,

T^μν=ε̂(û^μû^ν+1/3Δ̂^μν)+π^μν ,

with,

π^αβ=-2ηΔ̂^αβ_μν[1/(1+θ̃D̂)∇̂^μû^ν+ (-θ̃)1/(1+θ̃D̂)1/(1+χ̃D̂)1/T̂∇̂^μ∇̂^νT̂+ (-θ̃)(-1/3χ̃)1/(1+θ̃D̂)^21/(1+χ̃D̂)∇̂^μ∇̂^ν∇̂·û+ (-θ̃)^2(-1/3χ̃)1/(1+θ̃D̂)^21/(1+χ̃D̂)^21/T̂∇̂^μ∇̂^ν∇̂^2T̂+ (-θ̃)^2(-1/3χ̃)^21/(1+θ̃D̂)^31/(1+χ̃D̂)^2∇̂^μ∇̂^ν∇̂^2∇̂·û+⋯] .

Note that at infinite order, for each additional spatial gradient, the temporal gradient resulting from the infinite sum also increases in the denominator, such that they exactly balance each other. This condition has been mentioned in <cit.> as a necessary condition for causality. Both equations (<ref>) and (<ref>) are just formal solutions, as they have derivatives in the denominator. Such expressions really make sense in the space of frequencies rather than in real time. However, what this indicates is a nonlocality in time (or an integration over time). Just like in the MIS theory, such nonlocalities could be recast into a local set of equations by introducing new `non-fluid' variables, which is the topic of the next subsection.

§.§.§ Introducing `non-fluid' degrees of freedom to make BDNK a local theory in the Landau frame

In section <ref>, Eq.(<ref>) and (<ref>) combined provide the energy-momentum tensor of a frame-transformed BDNK theory that is nonlocal in the fluid variables. In this subsection, our goal is to introduce new `non-fluid' degrees of freedom, ones that vanish in any state of global thermal equilibrium and, therefore, are not extensions of any conserved charges. This viewpoint also provides us some guidance as to how we should formulate the equations of motion for the `non-fluid' variables. Like π^μν in MIS theory, any non-fluid variable should approach a vanishing value in a `relaxation-type' equation. The relaxation time scales are provided by the poles in the infinite sum of temporal derivatives we performed in the previous subsection. However, unlike the MIS theory, here, after completing the infinite sum in the temporal derivatives, the degree of the pole increases ad infinitum along with more and more spatial derivatives in the numerator. This indicates an infinite number of non-fluid degrees of freedom in a nested series of `relaxation-type' equations. We can make this intuition precise in the following set of infinitely many equations. This is a local theory both in space and time, equivalent to BDNK, at least with respect to linearized perturbations around equilibrium in the hydrodynamic regime (barring a few singular points in the frequency domain), but it has an infinite number of degrees of freedom, as we expected,

∂_μ T^μν=0 ,
T^μν=ε̂(û^μû^ν+1/3Δ̂^μν)+π^μν ,
(1+θ̃ D̂) π^μν=-2ησ̂^μν+ρ_1^⟨μν⟩ ,
(1+χ̃ D̂) ρ^⟨μν⟩_1=(-2η)(-θ̃)1/T̂∇̂^⟨μ∇̂^ν⟩T̂+ρ_2^⟨μν⟩ ,
(1+θ̃ D̂) ρ^⟨μν⟩_2=(-2η)(-θ̃)(-1/3χ̃)∇̂^⟨μ∇̂^ν⟩∇̂·û+ρ_3^⟨μν⟩ ,
(1+χ̃ D̂) ρ^⟨μν⟩_3=(-2η)(-θ̃)^2(-1/3χ̃)1/T̂∇̂^⟨μ∇̂^ν⟩∇̂^2 T̂+ρ_4^⟨μν⟩ ,
(1+θ̃ D̂) ρ^⟨μν⟩_4=(-2η)(-θ̃)^2(-1/3χ̃)^2∇̂^⟨μ∇̂^ν⟩∇̂^2∇̂·û+⋯
⋮

Eq.(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and so on set up an infinite nested series of new degrees of freedom, much in the same line as the conventional MIS theory given by Eq.(<ref>) and (<ref>).
Eq.(<ref>)-(<ref>) together boil down to Eq.(<ref>) and (<ref>), where each additional spatial gradient term is now attributed to a new degree of freedom.

§.§ Method-2: Frame transformation in one go

In the previous section, we solved the linearized frame transformation equations (<ref>) and (<ref>) using the derivative expansion. Though the method of derivative expansion could be applied to solve even a nonlinear set of equations, we have heavily used linearization to simplify the solution further. In fact, the way we have summed the infinite series to generate temporal derivatives in the denominator is clearly a formal manipulation, and it makes sense only in the case of a linearized treatment in Fourier space. It also indicates an integration over time, which is then made local by introducing new `non-fluid' variables. Now, while solving (<ref>) and (<ref>), if we eventually allow ourselves to have temporal derivatives (D̂) in the denominator, there is no harm in having spatial derivatives as well (which again makes sense only when viewed in Fourier space and indicates an infinite order of spatial derivatives or integration/nonlocality in space). In this subsection, we shall use this formal manipulation of having both spatial and temporal derivatives in the denominator. This will lead to solutions of the frame transformation equations (<ref>) and (<ref>) in one go. The steps are as follows. First, we take the divergence of equation (<ref>), and the following two coupled scalar equations will give the two scalar variables (∇̂·δu) and (δT/T̂),

[1+θ̃D̂](∇̂·δ u)+θ̃∇̂^2δT/T̂+θ̃[∇̂^2T̂/T̂+D̂∇̂·û]=0,
[1+χ̃D̂]δ T/T̂+χ̃/3(∇̂·δ u)=0 ,

where in Eq.(<ref>) we have used the on-shell identity D̂T̂/T̂+1/3∇̂·û=0 that always holds at the linearized level under the Landau frame condition. Now, eliminating (∇̂·δu) from the above two equations, we find δT/T̂. Then, substituting this solution in (<ref>), we find the expression for δu^μ. The final solution (BDNK variables in terms of Landau frame variables) takes the following form:

u^μ=(û^μ+δ u^μ) =û^μ/(1+θ̃D̂) +(1+θ̃D̂)^-1[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2]^-1[-θ̃∇̂^μT̂/T̂+θ̃/3(θ̃+χ̃)∇̂^μ(∇̂·û)] ,
T=(T̂+δ T)=[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2]^-1[(1+θ̃D̂)T̂-χ̃/3T̂(∇̂·û)] .

In the Landau frame, the stress tensor will again have the structure of the form given in equation (<ref>). After substituting the solutions (<ref>) there, we finally get the following shear tensor,

π^μν=-[2η/(1+θ̃D̂)]σ̂^μν+[2ηθ̃/(1+θ̃D̂)][(∇̂^⟨μ∇̂^ν⟩T̂/T̂-1/3(θ̃+χ̃)∇̂^⟨μ∇̂^ν⟩(∇̂·û))/((1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2)] .

Equation (<ref>) could be further simplified using the fact that in the Landau frame, at the linearized level, ∇̂·û and D̂T̂/T̂ are related as follows,

∇̂·û+3 (D̂T̂/T̂) = terms nonlinear in fluctuations,

such that,

π^μν=-[2η/(1+θ̃D̂)]σ̂^μν+[2ηθ̃/(1+θ̃D̂)][{1+(θ̃+χ̃)D̂}(∇̂^⟨μ∇̂^ν⟩T̂/T̂)/((1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2)] .

The equations (<ref>), (<ref>), (<ref>) and (<ref>) are all very formal, with spatial as well as temporal derivatives in the denominator.
But following the strategy presented in the case of the MIS theory, we could recast equation (<ref>) as an inhomogeneous differential equation for the new `non-fluid' degree of freedom π^μν as follows,

[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2]{(1+θ̃D̂)π^μν+2ησ̂^μν}= 2ηθ̃{1+(θ̃+χ̃)D̂}(∇̂^⟨μ∇̂^ν⟩T̂/T̂) .

Here, just like in MIS theory, we are introducing only one `non-fluid' tensorial degree of freedom, but it follows a complicated inhomogeneous PDE, second order in spatial but third order in temporal derivatives[Note that in the limit χ̃→0, the equation (<ref>) becomes very similar to the corresponding equation in MIS theory, with a slight modification, as follows:

(1+θ̃D̂)π^μν= -2η[σ̂^μν-θ̃(∇̂^⟨μ∇̂^ν⟩T̂/T̂)] .].

§.§.§ Comparison with the previous method with infinite `non-fluid' variables

Generically, a nonlocal theory could be made local by introducing new degrees of freedom, but the process of `integrating in' new degrees could have ambiguities. The two methods described in the previous two subsections could be one example of this ambiguity. Both methods attempt to write a system of coupled equations involving both fluid and `non-fluid' variables that is equivalent to the equations in BDNK theory. However, the structure of the equations and also the extra `non-fluid' variables are so widely different that in the first case, we need to introduce an infinite number of variables, whereas in the second case, we need just one. In this subsection, we would like to see how these two sets of equations are actually equivalent, at least in some regime of frequency and spatial momenta. It turns out that the field redefinition we have used in the first method (see equations (<ref>) and (<ref>)) could be further rearranged in the following fashion. For the velocity redefinition, we have,

u^μ= û^μ+δ u_1^μ+δ u_2^μ+⋯
=1/(1+θ̃D̂)û^μ
+ (-θ̃)/(1+θ̃D̂)1/(1+χ̃D̂)[1+(-θ̃)/(1+θ̃D̂)(-χ̃/3)/(1+χ̃D̂)∇̂^2+⋯]∇̂^μT̂/T̂
+ (-θ̃)/(1+θ̃D̂)^2(-χ̃/3)/(1+χ̃D̂)[1+(-θ̃)/(1+θ̃D̂)(-χ̃/3)/(1+χ̃D̂)∇̂^2+⋯]∇̂^μ(∇̂·û).

Similarly, for the temperature redefinition, we have,

T= T̂+δ T_1+δ T_2+⋯
=1/(1+χ̃D̂)[1+(-θ̃)/(1+θ̃D̂)(-χ̃/3)/(1+χ̃D̂)∇̂^2+⋯]T̂
+ T̂(-χ̃/3)/(1+χ̃D̂)1/(1+θ̃D̂)[1+(-θ̃)/(1+θ̃D̂)(-χ̃/3)/(1+χ̃D̂)∇̂^2+⋯](∇̂·û) .

Substituting this rearranged field redefinition, the dissipative part of the stress tensor could also be rearranged as,

π^αβ= -2ηΔ̂^αβ_μν[1/(1+θ̃D̂)∇̂^μû^ν+(-θ̃)/(1+θ̃D̂)1/(1+χ̃D̂)1/T̂∇̂^μ∇̂^ν{1+(-θ̃)/(1+θ̃D̂)(-1/3χ̃)/(1+χ̃D̂)∇̂^2+⋯}T̂+(-θ̃)/(1+θ̃D̂)^2(-1/3χ̃)/(1+χ̃D̂)∇̂^μ∇̂^ν{1+(-θ̃)/(1+θ̃D̂)(-1/3χ̃)/(1+χ̃D̂)∇̂^2+⋯}∇̂_ρû^ρ] .
Now, the infinite sum in powers of the spatial derivative ∇̂^2 converges for those linearized perturbations where the operator satisfies the inequality

[(θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂)]<1 .

Within this radius of convergence, we can again sum the spatial derivatives and get the following expressions for the field redefinitions,

u^μ= û^μ+δ u_1^μ+δ u_2^μ+⋯
=1/(1+θ̃D̂)û^μ + (-θ̃) ∇̂^μT̂/T̂/[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3∇̂^2] +(-θ̃)(-χ̃/3)/(1+θ̃D̂)∇̂^μ(∇̂·û)/[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3∇̂^2] ,
= 1/(1+θ̃D̂)û^μ+[-θ̃∇̂^μT̂/T̂+θ̃/3(θ̃+χ̃)∇̂^μ(∇̂·û)]/(1+θ̃D̂)[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3∇̂^2] ,

and,

T =T̂+δ T_1+δ T_2+⋯
=(1+θ̃D̂)T̂/[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3∇̂^2] +T̂(-χ̃/3)(∇̂·û)/[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3∇̂^2] .

From (<ref>) it is simple to estimate π^μν as,

π^μν= -2ησ̂^μν/(1+θ̃D̂)-2η[-θ̃ ∇̂^⟨μ∇̂^ν⟩T̂/T̂+(θ̃χ̃/3)1/(1+θ̃D̂)∇̂^⟨μ∇̂^ν⟩(∇̂·û)]/[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3∇̂^2] ,
=-[2η/(1+θ̃D̂)]σ̂^μν+[2ηθ̃/(1+θ̃D̂)][{1+(θ̃+χ̃)D̂}(∇̂^⟨μ∇̂^ν⟩T̂/T̂)/((1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2)] .

It can be observed that Eq.(<ref>), (<ref>) and (<ref>) are exactly identical to (<ref>), (<ref>) and (<ref>) obtained by the field correction in one go. (In the second step of the derivation of (<ref>) and (<ref>), we have taken recourse to the identity (<ref>). For the detailed steps of the summation, the reader may refer to appendix <ref>.) So, within the radius of convergence, both methods actually generate the same set of equations, as expected. At this stage, let us emphasize one point. This method of `integrating in' new `non-fluid' degrees of freedom with new equations of motion is highly non-unique, even at the linearized level. For example, we could have chosen δu^μ and δT themselves to be the new `non-fluid' variables, satisfying the new equations as given in (<ref>) and (<ref>), and we could take the viewpoint that the u^μ and T fields in the BDNK theory are actually the Landau frame fluid variables plus `non-fluid' variables {δu^μ, δT}. Note that though δu^μ and δT would look very much like velocity and temperature corrections, they are still `non-fluid' variables in the Landau frame since they vanish in global equilibrium. Another choice of introducing infinitely many `non-fluid' degrees of freedom would be to simply use δu^μ_n and δT_n (as defined in (<ref>)) and then the recursive equations (<ref>) and (<ref>) would turn out to be the new equations of motion. The two choices of new variables, discussed here in detail, are basically guided by our sense of mathematical aesthetics and an attempt to adhere to the philosophy of MIS theory, where the new `non-fluid' variable is a rank-2 symmetric tensor, structurally very similar to the energy-momentum tensor. At the moment, we do not have any further physical support behind our choice of variables.

§ DISPERSION RELATION

As we have seen in the previous sections, a system of fluid equations with terms up to all orders in the derivative expansion could be converted to PDEs with a finite number of derivatives, provided we introduce new `non-fluid' degrees of freedom. The `non-fluid' variables we introduced basically capture the effect of a formal infinite sum over derivatives, leading to pole-like structures in the momentum-frequency space. Now, these infinite series in derivatives (or, more precisely, in the 4-momenta of the Fourier transform of linear fluctuations) could be summed only within their radius of convergence.
Once we extend the summed-up theories beyond that radius, we often encounter `non-hydrodynamic modes' that are not exactly the same as those of the BDNK theory[A similar situation arises in the case of MIS theory, as we have presented in section <ref>. In the hydrodynamic regime, the stress tensor must be described in a derivative expansion, which turns out to have an infinite number of terms (see equation (<ref>)). Now, in the frequency space (ω), this infinite sum can be performed only within a radius of convergence, which in this case turns out to be D ∼ |ω| ≤ 1/τ_π. Introducing the new `non-fluid' variable π^μν essentially amounts to extending the theory beyond this radius of convergence. Now ω = -i/τ_π is the new non-hydro mode that emerges in the process of integrating in π^μν, and this mode is exactly on the radius of convergence of the previous derivative expansion.]. However, in this section, we shall see that the hydrodynamic modes of the systems of equations described in the previous two sections are both exactly the same as those of BDNK at every order in the k expansion. This is a consistency test of our claim that our systems of equations are indeed equivalent to the BDNK formalism, at least in the hydrodynamic regime.

§.§ Method - 1

Here, the equivalent system is described by an infinite number of variables and, therefore, an infinite number of equations. For convenience, let us first quote the equations here again.

∂_μ T^μν=0 ,
T^μν=ε̂(û^μû^ν+1/3Δ̂^μν)+π^μν ,
(1+θ̃ D̂) π^μν=-2ησ̂^μν+ρ_1^⟨μν⟩ ,
(1+χ̃ D̂) ρ^⟨μν⟩_1=(-2η)(-θ̃)1/T̂∇̂^⟨μ∇̂^ν⟩T̂+ρ_2^⟨μν⟩ ,
(1+θ̃ D̂) ρ^⟨μν⟩_2=(-2η)(-θ̃)(-1/3χ̃)∇̂^⟨μ∇̂^ν⟩∇̂^ρû_ρ+ρ_3^⟨μν⟩ ,
(1+χ̃ D̂) ρ^⟨μν⟩_3=(-2η)(-θ̃)^2(-1/3χ̃)1/T̂∇̂^⟨μ∇̂^ν⟩∇̂^ρ∇̂_ρT̂+ρ_4^⟨μν⟩ ,
(1+θ̃ D̂) ρ^⟨μν⟩_4=(-2η)(-θ̃)^2(-1/3χ̃)^2∇̂^⟨μ∇̂^ν⟩∇̂^ρ∇̂_ρ∇̂^τû_τ+⋯
⋮

The `non-fluid' variables are π^μν and the infinite sequence of ρ_n^μνs, each satisfying a relaxation type of equation. We parameterize the perturbation around the static global equilibrium in the following fashion.

T̂ = T_0 + ϵ δ T e^i T_0(-ω t + k x)
û^μ = {1,0,0,0} +ϵ {0,β_x,β_y,0} e^i T_0(-ω t + k x)
ρ_n^xx =ϵ δρ_n^xx e^i T_0(-ω t + k x)= -2ρ_n^yy=-2ρ_n^zz     ∀ n
ρ_n^xy =ϵ δρ_n^xy e^i T_0(-ω t + k x)     ∀ n
All other components of ρ_n^μν vanish for every n.

Here, ϵ is a book-keeping parameter for linearization. Any term quadratic or higher order in ϵ will be ignored. We have scaled the frequency and the spatial momenta with the equilibrium temperature T_0 so that both ω and k are dimensionless. Similarly, we introduce new dimensionless parameters of the theory η̃_0, χ̃_0 and θ̃_0 as follows,

η̃≡η̃_0/T_0,    χ̃≡χ̃_0/T_0,    θ̃≡θ̃_0/T_0 .

If we substitute the fluctuations (<ref>) in equations (<ref>) to (<ref>), we find the dispersion polynomial P(ω,k) whose zeroes will give the modes where the fluctuations can have a nontrivial solution. Now, in this case, it is difficult to express P(ω,k) in a compact form since the equations involve an infinite number of variables. Instead, we shall determine the dispersion polynomial P_N(ω,k) for the same system, truncated at some arbitrary but finite order n=N, recursively. The infinite-N limit of P_N(ω,k) will give the actual dispersion polynomial of the system. We have,

P_N(ω,k) = P^shear(ω,k)  P^sound_N(ω,k) ,
where   P^shear(ω,k) = η̃_0 k^2 -iω (1- i θ̃_0 ω) ,
P^sound_N(ω,k) =(1 -i χ̃_0 ω)^{N/2} (1- i θ̃_0 ω)^{N/2} 𝒫_N(ω,k)   when N is even,
P^sound_N(ω,k) =(1 -i χ̃_0 ω)^{(N+1)/2} (1- i θ̃_0 ω)^{(N-1)/2} 𝒫_N(ω,k)   when N is odd.

Note that the factor P^shear(ω,k) is independent of N.
We could further check that it has the same form as that of the dispersion polynomial in BDNK theory (see (<ref>) and (<ref>)) in the shear channel. For 𝒫_N(ω,k) we have a recursion relation as follows,

𝒫_{2m-1}= (1 -i χ̃_0 ω) 𝒫_{2m-2}-i (4η̃_0/3^m) θ̃_0^m χ̃_0^{m-1} (ik)^{2(m+1)}   for odd N=2m-1,  m≥1
𝒫_{2m}= (1 -i θ̃_0 ω) 𝒫_{2m-1}-i (4η̃_0/3^m) θ̃_0^m χ̃_0^m (ik)^{2(m+1)} (-iω)   for even N=2m,  m>0
𝒫_0 =3 i ω^2 (1 -i χ̃_0 ω) + k^2(i + 4η̃_0ω+ θ̃_0ω) .

From equations (<ref>) and (<ref>), we could see that the degree of the polynomial (and therefore the number of zeroes) in the sound channel increases as we include more and more ρ_n^μνs in our system of equations. In other words, with increasing N, we keep getting more and more modes. However, it is easy to take the k→0 limit in these recursive equations, and one could see that in the sound channel, there are precisely two modes at ω=0, and all the rest are either at [ω = -i/χ̃_0] or [ω = -i/θ̃_0], similar to the BDNK theory in the k→ 0 limit. According to our definitions, the modes with vanishing frequencies in the k→ 0 limit are the hydro modes. So, this system of equations does have two hydro modes in the sound channel, as expected from the parent BDNK theory. Further, by explicit calculation, we can see that these hydrodynamic sound modes match those of BDNK even at non-zero k, if we treat k perturbatively in a power series expansion[If we truncate the equations at n=N, then the frequency of the sound mode matches that of BDNK up to order O(k^{N+3}). This we have checked in Mathematica for all N≤10.]. So clearly, the hydro modes of the equations described in this section, for both the sound and the shear channel (in the shear channel, even the non-hydro modes match with BDNK), are the same as those of BDNK, justifying our claim that this system of equations is equivalent to the BDNK system of equations in the hydrodynamic regime.

§.§ Method - 2

For convenience, let us first quote the system of equations that we would like to analyze.

∂_μ T^μν=0
T^μν=ε̂(û^μû^ν+1/3Δ̂^μν)+π^μν
[(1+θ̃D̂)(1+χ̃D̂)-θ̃χ̃/3 ∇̂^2] {(1+θ̃D̂)π^μν+2ησ̂^μν}= 2ηθ̃{1+(θ̃+χ̃)D̂}(∇̂^⟨μ∇̂^ν⟩T̂/T̂) .

As before, we parameterize the perturbation around the static global equilibrium in the following fashion,

T̂ = T_0 + ϵ δ T e^i T_0(-ω t + k x)
û^μ = {1,0,0,0} +ϵ {0,β_x,β_y,0} e^i T_0(-ω t + k x)
π^xx =ϵ δπ^xx e^i T_0(-ω t + k x)= -2π^yy=-2π^zz
π^xy =ϵ δπ^xy e^i T_0(-ω t + k x) ,

with ϵ as a book-keeping parameter for linearization. Any term quadratic or higher order in ϵ will be ignored. Again, the frequency and the spatial momenta are scaled with the equilibrium temperature T_0 so that both ω and k are dimensionless.
We have also introduced the new dimensionless parameters of the theory η̃_0, χ̃_0 and θ̃_0 as follows,

η̃≡η̃_0/T_0,    χ̃≡χ̃_0/T_0,    θ̃≡θ̃_0/T_0 .

Substituting equation (<ref>) in the system of equations, we find the following dispersion polynomial,

P(ω, k) = (1 - i θ̃_0ω)[(χ̃_0θ̃_0/3)k^2+ (1 - i θ̃_0ω)(1 - i χ̃_0ω) ] P_BDNK(ω,k) ,

where P_BDNK(ω,k) is the similar dispersion polynomial computed for the fluctuations around static equilibrium solutions in the BDNK system of equations as given in (<ref>) and (<ref>), and it reads,

P_BDNK = ( η̃_0 k^2 - i ω (1-i θ̃_0 ω )) ×[ χ̃_0θ̃_0ω^4 +i(χ̃_0+θ̃_0)ω^3-{1+2/3χ̃_0(θ̃_0+2η̃_0)k^2}ω^2-i/3(χ̃_0+θ̃_0+4η̃_0)ω k^2+k^2/3+θ̃_0/9(χ̃_0-4η̃_0)k^4 ] .

In other words, the zeroes of P_BDNK(ω,k) are the hydro and non-hydro modes of the BDNK theory. From equation (<ref>), it is clear that all the modes of the BDNK system are already contained in the system of equations (<ref>) to (<ref>). However, they also contain some new modes, which are the zeros of the prefactor,

P_extra(ω,k)≡ P(ω,k)/P_BDNK(ω,k) =(1 - i θ̃_0ω)[(χ̃_0θ̃_0/3)k^2+ (1 - i θ̃_0ω)(1 - i χ̃_0ω) ] .

Note that all these new modes are of the non-hydro type. One could further check that they correspond to the zero modes of the linear PDEs that determine the shift of the velocity and the temperature field (δu^μ and δT, respectively) under the frame transformation (see equations (<ref>) and (<ref>)). The existence of such zero modes implies that if we view δu^μ or δT as generated from a field redefinition (and not as new `non-fluid' variables), then even after fixing the Landau frame condition, there are still some unfixed residual ambiguities (which exist only for some special forms of ω(k)) in the definition of the fluid variables. On the other hand, if we absorb these shift fields (δu^μ and δT) into new `non-fluid' variables, the extra zeros of the prefactor P_extra(ω,k) do become new modes of the theory. In some sense, the residual ambiguities in the field redefinition procedure translate into the non-uniqueness of the UV degrees of freedom beyond the hydrodynamic regime.

§ CONCLUSION

In this note, we rewrite the stress tensor of the BDNK hydrodynamic theory in the Landau frame, at least for the part that will contribute to the spectrum of linearized perturbations around static equilibrium. Though the BDNK formalism has a finite number of derivatives, it turns out that in the Landau frame, it will have either an infinite number of derivatives, or one has to introduce non-fluid variables. There is no unique way to introduce non-fluid variables. Here, motivated by the structure of the MIS formalism, we have presented two different ways of doing it, resulting in two completely different-looking sets of equations. However, both sets have the same hydrodynamic modes as the BDNK theory. But in the process of `integrating in' the non-fluid variables, new non-hydrodynamic modes are generated. In both methods, we need to do a formal infinite sum over derivatives. We suspect that the convergence issues of these infinite sums, also related to the `non-invertibility' of the zero modes of the linear operator involved in the field redefinition, are responsible for these new non-hydrodynamic modes. However, this point needs further investigation. More generally, it would be interesting to know if we can identify a part of the spectrum that is invariant under field redefinition and, therefore, truly physical. In this context, the following observation seems useful.
In BDNK theory, if we set the viscosity (η) to zero (with nonzero χ and θ), then via field redefinition, the stress tensor could be made identical to that of an ideal fluid at the linearized level, though in the original `BDNK' frame it will have a nontrivial dispersion relation dependent on the values of χ and θ. This indicates that there might be some partial redundancy in the information contained in the spectrum of a fluid theory. It would be nice to have a more comprehensive understanding of this aspect of the spectrum. Our work has set the stage for a comparison between the BDNK and MIS-type theories. At first glance, they look very different. However, the fluid variables like velocity and temperature used to express the BDNK stress tensor are not the same as the ones used in MIS theory. A comparison is meaningful only if the basic variables of the equations are the same. Once we have done the required transformation, it turns out that though there are differences in the details, the basic structure of the nonlocality or the `non-fluid' variables is very similar in both theories. The advantage of the Landau frame is that the fluid variables are locally defined in terms of the one-point function of the stress tensor, and in this case, the causal equations turn out to have nonlocal terms or an infinite number of derivatives. In BDNK theories, on the other hand, the equations are local with a finite number of derivatives, but the fluid variables are related to the one-point function of the stress tensor in a very non-trivial and nonlocal fashion. However, there is more information in the BDNK formalism than what has just been stated above. It says that there exist causal fluid theories where the non-localities could be completely absorbed in a field redefinition, thereby generating a causal but local fluid theory with a finite number of derivatives. Since the final equation we derived for the shear tensor π^μν is different from what one has in MIS, it also says that the non-localities of MIS could possibly never be completely absorbed in a field redefinition. It would be interesting to extend this analysis to full nonlinear order. Also, it would be very informative to know whether and, if yes, how the story changes as one adds higher derivative corrections to BDNK theory.

§ ACKNOWLEDGEMENTS

S.B. acknowledges B. Withers for helpful discussions and Trinity College, Cambridge for hospitality while this manuscript was being prepared. S.B., S.M. and S.R. acknowledge the Department of Atomic Energy, India, for financial support.

§ APPENDIX

§.§ Detailed calculations of Method-1

In this section, we will derive the form of the frame transformations in an infinite-order derivative expansion.
To begin with, we rewrite the transformations of T and u^μ under the frame redefinitions

T - T̂ = δ T = ∑_n=1^∞ δ T_n ,      u^μ - û^μ = δ u^μ = ∑_n=1^∞ δ u_n^μ .

Substituting this into the expression of the stress tensor and using the Landau-frame condition, the following expressions are obtained for δT_n and δu^μ_n:

δ T_1/T̂ = -χ̃( D̂T̂/T̂ + 1/3∇̂·û ) ,    δ T_{n≥2}/T̂ = -χ̃( 1/T̂D̂δ T_{n-1} + 1/3∇̂·δ u_{n-1} ) ,
δ u^μ_1 = -θ̃( D̂û^μ + ∇̂^μT̂/T̂ ) ,    δ u^μ_{n≥2} = -θ̃( D̂δ u^μ_{n-1} + 1/T̂∇̂^μδ T_{n-1} ) .

§.§.§ Transformation of velocity

Using the forms given above, we can express δu^μ_n in terms of the lower-order δT_ns and δu^μ_ns as

δ u^μ_n = (-θ̃)[ D̂ δ u^μ_{n-1} + ∇̂^μ δ T_{n-1}/T̂ ]
= [ (-θ̃)^2 D̂^2 δ^μ_ν + θ̃χ̃/3 ∇̂^μ∇̂_ν ] δ u^ν_{n-2} + (-θ̃)(-θ̃-χ̃) D̂ ∇̂^μ δ T_{n-2}/T̂
= [ (-θ̃)^3 D̂^3 δ^μ_ν + θ̃χ̃/3 (-2θ̃-χ̃) D̂ ∇̂^μ∇̂_ν ] δ u^ν_{n-3} + (-θ̃)[ (θ̃^2+θ̃χ̃+χ̃^2) D̂^2 + θ̃χ̃/3 ∇̂^2 ] ∇̂^μ δ T_{n-3}/T̂
= [ (-θ̃)^4 D̂^4 δ^μ_ν + θ̃χ̃/3 (3θ̃^2+2θ̃χ̃+χ̃^2) D̂^2 ∇̂^μ∇̂_ν + (θ̃χ̃/3)^2 ∇̂^2 ∇̂^μ∇̂_ν ] δ u^ν_{n-4} + (-θ̃)[ -(θ̃^3+θ̃^2χ̃+θ̃χ̃^2+χ̃^3) D̂^3 + θ̃χ̃/3 · 2(-θ̃-χ̃) D̂ ∇̂^2 ] ∇̂^μ δ T_{n-4}/T̂
= [ (-θ̃)^5 D̂^5 δ^μ_ν - θ̃χ̃/3 (4θ̃^3+3θ̃^2χ̃+2θ̃χ̃^2+χ̃^3) D̂^3 ∇̂^μ∇̂_ν + (θ̃χ̃/3)^2 (-3θ̃-2χ̃) D̂ ∇̂^2 ∇̂^μ∇̂_ν ] δ u^ν_{n-5} + (-θ̃)[ (θ̃^4+θ̃^3χ̃+θ̃^2χ̃^2+θ̃χ̃^3+χ̃^4) D̂^4 + θ̃χ̃/3 (3θ̃^2+4θ̃χ̃+3χ̃^2) D̂^2 ∇̂^2 + (θ̃χ̃/3)^2 (∇̂^2)^2 ] ∇̂^μ δ T_{n-5}/T̂ .

In this way, continuing the sequence, δu^μ_n can be expressed in terms of û and T̂ as

δ u^μ_n = (-θ̃D̂)^n û^μ + (-θ̃) ∑_m=0^{n-1} c_mn ( θ̃χ̃/3 ∇̂^2 )^m D̂^{n-1-2m} ∇̂^μT̂/T̂ + ∑_m=0^{n-2} θ̃χ̃/3 d_mn ( θ̃χ̃/3 ∇̂^2 )^m D̂^{n-2-2m} ∇̂^μ(∇̂·û) ,

where

c_mn = 1/(m!)^2 ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m ∑_l=0^{n-1} (-θ̃)^l (-χ̃)^{n-1-l} ,     d_mn = 1/(m+1) ( -∂/∂θ̃ ) c_mn .

The expressions in (<ref>)-(<ref>) can be reproduced from this form in (<ref>). To find δu^μ, we need to sum over all the δu^μ_ns from n=1 to ∞:

δ u^μ = ∑_n=1^∞ δ u^μ_n = ( ∑_n=1^∞ (-θ̃D̂)^n ) û^μ + (-θ̃) ( ∑_n=1^∞ ∑_m=0^{n-1} c_mn ( θ̃χ̃/3 ∇̂^2 )^m D̂^{n-1-2m} ) ∇̂^μT̂/T̂ + ( ∑_n=1^∞ ∑_m=0^{n-2} θ̃χ̃/3 d_mn ( θ̃χ̃/3 ∇̂^2 )^m D̂^{n-2-2m} ) ∇̂^μ(∇̂·û) .

Considering the first summation in (<ref>), we find that it is an infinite summation of the form

∑_n=1^∞ x^n = x ∑_n=0^∞ x^n = x/(1-x) .

Hence, from the first summation, we get

( ∑_n=1^∞ (-θ̃D̂)^n ) û^μ = (-θ̃D̂)/(1+θ̃D̂) û^μ .

The second summation in (<ref>) is actually a nested summation over three different indices,

∑_n=1^∞ ∑_m=0^{n-1} 1/(m!)^2 ( θ̃χ̃/3 ∇̂^2 )^m D̂^{n-1-2m} ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m ( ∑_l=0^{n-1} (-θ̃)^l (-χ̃)^{n-1-l} ) .

Replacing the index n by N=n-1, (<ref>) becomes

∑_N=0^∞ ∑_m=0^N 1/(m!)^2 ( θ̃χ̃/3 ∇̂^2 )^m D̂^{N-2m} ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m ( ∑_l=0^N (-θ̃)^l (-χ̃)^{N-l} ) .

For values m>N, we see that (∂/∂θ̃)^m or (∂/∂χ̃)^m acting on the summation over l gives 0, as the highest power of θ̃ or χ̃ in the series is N only. So, we can add an infinite number of such zeros and extend the summation over m to ∞ instead of N:

∑_N=0^∞ ∑_m=0^∞ 1/(m!)^2 ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m ( ∑_l=0^N (-θ̃)^l (-χ̃)^{N-l} D̂^N ) .

The summations over m and N now have independent limits; hence, their order can be interchanged, and we can rewrite the summation as

∑_m=0^∞ 1/(m!)^2 ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m ∑_N=0^∞ ( ∑_l=0^N (-θ̃)^l (-χ̃)^{N-l} D̂^N ) .

The summations over N and l can then be interchanged using the Cauchy product formula

( ∑_n=0^∞ a_n ) ( ∑_l=0^∞ b_l ) = ∑_n=0^∞ ( ∑_l=0^n a_l b_{n-l} ) ,

and (<ref>) can now be expressed as

∑_m=0^∞ 1/(m!)^2 ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m ( ∑_N=0^∞ (-θ̃D̂)^N ) ( ∑_l=0^∞ (-χ̃D̂)^l )
= ∑_m=0^∞ 1/(m!)^2 ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m ( 1/(1+θ̃D̂) 1/(1+χ̃D̂) )
= ∑_m=0^∞ 1/(m!)^2 ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( m! D̂^m/(1+θ̃D̂)^{m+1} ) ( m! D̂^m/(1+χ̃D̂)^{m+1} )
= 1/(1+θ̃D̂) 1/(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m
= 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) .

Now, let us consider the third summation,

∑_n=1^∞ ∑_m=0^{n-2} θ̃χ̃/3 · 1/(m!)^2 1/(m+1) ( θ̃χ̃/3 ∇̂^2 )^m ( -∂/∂θ̃ )^{m+1} ( -∂/∂χ̃ )^m D̂^{n-2-2m} ( ∑_l=0^{n-1} (-θ̃)^l (-χ̃)^{n-1-l} ) .

Here, we see that for m=n-1, the number of ∂/∂θ̃ derivatives becomes larger than the highest power of θ̃ present in the series over l, thus making the term corresponding to m=n-1 zero.
We can add this zero term, and then our sum becomes

∑_n=1^∞ ∑_m=0^{n-1} θ̃χ̃/3 · 1/(m!)^2 1/(m+1) ( θ̃χ̃/3 ∇̂^2 )^m ( -∂/∂θ̃ )^{m+1} ( -∂/∂χ̃ )^m D̂^{n-2-2m} ( ∑_l=0^{n-1} (-θ̃)^l (-χ̃)^{n-1-l} ) .

Using N=n-1 as before,

∑_N=0^∞ ∑_m=0^N θ̃χ̃/3 · 1/(m!)^2 1/(m+1) ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m D̂^{-1} ( -∂/∂θ̃ ) ( ∑_l=0^N (-θ̃)^l (-χ̃)^{N-l} D̂^N ) .

Again, extending the sum over m up to ∞ and interchanging the summations as in the previous case, we get

∑_m=0^∞ θ̃χ̃/3 · 1/(m!)^2 1/(m+1) ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m D̂^{-1} ( -∂/∂θ̃ ) ( ∑_N=0^∞ ∑_l=0^N (-θ̃)^l (-χ̃)^{N-l} D̂^N )
= ∑_m=0^∞ θ̃χ̃/3 · 1/(m!)^2 1/(m+1) ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( -∂/∂θ̃ )^m ( -∂/∂χ̃ )^m D̂^{-1} ( D̂/(1+θ̃D̂)^2 1/(1+χ̃D̂) )
= ∑_m=0^∞ θ̃χ̃/3 · 1/(m!)^2 1/(m+1) ( θ̃χ̃/3 ∇̂^2 D̂^{-2} )^m ( (m+1)! D̂^m/(1+θ̃D̂)^{m+2} · m! D̂^m/(1+χ̃D̂)^{m+1} )
= θ̃χ̃/3 · 1/(1+θ̃D̂)^2 1/(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m
= θ̃χ̃/3 · 1/(1+θ̃D̂) · 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) .

So, putting all these results together, (<ref>) becomes

δ u^μ = (-θ̃D̂)/(1+θ̃D̂) û^μ + (-θ̃) 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) ∇̂^μT̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) ∇̂^μ(∇̂·û)
⇒ δ u^μ = (-θ̃D̂)/(1+θ̃D̂) û^μ + 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) ( -θ̃∇̂^μT̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^μ(∇̂·û) ) ,

which we see is identical to the δu^μ calculated in (<ref>). Also worth noticing is the point that, had we not summed over m in the second and third summations, then δu^μ would have been left in the form of an infinite series,

δ u^μ = (-θ̃D̂)/(1+θ̃D̂) û^μ + 1/(1+θ̃D̂)(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^μT̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^μ(∇̂·û) } ,
u^μ = û^μ + δ u^μ = 1/(1+θ̃D̂) û^μ + 1/(1+θ̃D̂)(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^μT̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^μ(∇̂·û) } ,
π^μν = 1/(1+θ̃D̂) (-2ησ̂^μν) - 2η 1/(1+θ̃D̂)(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^⟨μ∇̂^ν⟩T̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) } .

We can recast this form of π^μν into the form of a relaxation equation given by

(1+θ̃D̂) π^μν = -2η[ σ̂^μν + 1/(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^⟨μ∇̂^ν⟩T̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) } ] = -2ησ̂^μν + ρ_1^⟨μν⟩ ,

where ρ_1^⟨μν⟩ is given by

ρ_1^⟨μν⟩ = -2η 1/(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^⟨μ∇̂^ν⟩T̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) } .

It can again be recast into a relaxation equation as

⇒ (1+χ̃D̂) ρ_1^⟨μν⟩ = -2η ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^⟨μ∇̂^ν⟩T̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) } = -2η(-θ̃) ∇̂^⟨μ∇̂^ν⟩T̂/T̂ + ρ_2^⟨μν⟩ ,

with ρ_2^⟨μν⟩ defined, and associated with another relaxation equation, as

ρ_2^⟨μν⟩ = -2η θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) - 2η ∑_m=1^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^⟨μ∇̂^ν⟩T̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) } ,
(1+θ̃D̂) ρ_2^⟨μν⟩ = -2η θ̃χ̃/3 ∇̂^⟨μ∇̂^ν⟩(∇̂·û) + ρ_3^⟨μν⟩ ,

where again ρ_3^⟨μν⟩ contains the infinite series. In this way, the sequence continues, and the general terms are given by (for n≥0)

(1+χ̃D̂) ρ_{2n+1}^⟨μν⟩ = (-2η)(-θ̃) ( θ̃χ̃/3 ∇̂^2 )^n ∇̂^⟨μ∇̂^ν⟩T̂/T̂ + ρ_{2n+2}^⟨μν⟩ ,
(1+θ̃D̂) ρ_{2n+2}^⟨μν⟩ = (-2η) θ̃χ̃/3 ( θ̃χ̃/3 ∇̂^2 )^n ∇̂^⟨μ∇̂^ν⟩(∇̂·û) + ρ_{2n+3}^⟨μν⟩ ,
ρ_{2n+1}^⟨μν⟩ = -2η ( θ̃χ̃/3 ∇̂^2 )^n 1/(1+χ̃D̂) ∑_m=0^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^⟨μ∇̂^ν⟩T̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) } ,
ρ_{2n+2}^⟨μν⟩ = -2η ( θ̃χ̃/3 ∇̂^2 )^n [ θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) + ∑_m=1^∞ ( (θ̃χ̃/3)∇̂^2/(1+θ̃D̂)(1+χ̃D̂) )^m { -θ̃∇̂^⟨μ∇̂^ν⟩T̂/T̂ + θ̃χ̃/3 · 1/(1+θ̃D̂) ∇̂^⟨μ∇̂^ν⟩(∇̂·û) } ] .

These are the general forms of the ρ_n^⟨μν⟩s given in (<ref>)-(<ref>).

§.§.§ Transformation of temperature

As was done in the previous subsection, the expression for δT_n/T̂ can be written as

δ T_n/T̂ = (-χ̃D̂)^n T̂/T̂ + (-χ̃) ∑_m=0^{n-1} c_mn ( θ̃χ̃/3 ∇̂^2 )^m D̂^{n-1-2m} ( ∇̂·û/3 ) + θ̃χ̃/3 ∑_m=0^{n-2} f_mn ( θ̃χ̃/3 ∇̂^2 )^m D̂^{n-2-2m} ∇̂^2T̂/T̂ ,

where c_mn is defined in the same way as for δu^μ_n, and f_mn is defined in terms of c_mn as

f_mn = 1/(m+1) ( -∂/∂χ̃ ) c_mn .

Similar to the case of δu^μ, we again take an infinite summation over n to obtain δT/T̂ as

δ T/T̂ = ∑_n=1^∞ δ T_n/T̂ = (-χ̃D̂)/(1+χ̃D̂) T̂/T̂ + (-χ̃) 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) ( ∇̂·û/3 ) + θ̃χ̃/3 · 1/(1+χ̃D̂) 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) ∇̂^2T̂/T̂ ,

and from there we obtain the same T = T̂ + δT as in (<ref>),

T = 1/(1+χ̃D̂) T̂ - χ̃ 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) ( T̂ ∇̂·û/3 ) + θ̃χ̃/3 · 1/(1+χ̃D̂) 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) ∇̂^2T̂
= 1/((1+θ̃D̂)(1+χ̃D̂) - θ̃χ̃/3 ∇̂^2) [ (1+θ̃D̂)T̂ - χ̃/3 T̂(∇̂·û) ] .
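The central step in both summations above is the derivative identity that turns the nested sums into a geometric series. The following short sympy sketch (our own check, at finite truncation; a, b and x stand for θ̃D̂, χ̃D̂ and (θ̃χ̃/3)∇̂², treated as commuting scalars) verifies it order by order:

```python
# Finite-order sympy check (ours) of the key identity behind the resummation:
#   (1/(m!)^2) (-d/da)^m (-d/db)^m [1/((1+a)(1+b))] = [1/((1+a)(1+b))]^(m+1),
# so summing with weight x^m gives the geometric series 1/((1+a)(1+b) - x).
import sympy as sp

a, b, x = sp.symbols('a b x')
base = 1 / ((1 + a) * (1 + b))

# the (-1)^m sign factors from the two derivatives cancel, so plain
# derivatives suffice; the m = 0 term is trivially base itself
terms_ok = all(
    sp.simplify(sp.diff(base, a, m, b, m) / sp.factorial(m)**2 - base**(m + 1)) == 0
    for m in range(1, 6)
)

geometric = base / (1 - x * base)      # sum of x^m * base^(m+1) in closed form
target = 1 / ((1 + a) * (1 + b) - x)
print(terms_ok and sp.simplify(geometric - target) == 0)  # expect True
```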
http://arxiv.org/abs/2312.16407v1
{ "authors": [ "Sayantani Bhattacharyya", "Sukanya Mitra", "Shuvayu Roy" ], "categories": [ "nucl-th", "gr-qc", "hep-ph", "hep-th" ], "primary_category": "nucl-th", "published": "20231227042858", "title": "Frame transformation and first order stable-causal hydrodynamic theory" }
Department of Physics, University of Virginia, Charlottesville, Virginia 22904, USA
Department of Physics and Astronomy and Bhaumik Institute for Theoretical Physics, University of California, Los Angeles, California 90095, USA
Department of Physics, University of Virginia, Charlottesville, Virginia 22904, USA
We consider magnetic Weyl metals as a platform to achieve current control of magnetization textures with transport currents, utilizing their underlying band geometry. We show that the transport current in a Weyl semimetal produces an axial magnetization due to orbital magnetic moments of the Weyl electrons. The associated axial magnetization can generate a torque acting on the localized magnetic moments. For the case of a magnetic vortex in a nanodisk of Weyl materials, this current-induced torque can be used to reverse its circulation and polarity. We discuss the axial magnetization torques in Weyl metals on general symmetry grounds, and compare their strength to current-induced torques in more conventional materials.
Magnetic vortex control with current-induced axial magnetization in centrosymmetric Weyl materials
J. G. Yang, Yaroslav Tserkovnyak, and D. A. Pesin
January 14, 2024
==================================================================================================
Introduction.— Discrete degrees of freedom in condensed matter systems have firmly established themselves as perpetual candidates for information storage units. Their microscopic versions - single spins, single charges in single-electron boxes - have a long history of being considered for qubit realizations. Discrete macroscopic degrees of freedom, predominantly those associated with magnetization and its direction, have been not merely candidates, but also workhorses of information storage, albeit a classical one; see reviews <cit.>. Proposals to use mesoscopic magnetization textures in nanoscale samples as platforms for quantum information storage and manipulation have also emerged <cit.>. Associated with these proposals is the question of control of small-scale magnetization textures. Accomplishing this control with electric currents promises practical benefits coming from the scalability of the architectures and reduced power consumption; see Ref. <cit.> for a recent review of the field. In the realm of spintronics of conventional materials <cit.>, as opposed to topological ones, there are a number of well-known ways to approach magnetization control with current. The list includes spin-transfer torques in spin-valve-type devices <cit.>, spin current injection via the spin Hall effect <cit.>, and current-induced torques in noncentrosymmetric systems with strong spin-orbit coupling <cit.>. In this work we consider magnetic Weyl metals as a possible material candidate for the realization of information storage and manipulation in nanoscale systems. We show that there is a new source of current-induced spin torques in these materials, related to the current-induced axial magnetization, which changes sign between the valleys of the Weyl material. The axial current associated with the axial magnetization induces a nonequilibrium spin polarization of itinerant carriers. This spin polarization is capable of controlling textures of the underlying magnetization of localized spins, which is responsible for the equilibrium magnetism in the sample.
This mechanism of texture control is distinct from the existing proposals of current-induced spin torques in Weyl materials due to the axial Hall current produced by the pseudomagnetic field of magnetic textures <cit.>, or torques stemming from the chiral anomaly <cit.>. All three mechanisms are compared in the Discussion presented at the end of this paper. As a specific application of the developed theory, we consider magnetic vortex control in thin magnetically soft nanodisks; see Ref. <cit.> for a review of the subject. The practical motivation behind this choice of a texture comes from the fact that the vortex in a nanodisk is a compact object with discrete states determined by its core polarization and chirality. The vortex state develops to minimize the magnetic dipolar energy in nanodisks of size roughly exceeding their magnetic exchange length. This suggests that nanodisks assembled into an array will have weak interaction due to their stray fields, which is beneficial for high-density information storage. Below we will show that the torques due to the current-induced axial magnetization in Weyl metals can efficiently flip vortex chirality, and even its polarization under the right circumstances. Electromagnetic fields and pseudofields in a magnetic Weyl metal.— We view a magnetic Weyl metal in the spirit of the s-d exchange model, which includes a subsystem of localized electrons responsible for the magnetization, and a system of itinerant electrons carrying transport currents. Our goal is to find a way to control the magnetization of localized electrons with the transport currents. We use the prototypical model of a magnetic Weyl metal with only two Weyl points with opposite chiralities close to the Fermi level. Such a model preserves the inversion symmetry, but the time-reversal symmetry is broken by the magnetization, M, of the localized electrons. The Hamiltonian of the model is given by <cit.> H_w= ∫ d^3r ψ ^†(r)[vτ_zσ·p -Jτ_0 σ· m] ψ(r), where ψ ( r) are the field operators for electrons, σ is a vector of Pauli matrices acting in the space spanned by the Weyl bands, which we will take to coincide with the actual spin, while τ_z and τ_0 act in the valley space. The unit matrix τ_0 will not be explicitly written from here on. Furthermore, in Eq. (<ref>) v is the Fermi speed, J is the exchange energy constant between itinerant electrons and localized spins, and m≡ M/M_s is a unit vector in the direction of the localized magnetization. For definiteness, we will assume v>0, and denote the τ_z=± valleys with the chirality index χ=±. As simple as it is, the model (<ref>) might pertain to the case of EuCd_2As_2, either in a small external magnetic field <cit.>, or grown in the ferromagnetic phase, as well as K_2Mn_3(AsO_4)_3 <cit.>. But one should keep in mind recent evidence that EuCd_2As_2 is in fact a narrow-gap semiconductor <cit.>. In this work, we will consider a Weyl magnet in which there exist both a static magnetic texture, m= m ( r), and a transport current density, j_tr( r). Our aim is to find a way to manipulate the texture with the transport current. A general way to achieve this goal follows from Eq. (<ref>), which shows that the magnetization of the localized electrons couples to the spin polarization of the itinerant ones; the latter induces an effective Zeeman field B_eff=J/M_s⟨ψ^†σψ⟩, where the ⟨…⟩ denotes the average with respect to the density matrix of the itinerant electrons, and M_s is the saturation magnetization of the localized electrons.
In turn, the spin polarization of the itinerant electrons is identical to the axial current, j_5, defined as the difference in the individual valley currents:⟨ψ^†σψ⟩=1/ev( j_+- j_-)≡1/ev j_5,where j_χ is the current in the valley with chirality χ, and e<0 is the charge of the electron. The conclusion is that one must search for valley-asymmetric currents to control magnetization, or magnetic textures. In the presence of a nonuniform magnetization, one has to take into account electric fields, magnetic fields coming from the magnetization and the transport current, as well as pseudomagnetic fields from magnetization gradients while considering a Weyl metal. The pseudomagnetic field appears because in Hamiltonian (<ref>) magnetization couples to the electrons in the two valleys as an axial vector potential, having the opposite signs in the opposite valleys, eA_5=J/v m.Then it is clear that in the presence of a spatially varying magnetization the axial vector potential A_5 can develop a non-zero curl, and the corresponding pseudomagnetic field ise B_5=J/v∇× m. There are many physical effects brought about by the fields mentioned above. A review of the pseudofield physics in Weyl metals can be found in Ref. araki2020review. Fortunately, not all of them are equally important in the present context, and we would like to discuss qualitatively which parts of physics need to be included into the qualitative theory, before we actually attempt it. We will focus on the phenomena specific to Weyl magnets, leaving aside phenomena associated with the usual diffusive transport in metals. First of all, there is a number of known phenomena that can be used to control the magnetization. The two most famous examples are the current-induced magnetization in noncentrosymmetric samples, and the spin Hall effect. In the present work we consider centrosymmetric crystals, such that the current-induced spin polarization does not appear, and assume that the spin-Hall effect does not exist, which is true for the model of Eq. (<ref>).It has already been noticed in Ref. <cit.> that an axial magnetic field drives an axial Hall current in the presence of a transport electric field, which leads to a contribution to a net spin polarization. In this work we will consider transport currents flowing along the axial magnetic field, hence the axial Hall current can be neglected. Furthermore, in the presence of a transport electric field, the pseudomagnetic field can drive an anomaly-type term in the equation for the local (number) density of electrons <cit.>, ∂_t n=e^2/2π^2ħ^2 E· B_5.Physically, this term stems from the divergence of the space-dependent current driven by intrinsic nonuniform Hall conductivity proportional to the separation of the Weyl nodes in the momentum space <cit.>, made space-dependent by the space-dependent magnetization. The change in the electronic density implied by Eq. (<ref>) is essentially forbidden in metallic samples due to screening. Perturbations that violate local charge neutrality are effectively relaxed in three-dimensional metals on the scale of Maxwell relaxation time, determined by the inverse Drude conductivity: τ_M∼ϵ_0/σ_D. Even for a reasonably low conductivity of σ_D∼ 10^6 Ω^-1m^-1 <cit.>, this relaxation time is of order of 10^-17 s, hence such perturbations can be completely disregarded. Finally, a discussion of a Weyl material is incomplete without mentioning the effect of surface Fermi arcs. 
In a magnetic nanodisk sample of the type considered below, the particular shape and length of a Fermi arc are determined by the projection of the magnetization on a sample surface in real space. This implies that the energy of the surface electronic subsystem depends on the magnetization orientation. It represents a type of surface anisotropy that describes a tendency to orient the magnetization perpendicular to the surface. This tendency is at odds with the effect of dipolar interactions, whose energy is minimized when the magnetization is oriented along the surface, and no `magnetic charges' are produced. To roughly determine whether the surface states need to be taken into account in the energy balance, one can compare the energy of the system for magnetization along the surface, when the dipolar energy is minimized, but the Fermi arc energy is maximum, and the energy when the magnetization is perpendicular to the surface, in which case the Fermi arc energy is minimal, while the dipolar energy is the largest. Obviously, the exact balance depends on the shape of the sample, so we aim only to estimate for samples of what size the Fermi arcs become important. We note that the energy associated with a Fermi arc depends on the corresponding surface state spectrum, and on the occupation of the surface states. The surface state spectrum, determining the shape of a Fermi arc, can be quite involved, with spiraling around a Weyl point projection on the surface Brillouin zone <cit.>, and depends on the details of the confining potential. However, the overall length of the arc in a simple model with two nodes goes linearly with the magnetization projection on the surface in real space. Further, we can assume that the bandwidth of the Fermi arc states is of order J. Then the surface energy density associated with a patch of 2D momentum space occupied by the surface states is J^3/(2πħ v)^2. This energy should be compared to μ_0 M_s^2ℓ_ex, where ℓ_ex∼ 5 nm is the magnetic exchange length over which the magnetization can vary near a surface. For μ_0 M_s∼ 1 T, J∼ 0.1 eV, and v∼ 10^5 m/s, we see that the surface state energy density is about one third of the magnetic energy density, and can be ignored for our purposes. At the same time, it is obvious that the surface state energy very sensitively depends on the value of the exchange constant J, so one can easily encounter materials in which it has to be taken into account. While this is an interesting research direction, we do not pursue it here. Boundary current torques in current-carrying Weyl metals.— Given the discussion above, it is clear that our goal is to find a new source of axial current, which flows in response to a transport current. Since the axial current is a pseudovector, the linear relationship between it and the transport current (a polar vector) in a centrosymmetric material is only possible either in a nonuniform situation, or near a sample boundary, where the inversion symmetry is broken by the surface. We will argue below that just the right axial currents flow as surface “axial magnetization” currents. Indeed, each valley of the band structure described by model (<ref>) breaks effective inversion symmetry, which acts by reversing the momentum counted relative to the valley position in momentum space. The full inversion symmetry is restored when the valleys are interchanged in addition to momentum inversion.
This implies that a transport current can induce magnetization in each of the valleys, but the total magnetization vanishes: we will refer to this situation as having a nonzero “axial magnetization”. This means that at this level the transport current cannot affect the magnetization of the localized electrons in the centrosymmetric model we are considering. However, the magnetizations in each valley, being opposite in direction, create opposite magnetization currents in regions of space where each of the magnetizations varies in space, in particular near sample boundaries. In other words, there is an axial current created by the axial magnetization. This valley current is synonymous with the spin polarization of itinerant electrons, see Eq. (<ref>). Thus we expect boundary torques acting on the magnetization of the localized electrons from this mechanism. To describe the above mechanism of torque appearance quantitatively, we write down the expression for the magnetization in each valley as M_χ=∫_ pμ_χ, p f_χ, p, where ∫_ p≡∫d^3p/(2πħ)^3, f_χ, p is the occupation number of a state with quasimomentum p in valley χ and in the band (conduction or valence) that contains a Fermi surface. We do not introduce the band index explicitly so as not to clutter the notation. Further, μ_χ, p is the effective magnetic moment of an electron with quasimomentum p. Such a magnetic moment has both spin and orbital contributions, but the orbital effects are usually much stronger in Weyl materials, which is related to the fact that the Bohr magneton contains the bare electron mass, which is very large as compared to the effective mass scale, p_F/v, determining the orbital magnetic moments of Weyl electrons. (See note 28 in Ref. <cit.> for more details.) It was shown in Ref. <cit.> that the orbital magnetic moments contain both an intrinsic contribution <cit.> and extrinsic contributions from side jump and skew impurity scattering processes. However, for the simple isotropic model of Eq. (<ref>), side jump and skew scattering processes vanish for isotropic impurity scattering, and only the intrinsic contribution needs to be taken into account. It would be enough to add tilt to the dispersion of the Weyl cones to get an extrinsic contribution to the magnetic moment <cit.>. For a single Weyl point of chirality χ, we have the following expression for the intrinsic orbital magnetic moment <cit.>: μ_χ=χ (eħ v/2p) e_ p, which works for both the conduction and valence bands, and where e_ p is the unit vector in the direction of p. To calculate the axial magnetization of Weyl electrons, we use Eq. (<ref>) with the nonequilibrium distribution function of the electrons in the presence of a transport electric field, E, δ f_p=-τ_tr e E·∂_ p f_eq, with f_eq being the equilibrium Fermi-Dirac distribution in the band with the Fermi surface, and τ_tr being the transport mean free time. Since both the transport current and the axial magnetization are determined by the same transport electric field, we can exclude it to obtain a direct relationship between the axial magnetization and the transport current: M_5≡ M_+- M_- = (ħ/2p_F) j_tr. Using j_5=∇× M_5, and combining Eq. (<ref>) with Eqs. (<ref>) and (<ref>), we obtain the final expression for the current-induced effective Zeeman field in a centrosymmetric magnetic Weyl metal: B_eff= (ħ J/2eϵ_F M_s) ∇× j_tr, where ϵ_F≡ p_F v is the Fermi energy counted from the energy of the Weyl nodes. Eq. (<ref>) for the effective Zeeman field is one of the central results of this work.
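To get a sense of the numbers, a rough back-of-the-envelope estimate of this boundary field can be made with the representative values quoted elsewhere in this paper (ϵ_F ∼ 50 meV, J/ϵ_F ∼ 10, μ_0 M_s = 1 T, a current density of 5×10^11 A/m^2 varying over an exchange length ℓ_ex ∼ 5 nm, so that |∇× j_tr| ∼ j_tr/ℓ_ex at the boundary). The short Python sketch below is our own estimate, not a result quoted in the paper:

e = 1.602e-19        # elementary charge, C
hbar = 1.055e-34     # J s
mu0 = 4e-7 * 3.141592653589793

eps_F = 50e-3 * e    # Fermi energy, J
J_ex  = 10 * eps_F   # exchange energy J, taking J/eps_F ~ 10
M_s   = 1.0 / mu0    # saturation magnetization for mu0*M_s = 1 T, in A/m
j_tr  = 5e11         # transport current density, A/m^2
l_ex  = 5e-9         # exchange length, m

curl_j = j_tr / l_ex                                  # |curl j_tr| ~ j_tr / l_ex
B_eff = hbar * J_ex / (2 * e * eps_F * M_s) * curl_j  # Eq. for B_eff above
print(f"B_eff ~ {B_eff:.2f} T")                       # ~ 0.4 T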
Being determined by the curl of the transport current, this field vanishes in the bulk of an isotropic system, for which j_tr=σ_DE, because of the Faraday's law for a static electric field, ∇× E=0. For an anisotropic model, in which the conductivity is a nontrivial tensor, this field can exist even in the bulk of the system if the electric field is nonuniform. But in any case the effective field is nonzero near a boundary of a sample, if there is a flow of current along the boundary. Another important feature of Eq. (<ref>) is that the magnitude of the effective field acting at the boundaries of a sample does not depend on the sample size, as long as spatial quantization is not important. We can gain further insight into the energy associated with the effective magnetic field, E^Z_eff=-∫_ r B_eff· M,if we perform an integration by parts over a volume bounded by a surface outside the sample, over which the magnetization vanishes. We then trivially obtain E^Z_eff=-∫_ r B_5· M_5,where the pseudomagnetic field B_5 is given by Eq. (<ref>), and the current-induced axial magnetization is given by Eq. (<ref>). Hence the energy that we obtained is nothing but the Zeeman energy of the two axial magnetizations in the corresponding axial magnetic field due to a magnetic texture. Before switching to applications, we would like to give another form of E^Z_eff, appropriate for a sample with curl of the transport current confined to its surface: E^Z_eff=-ħ J/2eϵ_F∮_Sm· j_tr× n.The surface integral in the last term, representing the effective Zeeman energy, runs over the entire sample surface, and n is the outer normal to the surface element dS. Expression (<ref>) shows that the effective field (<ref>) is not unique in its form: an analogous contribution would come from the spin Hall effect, see the Discussion part at the end of this paper. Our point is that this field in Weyl metals is strong enough to control magnetic textures even without a spin Hall effect. Conversely, if the spin Hall effect is being studied in a magnetic Weyl material, it should be kept in mind that the current-induced axial magnetization can affect interpretation of experiments.Finally, we would like to address the question of what limits the magnitude of the effective field. As is clear from Eq. (<ref>), the maximum magnitude of the effective field is set by the maximum current one can drive through the sample. In a Weyl system, the maximum current in the linear regime is limited by the condition that the drift speed be smaller than the Fermi speed of Weyl electrons. In other words, the current is limited by j_max=en_ Wv, where n_w is the total density of the Weyl electrons. Then from Eq. (<ref>) it follows that the maximum effective field scales as B^max_eff∝ϵ_F^2, and saturates at ϵ_F∼ J, where the Fermi surfaces near the two nodes go through a Lifshitz transition into a single trivial Fermi surface.Magnetic vortex control in a Weyl nanodisk.— We now show that the current-induced effective Zeeman field is also effective in the sense of magnetization control. We consider a thin metallic disk shown in Fig.<ref>, in which a transport current is setup perpendicular to the plane of the disk. This current setup differs from the one considered for magnetic texture control in Ref. <cit.>, where the current flow was in the plane of the disk. We assume that the transport current is reasonably uniform in the bulk of the disk, and is mostly perpendicular to the top and bottom surfaces of the disk. 
In this case the effective Zeeman field acts on the side surface of the disk, see Eq. (<ref>) and Fig. <ref>. As is seen from Eq. (<ref>), for Jv>0, the field obeys the left-hand rule, opposite to the Ørsted field created by the current, because e<0. We will neglect the Ørsted field for the time being, but later will show that for disks of sizes measured in tens of nanometers the effect of the Ørsted field is small as compared to the effective field considered in this paper. Given the setup described above, it is clear that the effective Zeeman field gives preference to a certain chirality of magnetic vortices, and can switch between different chiralities for strong enough transport currents. Below we describe this process quantitatively. We will assume that the magnetic energy of the disk, E_M, contains an exchange part associated with magnetization gradients, a dipolar part defined by the demagnetization field H_d, and the effective Zeeman part, Eq. (<ref>), in the presence of a current: E_M=A∫_ r∇_a m∇_am-μ_0 M_s/2∫_ r m· H_d+E^Z_eff. Below we will use the value of A=10^-11 J/m for the exchange constant, μ_0M_s=1 T for the saturation magnetization, and J/ϵ_F∼ 10 in the expression for the effective Zeeman energy, Eq. (<ref>). For these numbers the magnetic exchange length is ℓ_ex=√(2A/μ_0M_s^2)≈ 5 nm. We neglect the Ørsted field of the current, as its effect is small for the sizes of the disks considered, which we checked numerically. A disk of large enough radius contains a magnetic vortex <cit.> of the in-plane magnetization, see Fig. <ref>. The vortex develops to minimize the dipolar energy at the expense of an increase in the exchange energy. To keep the exchange energy finite, a vortex must have a core with out-of-plane magnetization. Then a vortex is characterized by two discrete indices, each taking values ±1: the chirality of the magnetization winding away from the core, and the polarization direction of the core. The four possible combinations of these indices are all degenerate for the energy (<ref>) in the absence of a transport current. It is worth noting that magnetic vortices of the described kind have definite positive winding for either sign of the chirality, in the sense that the azimuthal angle of the magnetization, ϕ, winds in the positive direction with the azimuthal angle of the cylindrical coordinate system in real space, α, the z-axis of which goes through the center of the disk, perpendicular to its plane: ϕ=α+π/2+η. In the equation for the azimuthal angle of the magnetization, the quantity η=0,π corresponds to positive and negative chirality, respectively. An anti-vortex with negative winding would create a magnetization pattern with a nonzero radial component at the disk side surface, and thus would have high magnetostatic energy due to the magnetic charges on that surface. The fixed winding makes the topological index of the vortex, or its skyrmion charge, N(z)=1/4π∫ dxdym( r)·(∂_x m( r)×∂_ym( r)), dependent on the vortex core polarization only. Since the sample is three-dimensional, one can only define the topological charge for a z=const plane, and the result is z-dependent. However, we checked numerically that even for a disk of diameter only twice as large as its thickness the skyrmion charge N(z) as a function of z does not deviate from the values of ± 1/2 by more than 5%, so we are dealing with well-defined vortices.
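This statement can be illustrated with a small numpy sketch (ours, not the micromagnetic simulation used in the paper): evaluating N for a rigid vortex ansatz with the winding ϕ = α + π/2 + η introduced above, and with a hypothetical Gaussian core profile assumed for simplicity, gives N ≈ ±1/2 with the sign set by the core polarization p and independent of the chirality η:

import numpy as np

def vortex(nx=400, core=0.1, p=1, eta=0.0):
    # rigid vortex ansatz on [-1,1]^2: phi = alpha + pi/2 + eta, Gaussian core of polarization p
    xs = np.linspace(-1.0, 1.0, nx)
    X, Y = np.meshgrid(xs, xs, indexing='ij')
    r, alpha = np.hypot(X, Y), np.arctan2(Y, X)
    mz = p * np.exp(-(r / core)**2)
    mip = np.sqrt(np.clip(1.0 - mz**2, 0.0, None))     # in-plane magnitude
    phi = alpha + np.pi/2 + eta
    return np.stack([mip*np.cos(phi), mip*np.sin(phi), mz])

def skyrmion_charge(m, dx):
    # N = (1/4pi) Int m . (d_x m x d_y m) dx dy
    dmx = np.gradient(m, dx, axis=1)
    dmy = np.gradient(m, dx, axis=2)
    dens = np.einsum('iab,iab->ab', m, np.cross(dmx, dmy, axis=0))
    return dens.sum() * dx * dx / (4.0 * np.pi)

dx = 2.0 / (400 - 1)
for p in (+1, -1):
    for eta in (0.0, np.pi):   # both chiralities
        print(p, eta, round(skyrmion_charge(vortex(p=p, eta=eta), dx), 3))
# the charge comes out ~ +/-0.5, set by p only, for either eta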
We define the chirality as the volume integral, over the interior of the sample not including its boundary, of the z-component of the curl of the magnetization direction: C=1/2π r d∫ d^3 re_z·∇× m. This expression saturates at ± 1 for a vortex with m independent of the z-coordinate, which lies in the xy-plane near the sample boundary. For small and thin disks these conditions are satisfied in practice with high accuracy. Without a current, the two vortex chiralities are degenerate in energy. It follows from the chirality definition (<ref>) and Eq. (<ref>) for the effective Zeeman energy expressed via the current-induced axial magnetization, as well as Eq. (<ref>) for the axial magnetization itself, that for a transport current along the disk axis the effective Zeeman energy is proportional to the average disk chirality. Hence it makes one of the chirality states metastable. The Ørsted field of the current would have the same qualitative effect, but for disk diameters around a hundred nanometers the effect of the Ørsted field is small. For large enough current the metastability is removed, and the effective field induces deterministic switching into the low-energy state. A rigorous analytic way to determine the critical switching current would be to perform the linear stability analysis of the vortex excitation modes <cit.>. To this end one calculates the eigenmodes of small magnetization oscillations, and finds the value of the effective boundary field that drives the lowest frequency to zero. The zero-frequency mode becomes the nucleation one <cit.>, along which the chirality reversal proceeds. This analysis is very involved due to complicated patterns of demagnetizing fields. We thus proceed with a numerical analysis of the switching current. To obtain the critical current for the chirality switching, we simulated the system dynamics with slowly varying values of the transport current to determine the value at which the chirality switches. We did not attempt to simulate realistic temporal dynamics for some current pulses. The results of the simulation for a disk of radius 50 nm and variable thickness are shown in Fig. <ref>. We obtained critical currents of the order of 5× 10^11 A/m^2, which are feasible from the practical point of view. Equating the value of the critical current to the maximum achievable current in the linear regime, j_max=en_ Wv, and using v=10^5 m/s, we see that the required carrier density is n_ W∼10^19 cm^-3. Hole doping of 10^20 cm^-3 in EuCd_2As_2 was reported in Ref. <cit.>. We also noticed empirically that for relatively large values of the Gilbert damping constant the polarization of the core switched together with the chirality in small disks. The typical graphs of the chirality and the skyrmion number as functions of the applied static current are shown in Fig. <ref>. Note that with decreasing value of α the polarization fails to switch, while the critical current does not change. This shows that the polarization switching is a dynamic effect, which is sensitive to the speed of the chirality reversal, while the chirality itself switches when it loses metastability, regardless of how fast the subsequent dynamics is. Finally, we note that for the pure Ørsted field of the current, neglecting the effective boundary field, the critical switching current for the geometry considered here is roughly 5× 10^12 A/m^2, an order of magnitude larger than for the boundary field. This is consistent with the findings of Ref.
<cit.>, and shows that neglecting this field was justified for our purposes. Of course, for large enough disks the Ørsted field will eventually dominate the switching. Discussion.— The central result of this work is the observation that a transport current flowing in a centrosymmetric magnetic Weyl metal induces an axial magnetization. The induced axial magnetization currents correspond to the spin polarization of itinerant electrons. This spin polarization can be used to control the chirality of a vortex in the magnetization of localized electrons via an effective Zeeman field, Eq. (<ref>). It is interesting to compare this mechanism with proposals to generate axial currents, and hence itinerant spin polarization, in current-carrying Weyl metals in the existing literature. In Ref. <cit.> it was shown that the axial Hall effect, driven by the pseudomagnetic field B_5, produces an axial Hall current j_5∝ B_5× j_tr. Later, in Ref. <cit.>, the axial version of the chiral magnetic effect was used in conjunction with the chiral anomaly to generate j_5∝ B_5( B· j_tr), where B is the external magnetic field driving the chiral anomaly (which can also be the field of the magnetization itself). In contrast, in this work the axial current takes the form j_5∝∇× j_tr. This axial current, unlike those from Refs. <cit.>, is not proportional to B_5, Eq. (<ref>). This makes it at least one or maybe two orders of magnitude smaller than the other two axial currents, since B_5 is large due to the large value of the exchange constant and the small exchange length that determines the size of magnetic textures in ferromagnets. However, its independence from B_5 is also its strength from the symmetry point of view: the axial current considered here is even in the localized magnetization, and hence can distinguish chiralities of a magnetic vortex. As we demonstrated, the magnitude of the effect is sufficient to drive chirality reversals in nanosized samples more efficiently than with the Ørsted field of the current. It is also interesting to compare the boundary spin polarization associated with the axial magnetization current to the one that would have been induced by an isotropic spin Hall effect, if it existed in the sample. In that case the spin polarization current is given by j^a_b=θϵ_abcj_tr,c/e, where j^a_b is the current of the a-th component of spin polarization in the b-th spatial direction, and θ is the spin-Hall angle. Then for an electric current flowing in the z-direction along a boundary perpendicular to the x-direction there is a spin accumulation, of the y-th component of spin polarization, with surface density of magnitude ∼τ_sfθ j_tr/e. This result needs to be compared to the spin accumulation given by the current-induced axial magnetization current, ∼ħ j_tr/eϵ_F. For instance, for Pt τ_sf∼ 10^-14 s <cit.>, and θ∼ 10^-1 <cit.>, which yields τ_sfθ∼ 10^-15 s. For the mechanism described in this work and the typical ϵ_F∼ 50 meV we obtain ħ/ϵ_F∼ 10^-14 s, implying a much larger boundary spin polarization. This order-of-magnitude larger boundary spin polarization may even be utilized in spintronics applications. The work of JGY and DAP was supported by the National Science Foundation under Grant No. DMR-2138008. The work of YT was supported by the U.S. Department of Energy, Office of Basic Energy Sciences under Award No. DE-SC0012190.
http://arxiv.org/abs/2312.16122v1
{ "authors": [ "J. G. Yang", "Yaroslav Tserkovnyak", "D. A. Pesin" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20231226171307", "title": "Magnetic vortex control with current-induced axial magnetization in centrosymmetric Weyl materials" }
In the weakly supervised temporal video grounding study, previous methods use predetermined single Gaussian proposals which lack the ability to express diverse events described by the sentence query. To enhance the expression ability of a proposal, we propose a Gaussian mixture proposal (GMP) that can depict arbitrary shapes by learning importance, centroid, and range of every Gaussian in the mixture. In learning the GMP, each Gaussian is not trained in a feature space but is implemented over a temporal location. Thus the conventional feature-based learning for the Gaussian mixture model is not valid for our case. In our special setting, to learn a moderately coupled Gaussian mixture capturing diverse events, we propose a pull-push learning scheme using pulling and pushing losses, each of which plays an opposite role to the other. The effects of the components in our scheme are verified in depth with extensive ablation studies, and the overall scheme achieves state-of-the-art performance. Our code is available at https://github.com/sunoh-kim/pps.§ INTRODUCTION Temporal video grounding is a challenging task in computer vision, where the goal is to find the temporal location of starting and ending points described by a sentence query in an untrimmed video. The task has potential for applications such as video understanding <cit.>, video summarization <cit.>, and video retrieval <cit.>, because it can automatically extract temporal video locations of interest described by given sentences. For temporal video grounding, a fully supervised approach has made remarkable progress <cit.> but requires manual annotations of temporal locations for every video-sentence pair. These manual annotations are usually labor-intensive and noisy due to the subjectivity of annotators, which limits their scalability to real-world scenarios and makes trained models biased <cit.>. To overcome the limitation, a weakly supervised approach has been proposed to solve the temporal video grounding problem, where only video-sentence pairs are required for training. Some existing methods <cit.> use a sliding window strategy to generate proposals for a temporal location but use a lot of pre-defined proposals, which require heavy computation. To reduce the required number of proposals, <cit.> generate learnable Gaussian proposals. However, these single Gaussian proposals with a peak at the center lack the expression ability for diverse query-relevant events in a video. To enhance the expression ability, we propose a Gaussian mixture proposal (GMP) that can depict arbitrary shapes by learning importance, centroid, and range of every Gaussian in the mixture. Since our GMP is implemented over a temporal location, conventional feature-based learning for the Gaussian mixture model <cit.> is not applicable to our approach.
In our special setting, our goal is to train the GMP to capture a temporal location semantically relevant to a sentence query that includes diverse, moderately coupled events. In <ref>, for instance, one sentence query includes two semantic events coupled by “A man yells to them on the side" and “They continue dancing". To capture the coupled events in a query, we propose a Pull-Push Scheme (PPS) to learn a GMP whose Gaussians are moderately coupled. Specifically, we first define a GMP with learnable parameters: the importance, centroid, and range of every Gaussian in the mixture. To learn the importance, we propose an importance weighting strategy that represents the importance levels of each Gaussian mask for a query-relevant location. To generate the GMP that represents a query-relevant location, our PPS is trained to reconstruct the sentence query from the proposal. In our scheme, the Gaussians in one GMP should be located near a query-relevant temporal location, but should not overlap too much with the others, so as to represent diverse events. To this end, our scheme leverages a pulling loss and a pushing loss, each of which plays an opposite role to the other to produce moderately coupled Gaussians. The pulling loss lets the Gaussians stay close to each other by pulling the Gaussian centroids together. The pushing loss prevents the Gaussians from overlapping too much with the others by forcing the Gaussians to be less overlapped. We verify that our scheme generates high-quality proposals that significantly improve recall rates on the Charades-STA <cit.> and ActivityNet Captions <cit.> datasets. We also demonstrate the effectiveness of each component in our scheme with extensive ablation studies. In summary, our contributions are as follows. * We generate a Gaussian mixture proposal that represents a query-relevant temporal location by learning the importance, centroid, and range of every Gaussian to enhance the expression ability of the proposal. * We propose a pull-push learning scheme that uses a pulling loss and a pushing loss, each of which plays an opposite role to the other to capture diverse events. * The proposed components are verified in depth with extensive ablation studies, and the overall scheme achieves state-of-the-art performance.
<cit.> iteratively refines proposal confidence scores to prevent the grounding results from being biased. Unlike the previous methods, our goal is to enhance the expression ability of proposals, hence we generate Gaussian mixture proposals which can effectively represent an arbitrary shape.§.§ Gaussian-based Approach Gaussians have been studied in various tasks <cit.>. For weakly-supervised temporal video grounding, <cit.> propose learnable Gaussian proposals. Specifically, <cit.> generates one Gaussian proposal for one temporal location, and <cit.> generates multiple Gaussian proposals and selects one proposal to predict a query-relevant temporal location.For action localization,<cit.> uses multiple Gaussian proposals to localize multiple actions, where each single Gaussian proposal represents a temporal location of a specific action. However, a single Gaussian is a pre-determined shape with a high value at its center, which is not suitable for expressing diverse query-relevant events. To effectively represent the diverse events, we propose a Gaussian mixture proposal by learning importance, centroid, and range of every Gaussian in the mixture. For action localization, <cit.> proposes a layer of Gaussian mixture that replaces a conventional convolutional layer to extract video features.Also, there have been various tasks that train Gaussian mixture model in a feature space <cit.>. Unlike these methods, we generate Gaussian mixture proposals that are directly implemented over a temporal location. To represent a query-relevant temporal location that has diverse events, we propose a pull-push scheme to learn the moderately coupled Gaussian mixture.§ PROPOSED METHODThe overall scheme of the proposed method is depictedin <ref>. We generate a new proposal model using multiple learnable Gaussian masks from a video feature 𝐕 and a query feature 𝐐. Here, each mask in a video plays a role in focusing on a specific video event and suppressing the rest.We use a mixture model consisting of multiple Gaussian masks to produce proposals. Each positive proposal is called Gaussian mixture proposal (GMP). To generate K GMPs (𝐏_p), we propose an importance weighting strategy to represent importance levels of each Gaussian mask for a query-relevant location.For the importance weighting strategy, the importance-based reconstructor receives the generated Gaussian masks and estimates the importance weights of the Gaussian masks for the mixture. Then, the GMP is obtained via attentive pooling with the Gaussian masks and importance weights. To capture diverse query-relevant events, we propose a pull-push learning scheme, where the Gaussian masks are trained by pulling loss and pushing loss.The pulling loss ℒ_pull makes the masks in a GMP be densely overlapped, whereas the pushing loss ℒ_push makes the masks in a GMP be less overlapped.Each of K easy negative proposals (𝐏_en) is also composed of multiple Gaussian masks to capture diversely-shaped confusing locations within the given video. Unlike the positive proposal, the easy negative proposal does not use importance weights because the importance weights only represent query-relevant levels, which is only needed for positive proposals. 
The importance-based reconstructor receives positive proposals from the Gaussian mixture proposal generator and reconstructs the sentence query from a randomly hidden sentence query.§.§ Encoders Given a video and sentence query, we use pre-trained encoders to obtain a video feature and a query feature, following previous methods <cit.>. Video encoder. An untrimmed raw video 𝒱 is made into a video feature 𝐕 through a pre-trained 3D Convolutional Neural Network (3D CNN) <cit.>. The video feature 𝐕 is given by 𝐕=[𝐯_1,𝐯_2,…,𝐯_T]^⊤∈ℝ^T× d_V, where 𝐯_t is the t^th segment feature, T is the number of video segments, and d_V is the dimension of the segment feature 𝐯_t. Query encoder. Given a sentence query 𝒮, we use the pre-trained GloVe <cit.> word embedding to obtain a query feature 𝐐=[𝐪_1,𝐪_2,…,𝐪_N]^⊤∈ℝ^N× d_Q, where 𝐪_n is the n^th word feature, N is the number of words, and d_Q is the dimension of the word feature 𝐪_n.§.§ Gaussian Mixture Proposal Generator From video and query features 𝐕 and 𝐐, the proposed GMP generator yields K positive GMPs (𝐏_p) and K easy negative proposals (𝐏_en), in addition to one hard negative proposal (𝐏_hn) adopted from existing methods. Modeling of GMP for positive proposal. For the generation of the positive proposal, we first extract a multi-modal feature 𝐆 reflecting both visual and textual information. We use a transformer <cit.> to aggregate the information of 𝐕 and 𝐐 by 𝐆=f_td(𝐕, f_te(𝐐)) =[𝐠_1,𝐠_2,…,𝐠_T,𝐠_cls]^⊤∈ℝ^(T+1)× d_G, where the transformer uses 𝐐 as an input to the transformer encoder f_te(·) and both 𝐕 and f_te(𝐐) as inputs to the transformer decoder f_td(·), and d_G is the dimension of the multi-modal feature. For the video feature 𝐕, we append a learnable token 𝐯_cls, same as the [CLASS] token in <cit.>, by 𝐕=[𝐯_1,𝐯_2,…,𝐯_T, 𝐯_cls]^⊤∈ℝ^(T+1)× d_V. By the transformer, correspondingly, the vector 𝐠_cls∈ℝ^d_G stores the sequence information of all words and video segments. For the k^th positive proposal 𝐏_p^(k), we define multiple Gaussian masks 𝐌^(k) = [𝐌_1^(k),𝐌_2^(k),…,𝐌_E_p^(k)]^⊤∈ℝ^E_p× T, where 𝐌_l^(k) is the l^th Gaussian mask for 𝐌^(k), and E_p is the number of masks. The k^th proposal 𝐏_p^(k) is defined by a mixture of the Gaussian masks 𝐌^(k). The Gaussian centers 𝐜^(k) and widths (standard deviations) 𝐬^(k) of 𝐌^(k) are calculated as functions of 𝐠_cls, as 𝐜^(k) = Sigmoid(𝐖_𝐜 𝐠_cls+𝐛_𝐜) ∈ℝ^E_p, 𝐬^(k) = 1/σSigmoid(𝐖_𝐬 𝐠_cls+𝐛_𝐬) ∈ℝ^E_p. Here, 𝐖_𝐜 𝐨𝐫 𝐬 and 𝐛_𝐜 𝐨𝐫 𝐬 are defined as learnable parameters of a fully connected layer, and σ is a hyper-parameter controlling the width of the masks. Consequently, we obtain the l^th Gaussian mask 𝐌_l^(k) = [f_l^(k)(0), f_l^(k)(1), …, f_l^(k)(T-1)]^⊤∈ℝ^T using f_l^(k)(t) = exp(-((t/(T-1)-𝐜^(k)_l)/𝐬^(k)_l)^2), where 𝐜^(k)_l, 𝐬^(k)_l∈ℝ are the l^th elements of 𝐜^(k), 𝐬^(k), respectively. The k^th proposal 𝐏_p^(k) is obtained from the Gaussian masks 𝐌^(k) via attentive pooling with mask importance weights 𝐰^(k)∈ℝ^E_p. Finally, we generate K positive proposals 𝐏_p = [ 𝐏_p^(1), 𝐏_p^(2), …, 𝐏_p^(K) ]^⊤∈ℝ^K× T, where the k^th proposal is 𝐏_p^(k) = 𝐌^(k)⊤𝐰^(k)∈ℝ^T. To represent the importance levels of each Gaussian mask in the mixture, we leverage an importance weighting strategy, where the importance weights 𝐰^(k) are estimated by the importance-based reconstructor in <ref>. Losses for pull-push learning scheme. In our scheme, the Gaussian masks in a Gaussian mixture proposal should be densely located near a query-relevant temporal location, but should not overlap too much with each other, so as to represent diverse events.
To this end, we propose a pull-push learning scheme using a pulling loss and a pushing loss, each of which plays an opposite role to the other, to produce moderately coupled masks. The pulling loss ℒ_pull lets the masks stay close, which is computed by minimizing the Euclidean distance between the centers of the two farthest masks as follows:ℒ_pull = ∑_k=1^K (𝐜^(k)_l_min - 𝐜^(k)_l_max)^2 ,where l_min=arg min_l 𝐜^(k)_l and l_max=arg max_l 𝐜^(k)_l. The pushing loss is defined by two losses: (1) an intra-pushing loss and (2) an inter-pushing loss. The intra-pushing loss ℒ^intra_push prevents the masks in a proposal from overlapping too much with others by forcing the masks to be less overlapped, which ensures each mask represents different events. Furthermore, we use the inter-pushing loss ℒ^inter_push to let each proposal predict different temporal locations. Based on the regularization term in <cit.>, the resultant two pushing losses are given asℒ^intra_push = ∑_k=1^K || 𝐌^(k)𝐌^(k)⊤ - λ_1 I||^2_F,ℒ^inter_push = || 𝐏_p𝐏_p^⊤ - λ_2 I||^2_F,where ||·||_F denotes the Frobenius norm, I is an identity matrix, and λ_1 and λ_2 are hyper-parameters controlling the strength of the pushing.Negative proposal mining. To capture diverse shapes of confusing temporal locations inside the video, we generate a new type of a negative proposal with multiple Gaussian masks, called easy negative proposals (𝐏_en∈ℝ^K× T) in addition to the existing hard negative proposal (𝐏_hn∈ℝ^T). To generate K easy negative proposals, we leverage multiple Gaussian masks to include confusing locations. In our negative proposal mining, the k^th easy negative proposal (𝐏^(k)_en) is composed of multiple Gaussian masks by using the same process in <ref>. Contrary to moderately coupled Gaussian masks in the positive proposal, we let the E_en Gaussian masks of each easy negative proposal spread sparsely without the pull-push learning scheme because most of the confusing locations exist throughout the entire video. Then, following <cit.>, the hard negative proposal 𝐏_hn is determined by a mask covering an entire video, which is𝐏_hn = [1, 1, …, 1] ∈ℝ^T, where both the query-relevant location and confusing locations are included. Finally, the Gaussian mixture proposal generator produces three proposals {𝐏_p, 𝐏_hn, 𝐏_en}.§.§ Importance-based Reconstructor We propose an importance weighting strategy to effectively represent importance levels of each Gaussian mask in the mixture. The importance-based reconstructor produces mask importance weights (𝐰) for Gaussian mixture proposals in <ref>. Moreover, the reconstructor receives proposals from the generator and reconstructs the sentence query. Mask importance.We estimate the k^th mask importance weights (𝐰^(k)) from the Gaussian masks 𝐌^(k).First, we use a Mask-Conditioned transformer (MC transformer) <cit.> to extract the multi-modal feature 𝐑^𝐌 for any video mask 𝐌, given the video feature 𝐕 and a randomly hidden sentence query feature 𝐐. In the MC transformer, the mask 𝐌 is multiplied by the self-attention map in every self-attention process to focus on the video feature inside the mask. Additionally, we append a learnable token 𝐪_cls , same as a [CLASS] token in <cit.>,to the hidden sentence query feature by 𝐐=[𝐪_1,𝐪_2,…,𝐪_N, 𝐪_cls]^⊤∈ℝ^(N+1)× d_Q. The resultant multi-modal feature 𝐑^𝐌 can be calculated as follows:𝐑^𝐌=f_md(𝐐, f_me(𝐕,𝐌), 𝐌) ∈ℝ^(N+1)× d_R.Here, the MC transformer uses 𝐕 and 𝐌 as inputs to the transformer encoder (f_me(·)). 
Then, the transformer decoder (f_md(·)) receives 𝐐, f_me(𝐕,𝐌), and 𝐌. The dimension of the multi-modal feature is denoted by d_R. In 𝐑^𝐌=[𝐫^𝐌_1,𝐫^𝐌_2,…,𝐫^𝐌_N, 𝐫^𝐌_cls]^⊤, the vector 𝐫^𝐌_cls reflects all words and video segments conditioned by the mask 𝐌. To compute the k^th mask importance weights 𝐰^(k) in <ref>, we calculate 𝐫^𝐌_l^(k)_cls using 𝐌_l^(k) via <ref> and apply it to a Multi-Layer Perceptron (MLP) with two layers as follows: h^(k)_l = MLP(𝐫^𝐌_l^(k)_cls )∈ℝ, 𝐰^(k) = Softmax([h^(k)_1,h^(k)_2,…,h^(k)_E_p]^⊤) ∈ℝ^E_p. Losses for reconstruction. Based on the supposition that properly generated proposals can reconstruct the given sentence query as in <cit.>, we reconstruct the sentence query from a randomly hidden sentence query. First, we generate the multi-modal features 𝐑^𝐏 using the proposed proposals 𝐏∈{𝐏_p, 𝐏_hn, 𝐏_en} by replacing 𝐌 with 𝐏 in <ref>. Then, the reconstructed query is produced using 𝐑^𝐏, and the cross-entropy loss C(·) is used to measure the difference between the reconstructed query and the original query. Then, we can calculate C(𝐏_p^(k)), C(𝐏_hn), and C(𝐏_en^(k)). For learning to reconstruct the sentence query, following <cit.>, we use a reconstruction loss which consists of the cross-entropy losses of the positive proposals and the hard negative proposal, where a query-relevant temporal location exists, as ℒ_rec=C(𝐏_p^(k^*))+C(𝐏_hn) , where k^* = arg min_k C(𝐏_p^(k)) . Furthermore, following <cit.>, we perform contrastive learning to distinguish the query-relevant location from the confusing locations captured by the easy negative proposals and the hard negative proposal. Based on the triplet loss <cit.>, the intra-video contrastive loss ℒ_ivc is defined as ℒ_ivc= max(C(𝐏_p^(k^*))-C(𝐏_hn)+β_1,0)+ max(C(𝐏_p^(k^*))-C(𝐏_en^(k^*))+β_2,0) , where β_1 and β_2 are hyper-parameters for margins and β_1 < β_2. §.§ Training and Inference Training. In an end-to-end manner, we train our network with five loss terms: 1) reconstruction loss ℒ_rec, 2) intra-video contrastive loss ℒ_ivc, 3) pulling loss ℒ_pull, and two pushing losses of 4) intra-pushing loss ℒ^intra_push and 5) inter-pushing loss ℒ^inter_push. Then the total loss is given by ℒ_total=ℒ_rec+α_1ℒ_ivc+α_2ℒ_pull+α_3ℒ^intra_push + α_4ℒ^inter_push , where α_1, α_2, α_3, and α_4 are hyper-parameters to balance the losses. Inference. To select the top-1 proposal from the K positive proposals, we use vote-based selection to choose the best overlapping proposal, similar to <cit.>. § EXPERIMENTS §.§ Experimental Setup Evaluation metrics. Following the evaluation metrics in <cit.>, we adopt two metrics (`R@n,IoU=m' and `R@n,mIoU'). `R@n,IoU=m' denotes the percentage of samples for which at least one of the top-n predicted temporal locations has a temporal Intersection over Union (IoU) with a ground truth larger than m. `R@n,mIoU' denotes the average of the highest IoUs among the n predicted temporal locations. The ActivityNet Captions dataset <cit.> contains 37,417, 17,505, and 17,031 video-sentence pairs for the training, val_1, and val_2 splits, respectively. Since a testing set is not publicly available, val_2 is used for testing. Video segment features are extracted via C3D <cit.>. The vocabulary size is 8,000. For proposals, K, E_en, and σ are set to 5, 2, and 4. For losses, α_1, α_2, α_3, and α_4 are set to 1, 0.2, 0.01, and 0.1. The Charades-STA dataset <cit.> contains 16,128 video-sentence pairs from 6,672 videos, which are divided into 12,408 for training and 3,720 for testing. Video segment features are extracted via I3D <cit.>. The vocabulary size is 1,111.
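A minimal PyTorch sketch (ours, not the authors' implementation) of the mask construction, attentive pooling, and pull-push losses described in the previous section, with random tensors standing in for the transformer features and for the reconstructor's importance weights; the α weights below follow the ActivityNet Captions values quoted above, and λ_1 = λ_2 = 0.15 follows the implementation details given below:

import torch

def gaussian_masks(c, s, T):
    # f_l(t) = exp(-((t/(T-1) - c_l)/s_l)^2); c, s: (E,) centers/widths in [0, 1]
    t = torch.linspace(0.0, 1.0, T)
    return torch.exp(-(((t[None, :] - c[:, None]) / s[:, None])**2))   # (E, T)

def pull_loss(c):
    # squared distance between the centers of the two farthest masks
    return (c.max() - c.min())**2

def push_loss(X, lam):
    # ||X X^T - lam I||_F^2, used within a proposal (intra) and across proposals (inter)
    return ((X @ X.t() - lam*torch.eye(X.shape[0]))**2).sum()

T, K, lam = 200, 5, 0.15
P_p, L_pull, L_intra = [], 0.0, 0.0
for k in range(1, K + 1):                      # the k-th proposal uses E_p = k masks
    c = torch.rand(k, requires_grad=True)      # in practice: Sigmoid(W_c g_cls + b_c)
    s = torch.full((k,), 0.1, requires_grad=True)
    w = torch.softmax(torch.randn(k), dim=0)   # importance weights from the reconstructor
    M = gaussian_masks(c, s, T)
    P_p.append((w[:, None] * M).sum(0))        # attentive pooling: P^(k) = M^T w
    L_pull = L_pull + pull_loss(c)
    L_intra = L_intra + push_loss(M, lam)
P_p = torch.stack(P_p)                         # (K, T)
# alpha_2..alpha_4 terms; L_rec and alpha_1*L_ivc from the reconstructor are omitted here
loss = 0.2*L_pull + 0.01*L_intra + 0.1*push_loss(P_p, lam)
loss.backward()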
For proposals, K, E_en, and σ are set to 7, 3, and 9. For losses, α_1, α_2, α_3, and α_4 are set to 3, 5, 0.001, and 1. Implementation details. We set the maximum number of video segments to 200, and the maximum length of the sentence query to 20. For the transformers, we use transformers with three-layer and four attention heads. The dimension of the features (d_V, d_Q, d_G, d_R) is set to 256. We use the equivalent MC Transformer for every reconstruction process. For the hidden sentence query, we randomly hide a third (1/3) of the words. For training, the Adam optimizer <cit.> is used. We set the learning rate to 0.0004, mini-batch size to 32, and hyper-parameters as λ_1=λ_2=0.15, β_1=0.1, and β_2=0.15. In the k^th positive proposal, we set the number of Gaussian masks E_p to k for reflecting a varying number of masks in each proposal, as shown in the top right of <ref>. §.§ Comparison with State-of-the-Art MethodsTo verify the effectiveness of the proposed method, we compare our PPS with previous weakly supervised temporal video grounding methods: WS-DEC <cit.>, TGA <cit.>, SCN <cit.>, WSTAN <cit.>, VLANet <cit.>, MARN <cit.>, CCL <cit.>, RTBPN <cit.>, EC-SL <cit.>, LoGAN <cit.>, VCA <cit.>, LCNet <cit.>, FSAN <cit.>, CWSTG <cit.>, CPL <cit.>, CRM <cit.>, CNM <cit.>, and IRON <cit.>.In <ref> for the ActivityNet Captions dataset, our PPS outperforms CPL <cit.> by 3.56%, 22.49%, and 28.19% at R@1,IoU=0.3, R@5,IoU=0.3, and R@5,IoU=0.5, respectively. It is worth noting that PPS outperforms the previous learnable mask-based method, CPL, by significant margins at R@5, which means that the generated proposals of PPS promise a higher level of quality. In <ref> for the Charades-STA dataset, our PPS surpasses CPL <cit.> by 3.77% and 2.19% at R@1,IoU=0.7 and R@5,IoU=0.3, respectively. The methods marked with ^* make unfair comparisons with the previous methods. CRM <cit.> uses additional paragraph description annotations. CNM <cit.> uses CLIP large-scale pre-trained features <cit.> and IRON <cit.> uses OATrans <cit.> and DistilBERT <cit.> large-scale pre-trained features. Although our PPS uses 3D ConvNet and Glove features for fair comparisons with previous methods, PPS shows competitive or higher performance with the methods marked with ^*. §.§ Ablation StudyFor a more in-depth understanding of the proposed method, we perform ablation studies on our components.Analysis on the Gaussian mixture proposal. As shown in <ref>, we study the impact of the different strategies to generate Gaussian mixture proposals for positive proposals. The results are summarized as follows: First, the Gaussian mixture proposals are more effective than the single Gaussian proposal, which means that the mixture proposal can better represent a query-relevant temporal location. Second, learning multiple centers and one width for one mixture proposal performs best. We conjecture that learning multiple widths makes it complicated to learn proposals, which reduces performance. Third, importance weighting from the reconstructor yields the best result by representing the importance of each mask for query reconstruction. On the other hand, importance weighting from the generator is less effective, because it is hard to reflect reconstruction-aware information. <ref> shows the impact of the number of proposals K. The performance increases until the number is 5 at R@1,mIoU. 
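For reference, a small sketch (ours) of how the `R@n,IoU=m' and `R@n,mIoU' numbers reported in these comparisons are computed from ranked per-sample predictions:

import numpy as np

def tiou(p, g):
    # temporal IoU between two segments p = (start, end) and g = (start, end)
    inter = max(0.0, min(p[1], g[1]) - max(p[0], g[0]))
    union = (p[1] - p[0]) + (g[1] - g[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_metrics(preds, gts, n=1, m=0.5):
    # preds: per-sample lists of (start, end), ranked; gts: per-sample (start, end)
    best = [max(tiou(p, g) for p in ps[:n]) for ps, g in zip(preds, gts)]
    return float(np.mean([b >= m for b in best])), float(np.mean(best))  # R@n,IoU=m, R@n,mIoU

preds = [[(2.0, 7.5), (0.0, 4.0)], [(10.0, 20.0)]]   # toy predictions
gts = [(3.0, 8.0), (12.0, 18.0)]
print(recall_metrics(preds, gts, n=1, m=0.5))        # (1.0, 0.675)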
We observe that defining too many proposals makes the proposals redundant and have short lengths due to the impact of the inter-pushing loss ℒ^inter_push.Impact on a varying number of masks. For positive proposals, we form 𝐏_p^(k) by a Gaussian mixture of E_p=k Gaussian masks to reflect a varying number of Gaussian masks in each positive proposal. To verify the effectiveness of the varying number of Gaussian masks, we compare the performance of fixing the number of Gaussian masks for every positive proposal in <ref>. The results show that using a varying number of Gaussian masks for each positive proposal performs better than using a fixed Gaussian number of Gaussian masks. We find that combinations of different numbers of Gaussian masks can represent a diverse number of query-relevant events.Effect of the pull-push learning scheme. In <ref>, we verify the effectiveness of our pull-push learning scheme. Among combinations of three losses (ℒ_pull, ℒ^intra_push, ℒ^inter_push), adopting all three losses yields the best performance. We conjecture that our pull-push learning scheme helps Gaussian masks to capture diverse events for better representing a temporal location. It is notable that adopting only the pulling loss can yield competitive or higher results to the state-of-the-art methods in <ref>. If the pulling loss ℒ_pull is excluded, the performance decreases significantly. We observe that Gaussian masks for one Gaussian mixture proposal are spread sparsely throughout the entire video without ℒ_pull, which can not represent one proper temporal location. Additionally, the results suggest that two pushing losses (ℒ^intra_push, ℒ^inter_push) are used with ℒ_pull for a synergy effect, because the goal of the pushing losses is to make less overlapped masks for moderate coupling. For a more in-depth understanding of the pulling loss ℒ_pull, we conduct ablation studies of different strategies for ℒ_pull in <ref>. Among the strategies, pulling two distant masks closer or pulling two distant masks to the middle mask performs best. The results imply that pulling fewer masks is better and pulling more masks may ruin the structure of the mixture proposal due to overlapped masks. <ref> presents the impact of controlling the balance of the losses. The results show that a high α_2 value for ℒ_pull is needed to produce densely generated masks and the adequate α_3 and α_4 values for ℒ^intra_push and ℒ^inter_push are needed to cause proper discrimination between the masks and between the proposals, respectively.§.§ Qualitative Results <ref> shows qualitative results of our PPS and other variants of PPS. It is notable that PPS captures accurate query-relevant locations, while the ground truth, which can be noisy due to the subjectivity of annotators, includes redundant locations such as a logo at the beginning of the video.§ CONCLUSIONFor weakly supervised temporal video grounding, we have proposed Gaussian mixture proposals with a pull-push learning scheme to capture diverse events. We express arbitrary shapes of a temporal location by learning importance, centroid, and range of every Gaussian in the mixture. To produce moderately coupled Gaussians in the mixture, we leverage a pulling loss and a pushing loss, each of which plays an opposite role to the other. Through experimental comparisons and extensive ablation studies, we have verified that our method generates multiple high-quality proposals, which greatly improve recall rates. Limitations. 
Limitations. We use proposals with the shape of a Gaussian mixture, but other shapes could be explored to represent complex temporal structures.

§ ACKNOWLEDGMENTS

This work was supported by an IITP grant funded by the Korea government (MSIT) [No. B0101-15-0266, Development of High Performance Visual BigData Discovery Platform for Large-Scale Realtime Data Analysis; No. 2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)] and the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2023.
http://arxiv.org/abs/2312.16388v1
{ "authors": [ "Sunoh Kim", "Jungchan Cho", "Joonsang Yu", "YoungJoon Yoo", "Jin Young Choi" ], "categories": [ "cs.CV", "cs.AI" ], "primary_category": "cs.CV", "published": "20231227032901", "title": "Gaussian Mixture Proposals with Pull-Push Learning Scheme to Capture Diverse Events for Weakly Supervised Temporal Video Grounding" }
[MSC 2020] 60H10, 60H15. In this article, we construct weak solutions for a class of stochastic PDEs in the space of tempered distributions via Girsanov's theorem. It is to be noted that the drift and diffusion coefficients (L, A) of the considered stochastic PDE satisfy a monotonicity-type inequality, rather than Lipschitz conditions. As such, we cannot follow the usual infinite dimensional analysis as described in <cit.>. Instead, we exploit related SDEs to obtain our desired result, and we point out an important observation: the same Novikov condition is used in changing the Brownian motion in both the SDEs and the stochastic PDEs.

Weak Solutions of SPDEs in the space of Tempered distributions
Suprio Bhar, Barun Sarkar
==============================================================

§ INTRODUCTION

Fix T > 0 and let (Ω, ℱ, {ℱ_t}_t ∈ [0, T], P) be a complete filtered probability space satisfying the usual conditions. In this article, we study weak and strong solutions of the following stochastic partial differential equation (SPDE) in 𝒮', the space of tempered distributions on ℝ^d, for t ∈ [0,T]:

dX_t = L(X_t)dt + A(X_t) · dB_t, X_0 = ϕ,

where B = {B_t}_0≤t≤T is a given d-dimensional standard Brownian motion with respect to the filtration {ℱ_t}_0≤t≤T, ϕ is an 𝒮'-valued deterministic initial condition and A := (A_1,⋯,A_d), with L, A_j: 𝒮' → 𝒮' nonlinear operators such that for y ∈ 𝒮'

L(y) := 1/2 ∑_i,j=1^d (σ(y)σ^t(y))_ij ∂^2_ij y - ∑_i=1^d b_i(y) ∂_i y, A_j(y) := - ∑_i=1^d σ_ij(y) ∂_i y,

where σ: 𝒮' → ℝ^d×d, b: 𝒮' → ℝ^d, with the components denoted by σ_ij, b_i, i, j = 1, 2, ⋯, d, and σ^t denotes the transpose of σ. Descriptions of the topology on 𝒮, the Schwartz space on ℝ^d, and definitions of the notions of solutions of the SPDEs considered above are recalled in subsection <ref> and subsection <ref>, respectively. For the notions of weak and strong solutions, we refer to <cit.> and the references therein.

In <cit.>, local strong solutions to the SPDE (<ref>) were shown to arise from the local strong solutions of certain associated stochastic differential equations (SDEs). We recall this correspondence in subsection <ref>. Note that the same Brownian motion appears in both equations, the SPDE as well as the SDE. The correspondence holds provided the pair of operators (L, A) satisfies the Monotonicity inequality. Using the correspondence mentioned above, we apply the finite dimensional Girsanov theorem to change the drift terms in both the SPDE and the SDE. Consequently, we are able to use the same Novikov integrability condition in our arguments. This leads to the existence of a weak solution as well as uniqueness in law for the modified SPDE, with the new Brownian motion arising from the finite dimensional Girsanov theorem. Since we are working with Hermite-Sobolev space valued processes (see Section <ref>) driven by a finite dimensional Brownian motion, we can neither use the infinite-dimensional approach as in <cit.>, nor the finite-dimensional results as in <cit.>. It is to be noted that in <cit.> the noise is Hilbert space valued, and in <cit.> the process under consideration is finite-dimensional. The main results of this article are discussed in Section <ref> and some applications are mentioned in Section <ref>. Note that our assumptions do not include any Lipschitz continuity of (L, A).

§ PRELIMINARIES

§.§ Topology on Schwartz space

Let 𝒮 denote the space of real valued rapidly decreasing smooth functions on ℝ^d, with the topology given by L. Schwartz (<cit.>).
Note that its dual is 𝒮' (see <cit.>). Let ℤ^d_+ := {n = (n_1,⋯,n_d): n_i non-negative integers}. If n ∈ ℤ^d_+, we define |n| := n_1 + ⋯ + n_d. For p ∈ ℝ, consider the increasing norms ‖·‖_p, defined by the inner products

⟨f, g⟩_p := ∑_n ∈ ℤ^d_+ (2|n|+d)^2p ⟨f, h_n⟩ ⟨g, h_n⟩, f, g ∈ 𝒮.

In the above equation, {h_n: n ∈ ℤ^d_+} is an orthonormal basis for ℒ^2(ℝ^d) given by the Hermite functions and ⟨·,·⟩ is the usual inner product in ℒ^2(ℝ^d). The Hermite-Sobolev spaces 𝒮_p, p ∈ ℝ, are defined as the completion of 𝒮 in ‖·‖_p. Note that the dual space 𝒮_p' is isometrically isomorphic to 𝒮_-p for p ≥ 0. The following basic relations hold for the 𝒮_p spaces: for 0 < q < p,

𝒮 ⊂ 𝒮_p ⊂ 𝒮_q ⊂ ℒ^2(ℝ^d) = 𝒮_0 ⊂ 𝒮_-q ⊂ 𝒮_-p ⊂ 𝒮'.

We also have 𝒮 = ⋂_p ≥ 0 𝒮_p and 𝒮' = ⋃_p ≥ 0 𝒮_-p. Consider the derivative maps denoted by ∂_i: 𝒮 → 𝒮 for i = 1,⋯,d. We can extend these maps by duality to ∂_i: 𝒮' → 𝒮' as follows: for ψ_1 ∈ 𝒮',

⟨∂_i ψ_1, ψ_2⟩ := -⟨ψ_1, ∂_i ψ_2⟩, ∀ψ_2 ∈ 𝒮.

It is well-known that the derivative operators ∂_i: 𝒮_q → 𝒮_q-1/2, i = 1,⋯,d, and ∂^2_ij: 𝒮_q → 𝒮_q-1, i, j = 1,⋯,d, are bounded linear operators for all q ∈ ℝ. For x ∈ ℝ^d, let τ_x denote the translation operator on 𝒮 defined by (τ_x ψ)(z) := ψ(z-x), ∀z ∈ ℝ^d, ψ ∈ 𝒮. This operator can be extended to τ_x: 𝒮' → 𝒮' by

⟨τ_x ψ_1, ψ_2⟩ := ⟨ψ_1, τ_-x ψ_2⟩, ∀ψ_2 ∈ 𝒮.

Note that τ_x: 𝒮_q → 𝒮_q is a bounded linear operator for any x ∈ ℝ^d and any q ∈ ℝ (see <cit.>).

§.§ Definitions and literature review

The initial condition ϕ of (<ref>) is in 𝒮' = ⋃_q ≥ 0 𝒮_-q. Consequently, there exists p ≥ 0 such that ϕ ∈ 𝒮_-p. In what follows, we work with this specific p and assume that

Assumption 1: σ_ij, b_i ∈ 𝒮_p, ∀i, j = 1,⋯,d.

Using the duality between 𝒮_p and 𝒮_-p, observe that σ_ij and b_i are continuous linear functionals on 𝒮_-p. Let

B_-p(0, r) := {y ∈ 𝒮_-p: ‖y‖_-p ≤ r}

for r > 0. Then, for all r > 0,

C_1(r) := max_i,j sup_y ∈ B_-p(0, r) {|σ_ij(y)|^2, |b_i(y)|} < ∞.

By construction, C_1(r) is non-decreasing in r. Consequently, the operators L, A_j: 𝒮_-p → 𝒮_-p-1, j = 1,⋯,d, are bounded in the following sense:

‖L(y)‖_-p-1 ≤ C̃_1(d, r) ‖y‖_-p, ‖A_j(y)‖_-p-1 ≤ C̃_2(d, r) ‖y‖_-p, ∀y ∈ B_-p(0, r)

for any r > 0. Here, C̃_1(d, r) and C̃_2(d, r) are some non-negative constants, depending on d and r. Let ζ be an arbitrary state, treated as an isolated point of 𝒮̃_-p := 𝒮_-p ∪ {ζ}. A pair ({X_t}_t, η) is called a local strong solution of (<ref>) if the following conditions hold.

* X = {X_t}_t is an 𝒮̃_-p-valued continuous adapted process defined on the filtered probability space (Ω, ℱ, {ℱ_t}_0≤t≤T, P),
* η is an {ℱ_t}_0≤t≤T stopping time with, a.s., X_t = ζ, ∀η < t ≤ T,
* the following equality holds in 𝒮_-p-1, a.s., for all 0 ≤ t < η:

X_t = ϕ + ∫_0^t L(X_s)ds + ∫_0^t A(X_s) · dB_s.

If for some local strong solution ({X_t}_t, η) the equality (<ref>) holds a.s. for all t ∈ [0, T], then we refer to {X_t}_t as a (global) strong solution of (<ref>). A system ((Ω, ℱ, {ℱ_t}_0≤t≤T, P), B, X) is called a weak solution of (<ref>), where B = {B_t}_0≤t≤T is a d-dimensional standard Brownian motion with respect to the filtration {ℱ_t}_0≤t≤T and X = {X_t}_0≤t≤T is an 𝒮_-p-valued {ℱ_t}_0≤t≤T-adapted continuous process such that the equality (<ref>) holds in 𝒮_-p-1 a.s. for all t ∈ [0, T]. If the filtered probability space (Ω, ℱ, {ℱ_t}_0≤t≤T, P) is clear from the context, then for notational convenience we write (X, B) to denote the weak solution as mentioned in Definition <ref>. In <cit.>, the existence and uniqueness of local strong solutions to SPDE (<ref>) was considered.
Consider the following SDE in ℝ^d:

Z_t = z + ∫_0^t σ̅(Z_s) · dB_s + ∫_0^t b̅(Z_s)ds,

where σ̅ = (σ̅_ij): ℝ^d → ℝ^d×d and b̅ = (b̅_i): ℝ^d → ℝ^d are defined by

σ̅_ij(ρ) := σ_ij(τ_ρ ϕ), b̅_i(ρ) := b_i(τ_ρ ϕ)

for ρ ∈ ℝ^d. We make the following assumption.

Assumption 2: The functions σ̅ and b̅ are locally Lipschitz.

Suppose Assumptions <ref> and <ref> hold. Let ({Z_t}_t, η) be the local strong solution of SDE (<ref>) with the initial condition z = 0. Then ({X_t}_t, η) is a local strong solution of the stochastic PDE (<ref>), where X_t := τ_Z_t ϕ, ∀t < η.

§ MAIN RESULTS

Under the setup described in Subsection <ref>, we consider situations where the SPDE (<ref>) has global strong solutions, i.e., for all t ∈ [0, T]. Using the structure X_t = τ_Z_t ϕ, it is enough to ensure the existence of global strong solutions {Z_t}_t of the SDE (<ref>). Moreover, we require some norm bounds on {X_t}_t, uniformly in time. We state the relevant assumptions below.

Assumption 3: The SDE (<ref>) does not explode in finite time and has a unique strong solution on the time interval [0, T].

Assumption 4: There exists a constant λ = λ({X_t}_t) > 0 such that sup_t ∈ [0, T] ‖X_t‖^2_-p-1 ≤ λ.

For completeness, we mention some examples where the above assumptions hold. For Assumption <ref>, we refer to <cit.>. For Assumption <ref>, the following special cases may be considered.

* If ϕ = δ_x for some x ∈ ℝ^d, then by <cit.> we have ‖X_t‖_-p-1 = ‖δ_x+Z_t‖_-p-1 ≤ C_p, where C_p > 0 is a constant depending only on p, provided -p-1 < -d/4 or, equivalently, p > d/4 - 1. Assumption <ref> follows. Note that taking ϕ as a finite linear combination of δ_x, x ∈ ℝ^d, also works.
* By <cit.>, there exists a real polynomial P_k of degree k = 2([|p + 1|] + 1) such that ‖X_t‖_-p-1 = ‖τ_Z_t ϕ‖_-p-1 ≤ P_k(|Z_t|) ‖ϕ‖_-p-1. Without loss of generality, the coefficients of P_k are taken to be non-negative. To have Assumption <ref>, we need to work with those {Z_t}_t such that sup_t ∈ [0, T] P_k(|Z_t|) < ∞.

Under Assumption <ref>, we have {X_t}_t ∈ ℋ_2, where ℋ_2 denotes the space of adapted 𝒮_-p-1-valued continuous stochastic processes {ξ_t}_t satisfying sup_0≤t≤T ‖ξ_t‖^2_-p-1 < ∞. Note that ℋ_2 is a Banach space with the norm (see <cit.>)

‖ξ‖_ℋ_2 := (sup_0≤t≤T ‖ξ_t‖^2_-p-1)^1/2.

The Markov property of the solutions {X_t}_t has been discussed in Section 4 of <cit.>. Let us consider the stochastic PDE (<ref>) and look at the exponential martingale, following Example 19.9, Chapter 19 of <cit.>, viz.

M_t := exp(∑_j=1^d {∫_0^t h^j(s)dB^j_s - 1/2 ∫_0^t h^j(s)^2 ds}),

where for s ∈ [0, T],

h^j(s) := √(‖X_s‖^2_-p-1), ∀j = 1,⋯,d.

Using Assumption <ref>, we conclude that Novikov's condition holds, i.e.

𝔼[exp(1/2 ∫_0^T ∑_j=1^d h^j(s)^2 ds)] = 𝔼[exp(d/2 ∫_0^T ‖X_s‖^2_-p-1 ds)] < ∞.

Consider the 𝒮_-p-valued process {X_t}_0≤t≤T satisfying (<ref>) in 𝒮_-p-1 with

𝔼[exp(∑_j=1^d {∫_0^T √(‖X_s‖^2_-p-1) dB^j_s - 1/2 ∫_0^T ‖X_s‖^2_-p-1 ds})] = 1.

Then the process

B̃^j_t = B^j_t - ∫_0^t √(‖X_s‖^2_-p-1) ds, t ∈ [0,T], ∀j = 1,⋯,d,

is a Brownian motion with respect to Q on the probability space (Ω, ℱ, Q), where

dQ(ω) = exp(∑_j=1^d {∫_0^T √(‖X_s‖^2_-p-1) dB^j_s - 1/2 ∫_0^T ‖X_s‖^2_-p-1 ds}) dP(ω).

Note that X_t is 𝒮_-p-valued, ‖X_t‖^2_-p-1 is finite and {B_t} is a d-dimensional Brownian motion. Therefore the proof of Theorem <ref> follows from the finite dimensional proof of Girsanov's theorem for SDEs, see <cit.>. Consider the following two equations in 𝒮_-p-1:

dX_t = L(X_t)dt + A(X_t) · dB_t, X_0 = ϕ;
dX̃_t = (L(X̃_t) + L̂(t, X̃_t))dt + A(X̃_t) · dB_t, X̃_0 = ϕ,

where L and A are as in (<ref>) and

L̂(t, y) := -∑_j=1^d h^j(t) A_j(y) = ∑_i,j=1^d h^j(t) σ_ij(y) ∂_i y, ∀y ∈ 𝒮_-p.
Consider the following SDE:

Z̃_t^i := z^i + ∫_0^t ∑_j=1^d σ̅_ij(Z̃_s) dB^j_s + ∫_0^t (b̅^i(Z̃_s) - ∑_j=1^d h^j(s) σ̅_ij(Z̃_s)) ds,

where σ̅_ij, b̅_i are defined as in (<ref>). Note that σ̅_ij, b̅_i: ℝ^d → ℝ and σ_ij, b_i: 𝒮_-p → ℝ. Consider the SDE (<ref>) with initial condition z = 0. Then X̃_t := τ_Z̃_t ϕ is the unique strong solution of the SPDE (<ref>). Applying Itô's formula for the translation operator (<cit.>), we have a.s.

τ_Z̃_t ϕ = ϕ - ∑_i=1^d ∫_0^t ∂_i τ_Z̃_s ϕ dZ̃^i_s + 1/2 ∑_i,j=1^d ∫_0^t ∂^2_ij τ_Z̃_s ϕ d[Z̃^i, Z̃^j]_s
= ϕ - ∑_i=1^d ∫_0^t ∂_i τ_Z̃_s ϕ (∑_j=1^d σ̅_ij(Z̃_s) dB^j_s) - ∑_i=1^d ∫_0^t ∂_i τ_Z̃_s ϕ (b̅^i(Z̃_s) - ∑_j=1^d h^j(s) σ̅_ij(Z̃_s)) ds + 1/2 ∑_i,j=1^d ∫_0^t ∂^2_ij τ_Z̃_s ϕ (σ̅(Z̃_s) σ̅^t(Z̃_s))_ij ds
= ϕ - ∑_i=1^d ∫_0^t ∂_i τ_Z̃_s ϕ (∑_j=1^d σ_ij(τ_Z̃_s ϕ) dB^j_s) - ∑_i=1^d ∫_0^t ∂_i τ_Z̃_s ϕ (b^i(τ_Z̃_s ϕ) - ∑_j=1^d h^j(s) σ_ij(τ_Z̃_s ϕ)) ds + 1/2 ∑_i,j=1^d ∫_0^t ∂^2_ij τ_Z̃_s ϕ (σσ^t)_ij(τ_Z̃_s ϕ) ds.

Therefore, X̃_t := τ_Z̃_t ϕ is a solution of the SPDE (<ref>). The uniqueness of {X̃_t}_t ∈ [0, T] as a solution to (<ref>) follows from the uniqueness of the SDE for {Z̃_t}_t, as σ̅_ij, b̅_i are locally Lipschitz.

Consider L̂ as in (<ref>), the probability measure Q as in (<ref>) and the Q-Brownian motion B̃ as in (<ref>). Note that the Novikov condition (<ref>) holds. Then the 𝒮_-p-valued process {X̃_t} as in (<ref>) is a solution to

dX̃_t = L(X̃_t)dt + A(X̃_t) · dB̃_t, X̃_0 = ϕ,

and has the same law under Q as {X_t}_t in (<ref>) under P. Indeed, {Z̃_t}_t satisfies the SDE

Z̃_t^i = z^i + ∫_0^t ∑_j=1^d σ̅_ij(Z̃_s) dB̃^j_s + ∫_0^t b̅^i(Z̃_s) ds, i = 1,⋯,d,

under Q (see (<ref>)). Hence, its law under Q is the same as that of {Z_t}_t (under P) satisfying

Z_t^i = z^i + ∫_0^t ∑_j=1^d σ̅_ij(Z_s) dB^j_s + ∫_0^t b̅^i(Z_s) ds, i = 1,⋯,d,

under P (<cit.>). Since P and Q are equivalent probability measures and X_t := τ_Z_t ϕ P-a.s., X̃_t := τ_Z̃_t ϕ Q-a.s., we have the result. Under the correspondence between the SPDE (<ref>) and the SDE (<ref>), the same Brownian motion appears in both equations. We are, therefore, able to use the finite dimensional Girsanov theorem in our arguments, and the new Brownian motion appears again in both equations (<ref>) and (<ref>). It is noteworthy that the same Novikov condition is used in changing the Brownian motion for the stochastic PDEs as well as the SDEs. Note that the condition is stated in terms of the solutions of the SPDE (<ref>).

§ APPLICATIONS

In this section, we apply our main results, Theorems <ref>, <ref> and <ref>, in the following two examples to construct weak solutions. Though the examples are described in one dimension for simplicity, they can be extended to general d dimensions in a similar fashion.

Consider Z_t = B_t, ∀t ∈ [0, T]. This process {Z_t}_t can be thought of as the solution to the following SDE

dZ_t = dB_t, Z_0 = 0, for t ∈ [0,T].

Take X_t := δ_B_t = τ_B_t δ_0, which is the solution of the following SPDE in 𝒮':

δ_B_t = δ_0 - ∫_0^t ∂δ_B_s dB_s + 1/2 ∫_0^t ∂^2 δ_B_s ds.

For any p > d/4, sup_t ∈ [0, T] ‖δ_B_t‖_-p ≤ C < ∞, for some constant C = C(p, d) > 0, see <cit.> (also see the comments in Example <ref> above). Then δ_B_t is 𝒮_-p-valued for p > d/4, whereas equation (<ref>) holds in 𝒮_-p-1, and our Novikov condition (<ref>) becomes

𝔼[exp(1/2 ∫_0^T ‖δ_B_s‖^2_-p-1 ds)] ≤ 𝔼[exp(1/2 ∫_0^T ‖δ_B_s‖^2_-p ds)] < ∞.

Note that h(t) := √(‖δ_B_t‖^2_-p-1) (see (<ref>)) and, by (<ref>), the new Brownian motion is given by

B̃_t = B_t - ∫_0^t √(‖δ_B_s‖^2_-p-1) ds.

Note that (<ref>) is a sufficient condition for (<ref>) to hold, see <cit.>. Now, consider the SDE:

dZ̃_t = dB̃_t + √(‖δ_B_t‖^2_-p-1) dt, Z̃_0 = 0.

Now, by Itô's formula for the translation operator, as applied in Theorem <ref>,

X̃_t := τ_Z̃_t δ_0 = δ_Z̃_t = δ_0 - ∫_0^t ∂δ_Z̃_s dZ̃_s + 1/2 ∫_0^t ∂^2 δ_Z̃_s ds
= δ_0 - ∫_0^t ∂δ_Z̃_s (dB̃_s + √(‖δ_B_s‖^2_-p-1) ds) + 1/2 ∫_0^t ∂^2 δ_Z̃_s ds
= δ_0 + ∫_0^t (1/2 ∂^2 δ_Z̃_s - √(‖δ_B_s‖^2_-p-1) ∂δ_Z̃_s) ds - ∫_0^t ∂δ_Z̃_s dB̃_s.

Observe that (Z̃, B̃) from (<ref>) is a weak solution of (<ref>) and (X̃, B̃) from (<ref>) is a weak solution of (<ref>), and the solutions are weakly unique in the following sense:

ℒ(Z) = ℒ(Z̃) and ℒ(X) = ℒ(X̃),

where ℒ(·) denotes the law of a stochastic process (taken under P and Q, respectively). Note that the same Novikov condition (<ref>) and new Brownian motion (<ref>) are used to construct the weak solutions of the SDEs and the SPDEs.
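Before turning to the second example, we record a purely illustrative numerical sketch of the objects appearing above, for d = 1 and p = 1 (so that p > d/4). It approximates the Hermite-Sobolev norm via the truncated expansion ‖δ_x‖^2_-p = ∑_n (2n+1)^-2p h_n(x)^2, simulates a Brownian path, and forms the drift h(s) = ‖δ_B_s‖_-p-1, the shifted process B̃ of (<ref>) and the log Radon-Nikodym exponent of (<ref>). The truncation level, step sizes and the recurrence used to evaluate the Hermite functions are our own choices and are not part of the paper.

import numpy as np

def hermite_functions(x, n_max):
    # normalized Hermite functions via h_{n+1} = sqrt(2/(n+1)) x h_n - sqrt(n/(n+1)) h_{n-1}
    h = np.zeros(n_max + 1)
    h[0] = np.pi ** (-0.25) * np.exp(-0.5 * x ** 2)
    if n_max >= 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for n in range(1, n_max):
        h[n + 1] = np.sqrt(2.0 / (n + 1)) * x * h[n] - np.sqrt(n / (n + 1.0)) * h[n - 1]
    return h

def delta_norm(x, p, n_max=200):
    # truncated ||delta_x||_{-p} for d = 1: sum_n (2n+1)^{-2p} h_n(x)^2
    h = hermite_functions(x, n_max)
    n = np.arange(n_max + 1)
    return np.sqrt(np.sum((2.0 * n + 1.0) ** (-2.0 * p) * h ** 2))

rng = np.random.default_rng(0)
T, steps, p = 1.0, 1000, 1.0                 # p > d/4, so p = 1 is admissible
dt = T / steps
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), steps))))
h_path = np.array([delta_norm(b, p + 1.0) for b in B[:-1]])    # h(s) = ||delta_{B_s}||_{-p-1}
B_tilde = B - np.concatenate(([0.0], np.cumsum(h_path) * dt))  # candidate Q-Brownian motion
log_dQdP = np.sum(h_path * np.diff(B)) - 0.5 * np.sum(h_path ** 2) * dt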
Consider Z_t = B_t^2, ∀t ∈ [0, T]. This {Z_t}_t can be thought of as the solution to the following SDE

dZ_t = d(B_t^2), Z_0 = 0, for t ∈ [0,T].

Applying Itô's formula, we have

dZ_t = d(B_t^2) = 2B_t dB_t + dt.

Applying Itô's formula for the translation operator (<cit.>), we have a.s.

X_t := τ_Z_t δ_0 = δ_B_t^2 = δ_0 - ∫_0^t ∂δ_B_s^2 dZ_s + 1/2 ∫_0^t ∂^2 δ_B_s^2 d[Z,Z]_s
= δ_0 - ∫_0^t ∂δ_B_s^2 (2B_s dB_s + ds) + ∫_0^t ∂^2 δ_B_s^2 2B_s^2 ds
= δ_0 + ∫_0^t (2B_s^2 ∂^2 δ_B_s^2 - ∂δ_B_s^2) ds - ∫_0^t 2B_s ∂δ_B_s^2 dB_s.

Similar to Example <ref>, δ_B_t^2 is 𝒮_-p-valued for p > d/4, whereas equation (<ref>) holds in 𝒮_-p-1, and our Novikov condition of (<ref>) becomes

𝔼[exp(1/2 ∫_0^T ‖δ_B_s^2‖^2_-p-1 ds)] ≤ 𝔼[exp(1/2 ∫_0^T ‖δ_B_s^2‖^2_-p ds)] < ∞.

Here, h(t) := √(‖δ_B_t^2‖^2_-p-1) (see (<ref>)) and, by (<ref>), the new Brownian motion is given by

B̃_t = B_t - ∫_0^t √(‖δ_B_s^2‖^2_-p-1) ds.

From (<ref>), substituting dB_t of (<ref>), we obtain

dZ̃_t = 2B_t (dB̃_t + √(‖δ_B_t^2‖^2_-p-1) dt) + dt = 2B_t dB̃_t + (2B_t √(‖δ_B_t^2‖^2_-p-1) + 1) dt.

Now, by Itô's formula for the translation operator,

X̃_t := δ_Z̃_t = δ_0 - ∫_0^t ∂δ_Z̃_s dZ̃_s + 1/2 ∫_0^t ∂^2 δ_Z̃_s d[Z̃, Z̃]_s
= δ_0 - ∫_0^t ∂δ_Z̃_s {2B_s dB̃_s + (2B_s √(‖δ_B_s^2‖^2_-p-1) + 1) ds} + ∫_0^t 2B_s^2 ∂^2 δ_Z̃_s ds
= δ_0 + ∫_0^t {2B_s^2 ∂^2 δ_Z̃_s - (2B_s √(‖δ_B_s^2‖^2_-p-1) + 1) ∂δ_Z̃_s} ds - ∫_0^t 2B_s ∂δ_Z̃_s dB̃_s.

Similar to Example <ref>, (Z̃, B̃) from (<ref>) is a weak solution of (<ref>) and (X̃, B̃) from (<ref>) is a weak solution of (<ref>), and the solutions are weakly unique as they are equal in law, i.e.

ℒ(Z) = ℒ(Z̃) and ℒ(X) = ℒ(X̃).

Here also, the same Novikov condition (<ref>) and new Brownian motion (<ref>) are used to construct the weak solutions of the SDEs and the SPDEs.

Acknowledgement: Suprio Bhar was partially supported by the INSPIRE Faculty Award DST/INSPIRE/04/2017/002835 (Department of Science and Technology, Government of India). Barun Sarkar acknowledges the support of SERB project SRG/2022/000991, Government of India.
http://arxiv.org/abs/2312.16539v1
{ "authors": [ "Suprio Bhar", "Barun Sarkar" ], "categories": [ "math.PR", "60H10, 60H15" ], "primary_category": "math.PR", "published": "20231227114206", "title": "Weak Solutions of SPDEs in the space of Tempered distributions" }
http://arxiv.org/abs/2312.16721v1
{ "authors": [ "Edmundo F. Lavia", "Guadalupe Cascallares", "Juan D. Gonzalez" ], "categories": [ "physics.comp-ph" ], "primary_category": "physics.comp-ph", "published": "20231227211502", "title": "TetraScatt model: Born approximation for the estimation of acoustic dispersion of fluid-like objects of arbitrary geometries" }
Observation-based Optimal Control Law Learning with LQR Reconstruction Chendi Qu, Jianping He, Xiaoming Duan The authors are with the Dept. of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai, China. E-mail address: {qucd21, jphe, xduan}@sjtu.edu.cn. Part of this paper has been accepted to the IFAC 2023 World Congress <cit.>. ================================================================================================================================================================================================================================================================================================================================================================================

Designing controllers to generate various trajectories has been studied for years, while recently, recovering an optimal controller from trajectories has received increasing attention. In this paper, we reveal that the inherent linear quadratic regulator (LQR) problem of a moving agent can be reconstructed based on its trajectory observations only, which enables one to learn the optimal control law of the agent autonomously. Specifically, the reconstruction of the optimization problem requires estimation of three unknown parameters, including the target state, the weighting matrices in the objective function and the control horizon. Our algorithm considers two types of objective function settings and identifies the weighting matrices with proposed novel inverse optimal control methods, providing well-posedness and identifiability proofs. We obtain the optimal estimate of the control horizon using binary search and finally reconstruct the LQR problem with the above estimates. The strength of learning the control law via optimization problem recovery lies in lower computational cost and strong generalization ability. We apply our algorithm to future control input prediction, and the discrepancy loss is further derived. Numerical simulations and hardware experiments on a self-designed robot platform illustrate the effectiveness of our work.

§ INTRODUCTION

Nowadays, with the development of localization, computer vision and planning, mobile agents have been widely applied in various fields and have achieved high-profile success <cit.>. Many studies focus on designing controllers to generate satisfying trajectories <cit.>, while in this paper we pay attention to the inverse problem: how can the interior controller be learned based on the agent's trajectory observations only?

To tackle this problem, let us consider a mobile agent driving from its initial position to a target point, which is one of the most common and basic scenarios for agents. In most situations, the agent is exposed to the physical world, and thus its trajectory can be observed by potential external attackers <cit.>. We suppose there is another agent, equipped with a camera and computation capability, trying to learn the trajectory and motion of the agent based on observations only. The setting in which agents learn from other agents through observations is similar to learning from demonstration (LfD) <cit.>, while in our case the agent needs to collect data actively instead of being provided with human demonstrations. LfD has attracted extensive study in recent years and has been applied to various fields including autonomous driving <cit.>, manufacturing <cit.> and human-robot interaction <cit.>.
Given standard demonstrations, one mainstream category of LfD algorithms learns the control policy directly as a mapping from state observations to actions <cit.>, known as end-to-end learning. However, this branch of methods usually requires a large amount of demonstrations as training data and generalizes poorly across environments. Therefore, our approach is to conduct a two-stage LfD, learning the interior controller of the agent first, which is a non-trivial problem, since the control objective of the agent is unknown. To further simplify the problem, we assume the moving agent is utilizing an LQR controller <cit.>. Since LQR is a state-feedback controller, the control law is actually a series of feedback gain matrices. Note that if the controller is infinite-time, this matrix is constant and can be estimated from the optimal state and input trajectories. However, in the finite-time case, this sequence of matrices is time-variant with unknown length. Thus, in this paper the main idea is to reconstruct the control optimization problem. Once the optimization problem is recovered, we can obtain the control law and imitate the agent's motion with strong generalization performance.

An inspiring two-stage LfD method is inverse optimal control (IOC) <cit.>, which we utilize to identify the objective function parameters; it forms an important part of our algorithm. IOC algorithms differ according to the assumed form of the objective function. As for the quadratic LQR form considered in our case, <cit.> both study the infinite-time problem assuming the constant control gain matrix is already known, while we provide the gain matrix estimator. In addition, <cit.> propose approaches to identify the objective function of continuous finite-time LQR, and <cit.> solve IOC problems in discrete time. However, few of these studies consider the classic LQR setting, including final-state, process-state and process-cost terms with different parameters, and they usually require complete trajectory observations. Past research on the control horizon has mainly focused on adaptive-horizon model predictive control. The stability and effectiveness of the controlled system can be determined by the horizon length <cit.>, and horizon estimation there refers to choosing an optimal length to balance performance and computation cost <cit.>. In our paper, by contrast, we study how to identify the horizon length of the optimization problem given noisy optimal state trajectory observations.

One important application of our optimization problem reconstruction algorithm is control input and trajectory prediction, which can be the basis for an attacker's subsequent attacks, interception or deception. Some existing methods for trajectory prediction are data-driven and model-free, such as using polynomial regression <cit.> or introducing neural networks, including long short-term memory networks <cit.> or graph neural networks <cit.>. However, these usually require a large amount of observations as well as model training in advance. Another class of algorithms is model-based. For instance, <cit.> use an unscented Kalman filter to predict multi-agent trajectories.
<cit.> measure the secrecy of the trajectory and prove that uniformly distributed inputs maximize the unpredictability of the system. But these predictive models often assume that the control inputs at each step are known, which, in our case, cannot be obtained directly. Notice that if we build the reconstructed optimization problem of the agent, we are able to calculate the input at arbitrary states and predict the future trajectory accurately, combined with estimates of the dynamic model and the current state obtained through system identification (SI) <cit.> and data fusion filters <cit.>, respectively.

Motivated by the above discussion, we design a control law learning algorithm with LQR problem reconstruction based on trajectory observations. We first estimate the target point by finding the intersection of the trajectory extension lines. Then we identify the weighting matrices in the objective function with IOC algorithms and obtain the optimal horizon estimate through a binary search method. Finally, we reconstruct the control optimization problem and solve it to obtain the learned control law. This algorithm can not only learn the control law and enable agents to imitate the motion from arbitrary initial states, but also predict the future input and state trajectory of the mobile agent precisely. One of the main challenges of the problem reconstruction is how to build a proper IOC problem to identify the parameters simultaneously for complex quadratic objective function forms. Another lies in analyzing the estimation error of each part and its impact on the input prediction application.

This paper is an extension of our conference paper <cit.>. The main differences include i) we reorganize the article structure to focus on LQR problem reconstruction, treating input prediction as an application, ii) the infinite-time case is considered, iii) the IOC problem for a more complex form of the objective function is built and solved, iv) the sensitivity analysis of the control horizon estimation is provided, v) extended simulations are provided and hardware experiments are added. The main contributions are summarized as follows:

* We investigate the observation-based optimal control law learning issue and propose a novel LQR reconstruction algorithm for mobile agents, including the estimation of the target state, the objective function parameters and the control horizon. As far as we know, we are the first to consider and reveal that the entire LQR optimization problem can be recovered from observations only.

* We solve the IOC problem considering two forms of objective functions, including i) a final-state-only setting based on PMP conditions given incomplete state trajectories and ii) a classic LQR setting based on condition number minimization. We provide the scalar ambiguity property analysis and further prove the uniqueness and identifiability of the problem. Furthermore, a novel approach for estimating the agent's control horizon is presented, converting a non-convex integer optimization into a binary search problem.

* We apply our LQR reconstruction algorithm to the future control input prediction problem, providing an error sensitivity analysis based on the convergence property of the algebraic Riccati equation.

* Numerical simulations reveal the effectiveness of both the objective parameter and control horizon estimations. Our algorithm shows low bias and variance of the error. Moreover, hardware experiments on input prediction conducted on our self-designed robot platform demonstrate the prediction accuracy and efficiency.
The remainder of the paper is organized as follows. Section <ref> describes the problem of interest. Section <ref> analyzes the infinite-time case and studies the objective function identification in two settings. Section <ref> estimates the control horizon and summarizes the complete algorithm flow. Simulation results and hardware experiments are shown in Sections <ref>-<ref>, followed by the conclusion in Section <ref>.

§ PRELIMINARIES AND PROBLEM FORMULATION

§.§ Model Description

Consider a mobile agent R_m driving from its initial state to a fixed target point x_T. R_m is modeled by a discrete-time linear system

x_k+1 = A x_k + B u_k,

where x_k is the state vector, u_k is the control input and A, B are n × n and n × m matrices. Assume that (A, B) is controllable and B has full column rank. Moreover, A is an invertible matrix, since the system matrix of a discrete-time system sampled from a continuous linear system is always invertible <cit.>. The output function is

y_k = C x_k + ω_k,

where y_k is the output, such as the agent's position, and C ∈ ℝ^p × n is the observation capability matrix. We require C to be invertible in our algorithm and analysis, since if C is a fat matrix (p < n), the information contained in x_k cannot be fully characterized by y_k and the identification becomes difficult <cit.>. ω_k ∼ 𝒩(0, Γ) is independent Gaussian observation noise, so that

𝔼(ω_k) = 0, 𝔼(‖ω_k‖^2) < +∞.

Note that R_m follows the optimal LQR control and the optimization problem is described as

𝐏_0: min_u_0:N-1 J_0 = 1/2 x_N^T H x_N + ∑_k = 0^N-1 1/2 (x_k^T Q x_k + u_k^T R u_k),  s.t. (<ref>), x_0 = x̅ - x_T,

where H, R are positive definite matrices, Q is a positive semi-definite matrix and x̅ is the initial state. The first term in the objective function J_0 reflects the deviation from the target state, and the second term represents the running state and energy cost during the process. To simplify the problem description, we assume the agent sets the target as 0 in 𝐏_0 and performs a coordinate transformation on the initial state x̅ using x_T. It is known that the solution of 𝐏_0 is

u_k = -K_k x_k, k = 0, 1, …, N-1,

where K_0:N-1 = {K_0, K_1, …, K_N-1} is the control gain matrix sequence, related to the system equation and control objectives, which is calculated through the iterative equations (<ref>) shown in Section <ref>.

§.§ Problem Formulation

Now there is another external agent R_o observing and recording the output trajectories of R_m. We assume R_o has exact knowledge of the dynamics, which can be obtained by SI methods, and of the quadratic form of the objective function. Suppose that R_o obtains M ⩾ n optimal trajectories {𝒴^1, ⋯, 𝒴^M} of R_m, where

𝒴^j = {y_0^j, y_1^j, …, y_l_j^j}, j = 1, 2, …, M,

is the j-th trajectory and l_j + 1 ⩾ 2n is its length. We require that there exist at least n linearly independent final states among the M trajectories, which means the matrix [y_l_1^1 … y_l_M^M] has full row rank.

Consider that R_o tries to learn the control law of R_m based on observations, which is equivalent to estimating the control feedback gain matrix sequence K_0:N-1 of 𝐏_0 accurately. However, this is a non-trivial problem, since K_k is time-variant and the control objective of R_m is unknown. Therefore, the main idea of this paper is to reconstruct the optimization problem 𝐏_0 in order to solve for K_0:N-1 directly, which means we need to estimate the following unknown parameters:

* target state x_T;
* weighting matrices in the objective function H, Q, R;
* control horizon length N.
If the above estimates are accurate, we can obtain the real control law of R_m by solving the reconstructed optimization problem. Notice that the target state estimation is quite simple and less central to our problem reconstruction. We have the following remark. To estimate the target state, we can do curve fitting on at least two non-parallel observed trajectories, whose intersection is calculated as the estimate x̂_T. Alternatively, one simple way is to directly observe the final state in multiple trajectories and take the average as the target value. Hence, in the following sections, we estimate the objective function parameters and the control horizon accordingly, and then formulate the optimization problem with all the estimates.

§ TARGET STATE ESTIMATION

In this section, we estimate the target position of R_m. We choose at least two non-parallel trajectories and calculate the intersection of their extension lines as the target estimate. Assume R_m's current position is p_0^1 ∈ ℝ^2 or ℝ^3. Observe and record the trajectory for l_1 steps, denoted as {p_0^1, p_1^1, ⋯, p_l_1^1}. Referring to Assumption <ref> and Fig. <ref>, when R_o acts as an obstacle (an external stimulus u_a), the state of R_m changes to p_0^2, and the following trajectory is recorded as {p_0^2, p_1^2, ⋯, p_l_2^2}. It is required that the two trajectories are not parallel, meaning that the vector p_0^1 - p_l_1^1 is linearly independent of p_0^2 - p_l_2^2. Then, we need to fit the lines on which the trajectories lie and solve for their intersection. The solution of the two-dimensional case is given in <cit.> Sec IV.B; here we also provide the algorithm for the three-dimensional case.

* Two Dimension

Denote p_i^j = (p_x,i^j, p_y,i^j)^T. For the j-th trajectory, the straight line function is written as

p_y = a^j p_x + b^j = [p_x 1] [a^j; b^j] = [p_x 1] η^j,

where η^j is the parameter to be obtained. The polynomial is learned by minimizing the sum of squared errors over all L_j observations:

min_η^j ∑_i=0^L_j-1 ‖a^j p_x,i^j + b^j - p_y,i^j‖^2_2,

which is a least squares approximation problem and has the unique solution

η^j = (H^j^T H^j)^-1 H^j^T D^j, H^j = ∑_i=0^L_j-1 [p_x,i^j; 1] [p_x,i^j 1], D^j = ∑_i=0^L_j-1 [p_x,i^j; 1] p_y,i^j.

Then the target point estimate is calculated as the intersection of y^1 = [x 1]η^1 and y^2 = [x 1]η^2.

* Three Dimension

§.§ Line Fitting

We first perform the linear approximation to the observed position sequences. Denote p_i^j = (p_x,i^j, p_y,i^j, p_z,i^j)^T. The equation of a straight line in three-dimensional space is

p_x - x_0/a = p_y - y_0/b = p_z - z_0/c,

which implies that

p_x = a/c (p_z - z_0) + x_0 = m_1 p_z + n_1, p_y = b/c (p_z - z_0) + y_0 = m_2 p_z + n_2.

Omit the trajectory number j for brevity. To minimize the sum of squares of the residuals, we have

min ∑_i^l_1 (p_x,i - m_1 p_z,i - n_1)^2

for p_x. Taking the derivatives with respect to m_1 and n_1 respectively and setting them equal to 0, we obtain

m_1 = l_1 ∑_i p_x,i p_z,i - ∑_i p_x,i ∑_i p_z,i / l_1 ∑_i p_z,i^2 - (∑_i p_z,i)^2, n_1 = (∑_i p_x,i - m_1 ∑_i p_z,i)/l_1.

The calculation of m_2, n_2 is similar, except that p_x,i is replaced by p_y,i.

§.§ Calculate the "Intersection"

Now we need to calculate the intersection of these lines as the target point. Suppose the two straight lines L_1, L_2 obtained by line fitting have direction vectors v_1, v_2, respectively. They are guaranteed not to be parallel, i.e.,

v_1 × v_2 ≠ 0.

However, due to observation noise, the two fitted lines may not intersect but be skew in three-dimensional space. Therefore, when the shortest distance between them, denoted d_L_1,L_2, satisfies

d_L_1,L_2 ⩽ ϵ, ϵ ∈ ℝ_+,

we consider the two lines to "intersect", and take the midpoint of the nearest two points on L_1, L_2 as an estimate of the "intersection". Denote v_3 = o_1 o_2, where o_1, o_2 are points of L_1, L_2. Then we obtain

d_L_1,L_2 = (v_1 × v_2) · v_3 / ‖v_1 × v_2‖.

If (<ref>) holds, we have

L_1(t_1) = o_1 + t_1 v_1, L_2(t_2) = o_2 + t_2 v_2,

and

[L_1(t_1) - L_2(t_2)] · v_1 = 0, [L_1(t_1) - L_2(t_2)] · v_2 = 0,

which is solved as

t_1 = (v_1 · v_2) (o_2-o_1) · v_2 - (v_2 · v_2) (o_2-o_1) · v_1 / (v_1 · v_2)^2 - (v_1 · v_1) (v_2 · v_2),
t_2 = v_1 · v_1/v_1 · v_2 t_1 - (o_2-o_1) · v_1/v_1 · v_2.

Then, the "intersection" is calculated by (L_1(t_1) + L_2(t_2))/2 := x̂_T. Note that this method is limited to estimating the spatial target position. When the system state x_k contains other components, such as velocity or acceleration (these variables may not change linearly with time like position coordinates), we can first determine the spatial position of the target point, then observe the final state of the agent once it reaches the target position multiple times, and take the average as the target value of the remaining variables.
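The following sketch summarizes the whole target-estimation step in code. For brevity it fits each line by a total-least-squares (SVD) fit instead of the coordinate-wise regression above; the two coincide for small noise, and the closest-point computation uses exactly the formulas for t_1, t_2 derived above. The tolerance value is an arbitrary placeholder.

import numpy as np

def fit_line_3d(points):
    """Total-least-squares line through noisy 3D points: p(t) = o + t * v."""
    pts = np.asarray(points, dtype=float)
    o = pts.mean(axis=0)                     # the centroid lies on the fitted line
    _, _, vt = np.linalg.svd(pts - o)
    return o, vt[0]                          # point on the line, unit direction

def target_estimate(traj1, traj2, eps=0.1):
    """Midpoint of the closest points of the two fitted lines as x_T-hat."""
    o1, v1 = fit_line_3d(traj1)
    o2, v2 = fit_line_3d(traj2)
    w = o1 - o2
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = w @ v1, w @ v2
    denom = a * c - b * b                    # nonzero since the lines are not parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1, p2 = o1 + t1 * v1, o2 + t2 * v2
    if np.linalg.norm(p1 - p2) > eps:        # shortest-distance check
        raise ValueError("fitted lines are too far apart to 'intersect'")
    return 0.5 * (p1 + p2)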
§ OBJECTIVE FUNCTION IDENTIFICATION

In this section, we first discuss control law learning in the infinite horizon case as an inspiration. Then, we focus on the finite-time problem and estimate the weighting parameters in the objective function through IOC methods. Since in some situations the agent only pays attention to whether the target state can be reached and to the overall energy consumption during the process, rather than to the transient states, we first consider the final-state-only setting, which rests on looser data assumptions. In the third subsection, we solve the IOC problem for the classic LQR setting.

§.§ Infinite Horizon Case

When the control horizon N of agent R_m goes to infinity, the objective function in 𝐏_0 changes into

J^∞ = 1/2 ∑_k = 0^∞ (x_k^T Q x_k + u_k^T R u_k),

and the optimal control law K is given by

K = (R + B^T P B)^-1 B^T P A,

where the intermediate parameter P satisfies the following Riccati equation:

P = A^T P A - A^T P B (R + B^T P B)^-1 B^T P A + Q.

Since in the infinite case the feedback gain matrix K is constant, we can estimate K directly from observations, without reconstructing the control problem. Denote the closed-loop system matrix A^c = A - BK; its spectral radius satisfies ρ(A^c) < 1. We have

y_k+1 = C A^c C^-1 (y_k - ω_k) + ω_k+1

for all k, which is a typical form of a first-order stationary vector auto-regression process. We can calculate A^c by an ordinary least squares (OLS)-inspired method using one single trajectory 𝒴^1. The estimator is designed as

Â^c = C^-1 (1/l Y^T X) (1/l X^T X - Γ)^-1 C,

where X = (y^1_0, y^1_1, …, y^1_l-1)^T, Y = (y^1_1, y^1_2, …, y^1_l)^T and Γ = diag{σ^2_ω,1, ⋯, σ^2_ω,p}. The matrix X^T X is required to be full rank to guarantee uniqueness. Then, we learn the constant gain matrix

K̂ = B^† (A - Â^c)

as the control law. The error between the estimate and the true K is bounded by

‖K - K̂‖_F = ‖B^† (A - A^c) - B^† (A - Â^c)‖_F ⩽ ‖B^†‖_F ‖Â^c - A^c‖_F ⩽ ‖B^†‖_F √(n) ‖Â^c - A^c‖.

According to Theorem 6 in <cit.>, the estimation error of the gain matrix converges to zero as the number of samples l grows, i.e.,

lim_l →∞ ‖Â^c - A^c‖ = 0,  ‖Â^c - A^c‖ ∼ 𝒪(1/√(l)).
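As a sketch of this estimation step, the following code implements the bias-corrected OLS estimator and the gain recovery K̂ = B^†(A - Â^c) from a single observed trajectory. The data layout (observations stacked as rows) and the known noise covariance Γ are assumptions consistent with the definitions above.

import numpy as np

def estimate_constant_gain(Y_traj, A, B, C, Gamma):
    """Bias-corrected OLS estimate of the constant gain from one trajectory.

    Y_traj : (l+1) x n array with rows y_0, ..., y_l
    Gamma  : observation noise covariance (assumed known)
    """
    X, Y = Y_traj[:-1], Y_traj[1:]
    l = X.shape[0]
    M = (Y.T @ X / l) @ np.linalg.inv(X.T @ X / l - Gamma)  # approx C A^c C^{-1}
    A_c = np.linalg.inv(C) @ M @ C
    return np.linalg.pinv(B) @ (A - A_c)                    # K = B^dagger (A - A^c)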
Control law learning in the infinite case is quite simple but inspiring, revealing that the feedback gain matrix K can be obtained through OLS-like estimators. In the next subsections, we turn back to the finite-time problem.

§.§ Final-state Only Setting

In this subsection, we study a simplified objective function setting without penalties on the states x_k^T Q x_k (i.e., Q = 0), described as

J_1 = 1/2 (x_N^T x_N + ∑_k = 0^N-1 u_k^T R u_k).

We set H = I in the sense that, for minimizing the distance between the final state and the target point, the weights on each component of x_N are usually considered equal. The IOC problem here is then to estimate the parameter R only, given M optimal state trajectories in the presence of observation noise. We impose no requirements on the data other than Assumption <ref>, which means these M trajectories can be just fragments of some complete state trajectories. Based on Pontryagin's minimum principle (PMP) <cit.>, we introduce the following lemma: Consider the optimization problem 𝐏_0 with J_1. The optimal control inputs u_0:N-1^* and the corresponding state trajectories x_0:N^* satisfy

1) optimal control policy u_i^* = -R^-1 B^T λ_i+1^*,
2) costate equation λ_i^* = A^T λ_i+1^*,
3) terminal condition λ_N^* = H x_N^*,

with the given initial state x_0^*, where λ_i is the costate of the system, i = 0, 1, ⋯, N-1. It is straightforward to see that 𝐏_0 has the same optimal solution for objective functions J_1 and αJ_1, α ∈ ℝ_+. We provide the following lemma to reveal that the PMP conditions in Lemma <ref> have this scalar ambiguity property as well. Suppose parameters H, R, x_1:N^j, λ_1:N^j satisfy the PMP conditions; then H' = αH, R' = αR, x_1:N^j, αλ_1:N^j for α ∈ ℝ_+ are also a set of solutions to these conditions. The proof is given in Appendix <ref>.

Note that the PMP conditions are necessary conditions for the optimal solution of 𝐏_0. We set the PMP conditions as constraints and formulate the inverse control problem, Problem <ref>, based on the M trajectory observations {𝒴^1, ⋯, 𝒴^M}. Since observation noise exists, x_1:N_j and λ_1:N_j are also optimization variables. Moreover, in our case we constrain H = I; then, according to Lemma <ref>, the optimal solution to Problem <ref> is expected to be unique. (Inverse Control Problem with PMP)

min_R̂, x_1:N_j^j, λ_1:N_j^j  1/M ∑_j=1^M ∑_i = 1^N_j ‖y_i^j - C x^j_i‖^2
s.t. x^j_i+1 = A x^j_i - B R̂^-1 B^T λ^j_i+1,
  λ^j_i = A^T λ^j_i+1, λ^j_N_j = x^j_N_j, x^j_0 = y_0^j - x̂_T,
  i = 0, 1, ⋯, N_j-1, j = 1, …, M.

We offer the following theorem to further establish the well-posedness of the problem, i.e., that the inverse problem of 𝐏_0 with J_1 has a unique solution. Denote A^c_k = A - B K_k as the closed-loop system matrix at time k. Suppose two closed-loop system matrix sequences A^c_0:N-1, A^c'_0:N-1 are optimal solutions to problem 𝐏_0 with J_1(R) and J_1(R'), respectively. If A^c_k = A^c'_k for all k, then R = R'. The proof is given in Appendix <ref>. Furthermore, we present Theorem <ref> to reveal that the solution of Problem <ref> is the true value of R when the amount of observations is large enough. Suppose R̂, x_1:N^*, λ_1:N^* are the optimal solution to Problem <ref>. We have

Pr(lim_M →∞ ‖R̂ - R‖ = 0) = 1.

The proof is given in Appendix <ref>. With the above theorems, we are able to obtain the objective function parameter estimate R̂ by solving Problem <ref>.
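One possible way to set up Problem <ref> numerically is sketched below. To keep the sketch small, we restrict R to be diagonal (parameterized through an exponential to stay positive definite), treat each trajectory's terminal state as a free variable, enforce the PMP dynamics by construction, and penalize inconsistency between the rolled-out final state and the terminal-state guess through a weight ρ. The paper itself solves the full constrained problem (e.g., with an interior-point solver), so everything specific here, the diagonality, ρ and the optimizer, is an assumption. Observations are assumed to be already shifted by the target estimate x̂_T.

import numpy as np
from scipy.optimize import minimize

def identify_R(trajs, A, B, C, rho=10.0):
    """Sketch of Problem 1: fit a diagonal R and each trajectory's terminal
    state so that the PMP conditions reproduce the observations (H = I)."""
    n, m = B.shape
    M = len(trajs)

    def unpack(theta):
        r = np.exp(theta[:m])                        # positive diagonal of R
        xN = theta[m:].reshape(M, n)                 # terminal-state guesses
        return np.diag(1.0 / r), xN                  # return R^{-1} directly

    def loss(theta):
        R_inv, xN = unpack(theta)
        total = 0.0
        for j, Yj in enumerate(trajs):
            N = len(Yj) - 1
            lam, lams = xN[j], [xN[j]]               # lambda_N = H x_N, H = I
            for _ in range(N):                       # costate runs backward
                lam = A.T @ lam
                lams.append(lam)
            lams = lams[::-1]                        # lams[i] = lambda_i
            x = np.linalg.inv(C) @ Yj[0]
            for i in range(N):
                total += np.sum((Yj[i] - C @ x) ** 2)
                x = A @ x - B @ R_inv @ B.T @ lams[i + 1]
            total += np.sum((Yj[N] - C @ x) ** 2)
            total += rho * np.sum((x - xN[j]) ** 2)  # consistency with the guess
        return total / M

    theta0 = np.concatenate([np.zeros(m)] + [Yj[-1] for Yj in trajs])
    res = minimize(loss, theta0, method="L-BFGS-B")
    return np.diag(np.exp(res.x[:m]))                # estimated R (up to noise)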
§.§ Classic LQR Setting

Now we consider adding the process-state term to the minimization; the objective function is written as

J_0 = 1/2 x_N^T H x_N + ∑_k = 0^N-1 1/2 (x_k^T Q x_k + u_k^T R u_k).

Solving the optimization problem 𝐏_0, we obtain a sequence of time-varying gain matrices K_0:N-1 calculated by

K_k = (R + B^T P_k+1 B)^-1 B^T P_k+1 A,

where P_k satisfies the following iterative equation with P_N = H:

P_k = K_k^T R K_k + (A - B K_k)^T P_k+1 (A - B K_k) + Q.

Inspired by the infinite-time case in Section <ref>, we propose an algorithm consisting of two parts: i) estimate the feedback gain matrices K̂_k from the observation trajectories; ii) calculate a proper (Ĥ, Q̂, R̂) from the matrices K̂_k. We describe the two parts separately. Notice that the form of J_0 subsumes J_1 (Q = 0, H = I). However, we still treat them separately and provide two different algorithms, since identifying J_1 imposes no requirements on the observations, which can be only a segment of the trajectory, while in this subsection we need each observation to be a whole trajectory containing the final state, as shown below.

* Feedback Gain Matrices Estimation

Suppose we obtain M trajectory observations {𝒴^1, …, 𝒴^M}. As Remark <ref> notes, in this subsection we require each observation to contain the final state of the trajectory, which is not difficult to achieve, since we have estimated the target state, or we can simply observe long enough for the agent to reach its target. Then we take l ⩽ min{l_1, …, l_M} steps from the end of each trajectory and reorder them as 𝒴̅^j = y̅^j_0:l = y^j_l_j-l:l_j. Now we have a truncated trajectory set {𝒴̅^1, …, 𝒴̅^M} for the subsequent estimation. Denote the closed-loop system matrix at time k as A_k^c = A - B K_k; then we have

y_k+1 = C A^c_k x_k + ω_k+1

for all k. Based on equations (<ref>) and (<ref>), we provide the following lemma: If H, Q, R remain unchanged, then considering two complete optimal state trajectories {x^1_0:N_1}, {x^2_0:N_2} with different control horizons N_1 ⩽ N_2 generated by the given system, the matrix sequence {K_N_1^1, K_N_1-1^1, …, K_0^1} equals {K_N_2^2, K_N_2-1^2, …, K_N_2-N_1^2}.

Denote 𝒳^j as the state sequence estimated from the truncated trajectory 𝒴̅^j. We utilize the following method to estimate the states x̂_k from the observations y_k through a filter <cit.>. Denote D = CB. For {y_0:l}, there is

ζ_k+1 = (A - ABD^†C) ζ_k + ABD^†(y_k - CA^k x_0),
û_k-1 = D^† C ζ_k - D^†(y_k - CA^k x_0),  k = 1, …, l,

where x_0 = C^-1 y_0 and the intermediate variable ζ_0 = 0. Then with û_0:l-1, we have

η_k+1 = A η_k + AB û_k, x̂_k = -η_k - B û_k + A^k x_0,

where the intermediate variable η_0 = 0. We omit the subscript here for brevity, i.e., û_k = û_k|l and x̂_k = x̂_k|l. Then, from Lemma <ref>, it is clear that the sequence pairs (𝒴̅^j, 𝒳^j), j = 1, …, M share the same {A^c_0:l-1}. Therefore, similar to the infinite-time case, for each k we design the estimator as

Â^c_k = C^-1 (Y_k^T X_k)(X_k^T X_k)^-1,

where X_k = (x̂_k^1, …, x̂_k^M)^T and Y_k = (y̅_k+1^1, …, y̅_k+1^M)^T. The matrix X_k^T X_k is required to be full rank (M ⩾ n) to guarantee uniqueness. Then we have

K̂_k = B^† (A - Â_k^c), k = 0, …, l-1,

and lim_M →∞ ‖K̂_k - K_k‖ = 0 <cit.>.
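The per-step estimator can be sketched as follows. For brevity, the states are approximated by x̂_k = C^-1 y̅_k instead of the filter recursions above; this is a simplification that ignores the noise correction, and M ⩾ n end-aligned trajectories are assumed.

import numpy as np

def estimate_gain_sequence(trajs, A, B, C):
    """Estimate K_0, ..., K_{l-1} from M end-aligned truncated trajectories.

    trajs : list of M arrays of shape (l+1) x n, each the last l+1 observations
            of one trajectory, so row k plays the role of y-bar_k
    """
    Xs = np.stack([np.linalg.solve(C, Yj[:-1].T).T for Yj in trajs])  # crude states
    Ys = np.stack([Yj[1:] for Yj in trajs])
    C_inv, B_pinv = np.linalg.inv(C), np.linalg.pinv(B)
    gains = []
    for k in range(Xs.shape[1]):
        Xk, Yk = Xs[:, k, :], Ys[:, k, :]             # M x n regression matrices
        M_hat = Yk.T @ Xk @ np.linalg.inv(Xk.T @ Xk)  # approx C A^c_k, needs M >= n
        gains.append(B_pinv @ (A - C_inv @ M_hat))    # K_k = B^dagger (A - A^c_k)
    return gains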
* Objective Function Parameter Calculation

After obtaining the feedback gain matrix estimates K̂_0:l-1, we now find a parameter set (H, Q, R) that generates exactly K̂_0:l-1 through the iteration equations (<ref>), (<ref>). Similarly, we first provide the following theorem to show the scalar ambiguity property in this case. Suppose two feedback gain matrix sequences K_0:N-1, K'_0:N-1 are generated with two sets of parameters H, Q, R and H', Q', R', respectively, through equation (<ref>). If there exist at least mn(n+1)(m+1)/2 linearly independent vectors vec(𝒫_i(ℰ_i)) defined in (<ref>) and K_k = K'_k for all k, then H' = αH, Q' = αQ, R' = αR for some α ∈ ℝ_+. See the proof in Appendix <ref>. Theorem <ref> provides a criterion for the identifiability of the objective function. If the control horizon is set as N < mn(n+1)(m+1)/2, the true weight parameters H, Q, R of the control objective can never be identified accurately, which can be utilized to preserve the system's intention. Now, with the identifiability guarantee, we introduce our IOC algorithm based on the following lemma derived from (<ref>). For H, Q, R in the objective function J_0, we have

a_i (I_i ⊗ R) b_i = c_i [H 0; 0 I_i-1 ⊗ Q] d_i,

for i = 1, …, T, with

a_i = [I_n  -B^T K_N-i+1^T  -B^T A_N-i+1^c^T K_N-i+2^T  ⋯  -B^T ∏_r = 2^i-1 A_N-r^c^T K_N-1^T],
c_i = B^T [∏_r = 1^i-1 A_N-r^c  ∏_r = 2^i-1 A_N-r^c  ⋯  A_N-i+1^c  I_n],
b_i = [K_N-i; K_N-i+1 A_N-i^c; ⋮; K_N-1 ∏_r = 2^i A_N-r^c],
d_i = [∏_r = 1^i A_N-r^c; ∏_r = 2^i A_N-r^c; ⋮; A_N-i^c],

where K_N-T:N-1 are gain matrices. See the proof in Appendix <ref>. Note that equation (<ref>) iterates from k = N-1, which is the reason why we require the observations to contain the final states of the trajectories in the previous step. We set (<ref>) as the constraint of our estimation problem; Theorem <ref> ensures the identifiability of H, Q, R (i.e., as the observations increase, the estimation error of K̂_k decreases and we recover the real parameters), while we use an additional criterion to guarantee the uniqueness of the solution: the estimates must minimize the condition number of the block diagonal matrix consisting of Ĥ, Q̂, R̂. Supposing we obtain K̂_N-T:N-1, the optimization problem is formulated as: (H, Q, R Estimation with Condition Number Minimization)

(Ĥ, Q̂, R̂, τ̂) = arg min_H,Q,R,τ τ^2  s.t. (<ref>), I ≼ diag(H, Q, R) ≼ τ I.

For Problem <ref>, the number of equality constraints and T ⩽ N in (<ref>) can be decided by the trade-off between accuracy and computation cost. If the gain matrix estimates are all accurate, the above LMI problem is feasible. Since it is a convex optimization problem with linear constraints, there exists at least one exact solution. (A minimal sketch of this formulation with an off-the-shelf solver is given below.)
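The sketch below sets up Problem <ref> with a convex modeling tool, assuming the constraint matrices (a_i, b_i, c_i, d_i) of (<ref>) have already been assembled from the estimated gains. The block-diagonal bound I ≼ diag(H, Q, R) ≼ τI splits into per-block LMIs, and minimizing τ is equivalent to minimizing τ^2 here since τ ⩾ 1. An SDP-capable solver (e.g., SCS) is assumed to be installed; the block slicing assumes a_i, b_i are partitioned in m-wide blocks and c_i, d_i in n-wide blocks, as in the lemma.

import numpy as np
import cvxpy as cp

def solve_problem2(abcd, n, m):
    """Problem 2 sketch: recover (H, Q, R) up to a common scale by minimizing
    tau subject to the linear constraints (37) and I <= diag(H,Q,R) <= tau*I.

    abcd : list of T tuples (a_i, b_i, c_i, d_i) built from the estimated gains
    """
    H = cp.Variable((n, n), symmetric=True)
    Q = cp.Variable((n, n), symmetric=True)
    R = cp.Variable((m, m), symmetric=True)
    tau = cp.Variable()
    cons = [H >> np.eye(n), Q >> np.eye(n), R >> np.eye(m),
            tau * np.eye(n) >> H, tau * np.eye(n) >> Q, tau * np.eye(m) >> R]
    for i, (a, b, c, d) in enumerate(abcd, start=1):
        # a_i (I_i kron R) b_i, written block by block so it stays affine in R
        lhs = sum(a[:, j*m:(j+1)*m] @ R @ b[j*m:(j+1)*m, :] for j in range(i))
        # c_i diag(H, I_{i-1} kron Q) d_i, again block by block
        rhs = c[:, :n] @ H @ d[:n, :]
        rhs = rhs + sum(c[:, n+j*n:n+(j+1)*n] @ Q @ d[n+j*n:n+(j+1)*n, :]
                        for j in range(i - 1))
        cons.append(lhs == rhs)
    cp.Problem(cp.Minimize(tau), cons).solve()   # needs an SDP solver, e.g. SCS
    return H.value, Q.value, R.value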
However, due to the existence of observation noise, the estimates may contain errors and the problem may have no solution (be infeasible). Therefore, we offer a further analysis to determine whether Problem <ref> is solvable. Write formula (<ref>) in the following expression:

(b_i^T ⊗ a_i) · vec(I_i ⊗ R) = (d_i^T ⊗ c_i) · vec([H 0; 0 I_i-1 ⊗ Q]),

for i = 1, …, T, where we denote the left vector by 𝒢^1(R) := vec(I_i ⊗ R) and the right vector by 𝒢^2(H, Q). Define the set of rows at which 𝒢^1(R) has zero entries as 𝒞^1 = {k | 𝒢^1(R)_k = 0}, and the set 𝒞^2 for 𝒢^2(H, Q) similarly. Then we take

𝒦_i^1 = (b_i^T ⊗ a_i)(:, 𝒞^1), 𝒦_i^2 = (d_i^T ⊗ c_i)(:, 𝒞^2),

i.e., we delete the columns corresponding to the zero entries, with which (<ref>) is simplified into

𝒦_i^1 · (1_i ⊗ vec(R)) = 𝒦_i^2 · [vec(H); 1_i-1 ⊗ vec(Q)].

Therefore, we can combine all T equations as

𝒦_T^1 · (1_T ⊗ vec(R)) = 𝒦_T^2 · [vec(H); 1_T-1 ⊗ vec(Q)]
⇔ [𝒦_T^1  -𝒦_T^2]_Φ_T · [1_T ⊗ vec(R); vec(H); 1_T-1 ⊗ vec(Q)]_Θ_T(H,Q,R) = [0; ⋮; 0],

where

𝒦_T^1 = [𝒦_1^1 0 ⋯ 0; 𝒦_2^1 ⋯ 0; ⋱ ⋮; 𝒦_T^1], 𝒦_T^2 = [𝒦_1^2 0 ⋯ 0; 𝒦_2^2 ⋯ 0; ⋱ ⋮; 𝒦_T^2].

Based on the above derivation, we provide the following theorem. Problem <ref> is infeasible when the rank

rank(Φ_T) ⩾ n^2 + n + (m^2 + m)/2,

which implies that the equation

Φ_T · Θ_T(X_h, X_q, X_r) = 0

has only the zero solution for the unknown symmetric variables X_h, X_q ∈ 𝕊^n and X_r ∈ 𝕊^m. Therefore, we consider the following optimization problem instead when Problem <ref> is infeasible:

min_Ĥ, Q̂, R̂ ‖Φ_T · Θ_T(Ĥ, Q̂, R̂)‖_2^2  s.t. ‖Θ_T(Ĥ, Q̂, R̂)‖ = 1,

which can be solved by existing QP solvers <cit.>.

§ CONTROL HORIZON ESTIMATION AND LQR PROBLEM RECONSTRUCTION

Note that with the target state and the objective function, we can generate a satisfying control trajectory. However, in order to imitate the agent's motion precisely, we still need to estimate the control horizon, since different control horizons lead to different inputs and final states. In this section, we first introduce the control horizon estimation algorithm, then reconstruct the LQR problem with the estimates and present the input prediction application.

§.§ Control Horizon Estimation

Suppose R_m is now driving along an optimal trajectory generated by 𝐏_0 toward the target. To estimate the control horizon N of this trajectory, we need an l-length continuous observation 𝒴 = {y_0, y_1, …, y_l}. We build the following optimization problem for the estimation. (Estimation of the Control Horizon)

min_N̂  ∑_i = 1^l ‖y_i - C x_i‖^2 := J_N(N̂; y_0:l)
s.t. x_i+1 = A x_i + B u_i,
  u_i = -K_i x_i, x_0 = y_0 - x̂_T,
  K_i = (R̂ + B^T P_i+1 B)^-1 B^T P_i+1 A,
  P_i = K_i^T R̂ K_i + A^c_i^T P_i+1 A^c_i + Q̂, P_N̂ = Ĥ,
  i = 0, 1, …, N̂-1,

where y_1:l is the observation of R_m up to time k = l. The above problem expresses that the trajectory obtained under the optimal solution N̂^* deviates least from the observed data y_1:l. Since the optimization variable N̂ ∈ ℕ_+ does not explicitly appear in the objective and constraints, the problem is a non-convex optimization over the set of positive integers and hard to solve directly. Therefore, we turn to investigating how the value of the objective function changes with N̂. Note that N > l, and as N̂ grows from l to the real horizon N, the function J_N gradually decreases to its minimum. Then, as N̂ continues to grow, J_N increases and finally converges to the fixed value ∑_i = 1^l ‖y_i‖^2. See an illustration in Fig. <ref> of Section <ref>. According to this analysis of J_N, to obtain the solution of Problem <ref> we could traverse from N̂ = l and keep increasing N̂ until J_N no longer decreases, at which point the optimal N̂^* is found. However, this consumes much computation if N ≫ l. Therefore, we propose an algorithm based on binary search, inspired by the line search of gradient descent methods. Since J_N is a discrete function of N̂, we use the function values at both N̂ and N̂+1 to approximate the gradient at point N̂, which is given by

g_N = J_N(N̂+1; y_0:l) - J_N(N̂; y_0:l) / ((N̂+1) - N̂).

Thus, if g_N < 0, then N̂ < N̂^*; if g_N > 0, then N̂ ⩾ N̂^*. The detailed algorithm is shown in Algorithm <ref>.

Algorithm 1: Binary Search for the Optimal N̂
Input: the observation trajectory and its length, y_0:l, l; the target state estimate x̂_T; the system dynamics A, B, C; the objective function estimates Ĥ, Q̂, R̂; the step length θ.
Output: the optimal control horizon estimate N̂^*.
1: Determine the initial bound: set the lower bound N^- = l+1 and let N̂' = l + θ;
2: while g_N̂' < 0 do N̂' = N̂' + θ;
3: set the upper bound N^+ = N̂';
4: Binary search: while N^+ - N^- > 1 do
5:   take the midpoint N = ⌊(N^- + N^+)/2⌋;
6:   if g_N > 0 then set N^+ = N, else set N^- = N;
7: N̂^* = arg min_N̂ {J_N(N̂; y_0:l); N̂ ∈ {N^+, N^-}};
8: return the control horizon estimate N̂^*.
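A sketch implementing Algorithm <ref> is given below. It assumes a helper routine for the backward iteration (<ref>)-(<ref>) and evaluates J_N by rolling the closed-loop dynamics forward from x_0 = C^-1 y_0 - x̂_T; memoization of J_N, as recommended in the remark that follows, is omitted for brevity.

import numpy as np

def lqr_gains(A, B, H, Q, R, N):
    """Backward iteration (11)-(12) with P_N = H; returns [K_0, ..., K_{N-1}]."""
    P, gains = H, []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K) + Q
        gains.append(K)
    return gains[::-1]

def J_N(N_hat, Y, A, B, C, H, Q, R, x_T):
    """Deviation of the horizon-N_hat optimal trajectory from the observations."""
    gains = lqr_gains(A, B, H, Q, R, N_hat)
    x = np.linalg.solve(C, Y[0]) - x_T            # x_0 = C^{-1} y_0 - x_T
    cost = 0.0
    for i in range(1, len(Y)):
        x = (A - B @ gains[i - 1]) @ x            # closed-loop rollout
        cost += np.sum((Y[i] - C @ (x + x_T)) ** 2)
    return cost

def estimate_horizon(Y, theta, *args):
    """Algorithm 1: bracket expansion with step theta, then bisection on the
    sign of the approximate gradient g_N = J_N(N+1) - J_N(N)."""
    l = len(Y) - 1
    lo, hi = l + 1, l + theta
    while J_N(hi + 1, Y, *args) < J_N(hi, Y, *args):   # still descending
        hi += theta
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if J_N(mid + 1, Y, *args) > J_N(mid, Y, *args):
            hi = mid
        else:
            lo = mid
    return min((lo, hi), key=lambda n: J_N(n, Y, *args))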
Notice that after determining the initial range [N^-, N^+], if we simply leveraged the function values at two interior break points N_1, N_2, N^- ⩽ N_1 < N_2 ⩽ N^+, to find the optimum, then to have a constant compression ratio c the break points should satisfy (N_2 - N^-)/(N^+ - N^-) = (N^+ - N_1)/(N^+ - N^-) = c with c = (√5 - 1)/2 ≈ 0.618. However, with Algorithm <ref>, since we approximate the gradient, the compression ratio is improved to c = 0.5. Moreover, in order to reduce the computation cost, it is recommended to store the result each time we calculate the deviation sum J_N corresponding to a certain N̂, to avoid repeated computation.

§.§ LQR Reconstruction

Now we have obtained the estimates of the target state x̂_T and the control horizon N̂^*, and identified the weighting matrices in the objective function J_0. Therefore, we can reconstruct the optimization problem by substituting x_T, H, Q, R, N in 𝐏_0 with our estimates and calculate the control law K̂_0:N̂-1 through the iterations (<ref>) and (<ref>). We provide the future input prediction as one important application of our control law learning algorithm. Suppose that at time k = l the agent R_o has observed R_m for l+1 steps and obtained a series of observations 𝒴 as in (<ref>). Denote u_l as the real input at k = l that we want to predict and û_l|l as our input inference. R_o tries to infer the current control input accurately, which means minimizing the error between û_l|l and u_l. From (<ref>), u_l is calculated by -K_l x_l; then we have

min ‖u_l - û_l|l‖^2 = ‖-K_l x_l - û_l|l‖^2.

Notice that the state estimate x̂_l|l can be obtained by a Kalman filter. Therefore, we can infer the control input of the target agent R_m at time l by û_l|l = μ_0, where μ_0 is calculated by solving the following reconstructed optimization problem: (Control Input Prediction)

min_μ_0:N'-1 J' = x_N'^T Ĥ x_N' + ∑_k = 0^N'-1 (x_k^T Q̂ x_k + μ_k^T R̂ μ_k)
s.t. (<ref>), x_0 = x̂_l|l - x̂_T,

in which N' = N̂^* - l. The solution of this reconstructed LQR problem is given in <cit.> as

μ_0 = -K_0 x_0,

where K_0 is calculated by (<ref>). According to the principle of optimality <cit.> in dynamic programming, if a control policy p_0,N^* is optimal for the initial point x_0, then for any l ∈ {1, 2, ⋯, N-1} its sub-policy p_l,N^* is also optimal for the subprocess containing the last N-l+1 steps with the initial point x_l.
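For concreteness, a sketch of this forward pass is given below. It reuses the lqr_gains routine from the previous sketch to solve the reconstructed problem of horizon N' = N̂^* - l and returns both the one-step inference μ_0 = -K_0 x_0 and the predicted future states; the conversion between world and target-centered coordinates follows the conventions above, and the function name is our own.

def predict_future(x_hat_l, x_T_hat, l, N_star, A, B, H, Q, R):
    """Problem 4 sketch: solve the tail problem of horizon N' = N_star - l from
    the filtered state; returns mu_0 = -K_0 x_0 and the predicted future states."""
    gains = lqr_gains(A, B, H, Q, R, N_star - l)   # routine from the sketch above
    x = x_hat_l - x_T_hat                          # target-centered coordinates
    mu_0 = -gains[0] @ x
    future = []
    for K in gains:
        x = (A - B @ K) @ x
        future.append(x + x_T_hat)                 # back to world coordinates
    return mu_0, future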
* Prediction Error Analysis

Note that when the trajectory sample size M → ∞, we have x̂_T = x_T and Ĥ = H, R̂ = R, Q̂ = Q according to the law of large numbers. However, the accuracy of the estimate N̂^* cannot be guaranteed, since it is the value that minimizes the function J_N, which is affected by the observation errors. Therefore, the sensitivity of the input prediction û_l|l with respect to N̂^* needs to be analyzed. The influence of N̂^* is reflected in the calculation of K_0 in Problem <ref> and formula (<ref>). Now suppose the real control horizon N generates K_0^r, while our estimate N̂^* yields K̂_0. We show in the following analysis that the estimation error ‖K̂_0 - K_0^r‖ can be bounded and controlled. Note that a discrete-time LQR problem with finite control horizon, such as 𝐏_0, is solved through dynamic programming, and the iteration of the intermediate parameter P_k can be described as

P_k-1 = A^T P_k A - A^T P_k B (R + B^T P_k B)^-1 B^T P_k A + Q,

for k = 1, …, N with P_N = H > 0. According to <cit.>, when N → ∞, there is P_0 = P^* > 0, where P^* satisfies the discrete Riccati equation

P^* = A^T P^* A - A^T P^* B (R + B^T P^* B)^-1 B^T P^* A + Q,

and correspondingly,

K_0 = K^* = (R + B^T P^* B)^-1 B^T P^* A.

What's more, the sequence {P_k} is monotonic (in our analysis, a matrix sequence is monotonic if P_0 ⋚ P_1 ⋚ … ⋚ P_N, where P_i ⩾ P_j means P_i - P_j is a positive semi-definite matrix). According to Lemma <ref>, the difference between the heads of two gain sequences with control horizons N_1, N_2, N_1 ⩽ N_2 can be converted into a comparison inside the single sequence generated by N_2, i.e., ‖K_0^(1) - K_0^(2)‖ = ‖K_N_2-N_1^(2) - K_0^(2)‖. Therefore, we have the estimation error

‖K̂_0 - K_0^r‖ = ‖K_|N̂^* - N| - K_0‖,

where K_0:N_m is generated under the horizon N_m = max(N, N̂^*). We then focus on the convergence of the {K_k} sequence under fixed H, Q, R. We offer the following theorem on the input prediction error and the sensitivity to the control horizon estimate. There exist a positive integer N̄ ∈ ℕ_+ and η > 0, where η can be set as the maximum tolerable inference error, such that for any N > N̄ we have ‖μ_0^(N+δN) - μ_0^(N)‖ ⩽ η, where μ_0^(N) denotes the input inference when the control horizon estimate is N̂^* = N, and η is proportional to δN ∈ ℕ_+. See the proof in Appendix <ref>. The complete algorithm flow is shown in Algorithm <ref>.

Algorithm 2: LQR Reconstruction based Control Input Prediction
Input: the observation data including M history trajectories {𝒴^1:M}; the observation of the current trajectory and its length 𝒴, l; the system dynamics A, B, C.
Output: the control input inference at time l, û_l|l.
1: Estimate the target state x̂_T through curve fitting and calculating the intersection;
2: if only the final state is considered then identify the objective function parameter R̂ by solving Problem <ref> with the M trajectories;
3: else if process states are considered then estimate the feedback gain matrices and identify Ĥ, Q̂, R̂ by solving Problem <ref>;
4: else return false;
5: Calculate the optimal control horizon estimate N̂^* with Algorithm <ref>;
6: Formulate and solve Problem <ref> with the previous estimates;
7: Obtain the one-step input μ_0 with (<ref>) in the forward pass;
8: return the control input prediction û_l|l = μ_0.

§ SIMULATION RESULTS

In this section, we conduct multiple simulations of our algorithm and apply it to future input and trajectory prediction to show its performance and efficiency. Consider a controllable linear system modeled by a three-dimensional dynamic function as (<ref>) with

A = [1.4155 -0.0876 0.7213; 0.8186 2.7338 -1.2750; -0.3118 -0.7573 1.2008], B = [-0.0484 0.1611 -1.8972; -1.1350 1.6600 0.1003; 0.3905 -0.7851 0.1055],

generated randomly, where n = m = 3. Assume the agent drives to the target state [6, 8, 4]^T. The observation function is (<ref>) with C = I_3, and the observation noise follows the Gaussian distribution 𝒩(0, 0.02^2). We now obtain a set of trajectory observations.
By applying external incentives and line fitting, we calculate the intersection point as the estimate of the target state x̂_T = [ 6.063, 8.086, 4.039 ]^T.

§.§ Final-state Only Setting

First, we consider the control optimization problem 𝐏_0 with the final-state setting (as J_1). We suppose

H = I_3× 3, R = [ 0.4 I_2× 2 0_2× 1; 0_1× 2 0.8 ].

To test the estimation algorithm for R, we set the trajectory length l_j=10 and random initial states x_0^j for all j = 1,2,⋯,M to ensure linear independence. We use the MATLAB function fmincon with the “interior-point” method to solve Problem <ref>. We pick the Frobenius norm to measure the estimation error:

err(R̂) = ‖R̂-R ‖_F/‖ R ‖_F.

The results are shown in Fig. <ref> and the time costs are listed in Table <ref>. Notice that since R is a 3×3 matrix, more than three trajectories are required to solve for a unique R̂. We can see that as the number of trajectories M increases from 3 to 13, the estimation error shows a decreasing trend. However, as M becomes larger, the search space of the problem increases as well, and we need to continuously enlarge the parameter MaxFunEvals of fmincon to ensure an accurate solution. The larger number of iterations leads to a longer solving time. Therefore, considering the trade-off between estimation error and computational efficiency, we choose M=7 and obtain the estimate

R̂ = [0.4193 -0.0078 -0.0063; -0.0026 0.4083 -0.0071; -0.0089 -0.0111 0.8166].

§.§ Classic LQR Setting

Now we consider the more complex objective function setting J_0 in (<ref>). We set the parameters

H = I_3× 3, Q = 0.2 I_3× 3, R = [ 0.4 I_2× 2 0_2× 1; 0_1× 2 0.8 ]

and estimate them simultaneously by the algorithm in Sec. <ref>-B. We first collect M trajectories containing their terminal states and compute the feedback matrix K_k at each step using the proposed estimator (<ref>). Note that when there is no observation noise, Problem <ref> is always feasible for any T, since rank(Φ_T) = 9 < (n^2+n)+(m^2+m)/2 = 18 according to Theorem <ref>, and the solutions are exactly equal to the real H,Q,R. However, if observation noise exists, Problem <ref> is only feasible at T=1 (rank(Φ_1) = 9<18). For T ⩾ 2, we have rank(Φ_T) = 9 · T ⩾ 18, which requires transforming the problem into the QP problem (<ref>). Here we set T = 6 and solve Problem <ref> and (<ref>) with the solvers YALMIP <cit.> and SeDuMi <cit.> in MATLAB. The estimation errors are shown in Fig. <ref>. We use the Frobenius norm to measure the estimation error:

err = ‖[Ĥ Q̂ R̂ ]-[ H Q R ]·α‖_F/‖[ H Q R ]·α‖_F.

With the trajectory number fixed, as the observation noise decreases, the estimate K̂_k becomes more accurate and the estimation errors of Ĥ,Q̂,R̂ gradually decrease. We compare with the IOC algorithm proposed in <cit.> (we set H=Q here since <cit.> estimates only Q,R) and find that when the observation noise is small enough, the estimation errors obtained by the two algorithms are basically the same, while when the noise is not negligible, ours works better. Moreover, comparing the variance over multiple simulation runs, our algorithm is overall smaller in variance and hence more stable than theirs. Finally we take the following estimates:

Ĥ = [5.0699 -0.1320 -0.1507; -0.1320 4.9445 -0.0058; -0.1507 -0.0058 4.9836], Q̂ = [ 1.0069 0.0097 0.0045; 0.0097 1.0137 0.0064; 0.0045 0.0064 1.0030 ], R̂ = [1.9926 -0.0791 -0.1309; -0.0791 1.9727 -0.0556; -0.1309 -0.0556 3.8733]

with the multiplicative scalar α = 5. Now we start to infer the inputs and predict the future states of the agent.
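A minimal sketch of this inference step is given below: the reconstructed LQR problem is solved with the estimated parameters, the one-step input inference û_l|l = μ_0 = -K_0 x_0 is extracted, and the optimal policy is rolled forward to predict the future states. The Kalman-filter state estimate x̂_l|l is assumed to be available, and the function and variable names are ours.

import numpy as np

def lqr_gains(A, B, H, Q, R, N):
    # Backward Riccati recursion (<ref>); returns K_0, ..., K_{N-1}.
    P, Ks = H.copy(), []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        Ks.append(K)
        P = A.T @ P @ (A - B @ K) + Q
    return Ks[::-1]

def predict_future(A, B, H_hat, Q_hat, R_hat, x_hat_ll, x_hat_T, N_hat, l):
    """Reconstruct Problem <ref> with the estimates and roll the optimal
    policy forward in the shifted coordinates x := x - x_hat_T, starting
    from x_0 = x_hat_{l|l} - x_hat_T. Returns the input inference
    u_hat_{l|l} = mu_0 = -K_0 x_0 and the predicted states x_{l+1:N_hat}."""
    Ks = lqr_gains(A, B, H_hat, Q_hat, R_hat, N_hat - l)  # N' = N_hat - l gains
    x = x_hat_ll - x_hat_T
    u0 = -Ks[0] @ x
    preds = []
    for K in Ks:
        x = (A - B @ K) @ x
        preds.append(x + x_hat_T)
    return u0, np.array(preds)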
We observe for l = 15 steps and set θ = 10. With Algorithm <ref> and the shape of the J_N curve in Fig. <ref>, we obtain the optimal control horizon estimate N̂^* = 20. At this point, we have completed the estimation of x̂_T, the objective function parameters, and N̂^*. The current state is estimated by the Kalman filter as x̂_l|l. Then, we reconstruct and solve the control optimization problem of the mobile agent as Problem <ref>:

min_μ_0:4   x_5^T Ĥ x_5 + ∑_k = 0^4 (μ_k^T R̂μ_k + x_k^T Q̂ x_k)   s.t.  (<ref>), x_0 = x̂_l|l - x̂_T, k = 0,…,4.

Note that the solutions x_1:5 of the above problem are actually predictions of the future states of the agent from k=16 to 20. Denote the prediction error by ‖x̂ - x‖. Compared with prediction through polynomial regression <cit.> in the presence of the same observation noise distribution, the results are shown in Table <ref>. The curve fitting is based on all l=15 history states, and the highest polynomial order is chosen as 3, which is optimal. We can see that the prediction error generated by our method is overall lower than that of the fitting method. Moreover, the error of curve fitting grows with time k, while the error of our method decreases as x_k goes to 0.

§ EXPERIMENTS

We demonstrate the algorithm on our self-designed mobile robot platform <cit.>, shown in Fig. <ref>. The AprilTag visual system is adopted for real-time localization of the robots. The control procedures based on the localization results are implemented in MATLAB in a VMware ESXi virtual machine, which is equipped with an Intel(R) Xeon(R) Gold 5220R CPU with a 2.20 GHz processor and 16 GB RAM. All experiments are conducted on a 5 m × 3 m platform, and two 17.5 cm × 17.5 cm × 20 cm omni-directional mobile cars are used. We simulate a carrying scenario. A moving agent equipped with a blue light strip transports building blocks from different locations to the green box under LQR optimal control, while the car with red lights acts as an external attacker that observes and records the trajectory of the blue one in order to reconstruct its optimization problem, thereby inferring its future states or imitating its behavior. We have the system state x_k = (x_k^x x_k^y)^T and input vector u_k = (u_k^x u_k^y)^T, where the superscript indicates the horizontal or vertical direction. The control problem of the moving agent is described as 𝐏_0 with A =C= I_2, B=0.2 I_2 and N=15, H=5 I_2, Q=0.1 I_2, R=0.5 I_2. Set x_T = (3000,2000)^T as the target state. During the data collection period, the two kinds of conditions described in Fig. <ref> are considered. Notice that in scenario 1, R_m recalculates its trajectory after being disturbed. Therefore, we can leverage this mechanism and actively apply external inputs to collect multiple different trajectories. Referring to Fig. <ref>(a)-(d), the moving agent starts from three initial points, [1430, 1457], [2196,1185] and [1738,2389] in sequence, and drives to the set target, which provides three mutually non-parallel trajectory observations. The other situation is shown in Fig. <ref>(e)-(h), which leverages the attacker as an external stimulus and makes the agent re-plan from its current state, leading to two different trajectories. Therefore, we take four trajectories as observation data and run our algorithm. We obtain x̂_T=(2973.3, 1989)^T, and the estimation results for the objective function parameters are

Ĥ = [ 49.0753 -2.1655; -2.1655 48.6586 ], Q̂ = [1.0025 -0.0060; -0.0060 1.0143 ], R̂ = [4.8863 -0.2250; -0.2250 4.8459 ].
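The estimates above are consistent with the true weights H=5 I_2, Q=0.1 I_2, R=0.5 I_2 up to a common scalar (α ≈ 10), as the scalar-ambiguity result of Theorem <ref> predicts. A short sketch of the scale-aware error metric err used in the simulation section makes this check concrete; the least-squares fit of α is our addition.

import numpy as np

def estimation_error(est, true):
    """Scale-aware Frobenius error
    err = ||[H^ Q^ R^] - alpha [H Q R]||_F / ||alpha [H Q R]||_F,
    with alpha fitted by least squares, reflecting that the weights are
    identifiable only up to a common positive scalar."""
    E, T = np.hstack(est), np.hstack(true)
    alpha = np.sum(E * T) / np.sum(T * T)
    return alpha, np.linalg.norm(E - alpha * T) / np.linalg.norm(alpha * T)

H_hat = np.array([[49.0753, -2.1655], [-2.1655, 48.6586]])
Q_hat = np.array([[1.0025, -0.0060], [-0.0060, 1.0143]])
R_hat = np.array([[4.8863, -0.2250], [-0.2250, 4.8459]])
alpha, err = estimation_error(
    [H_hat, Q_hat, R_hat],
    [5 * np.eye(2), 0.1 * np.eye(2), 0.5 * np.eye(2)])
print(f"alpha ~ {alpha:.2f}, relative error ~ {err:.3f}")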
When the attacker successfully reconstructs the control optimization problem with the estimates of all the parameters, we demonstrate that, starting from an arbitrary initial point, the attacker can mimic the trajectories of the agent under different motions and accurately predict its future states with Algorithm <ref>, as shown in Fig. <ref> and Fig. <ref>, respectively. All experiments demonstrate the effectiveness of the proposed reconstruction algorithm. More tests, including variable-speed and curved motions, can be found in the video https://youtu.be/-rUWd9k3-mg.

§ CONCLUSION

This paper proposes an algorithm for learning the control law of a moving agent under LQR control via optimization problem reconstruction based on observations. Assuming that the linear dynamical system and the quadratic form of the objective function are known, we identify the parameters of the objective function based on inverse optimal control under different settings, and estimate the control horizon with a binary search method. Finally, we reconstruct the control optimization problem of the agent and calculate its solution as the learned control law. We apply our algorithm to predict future inputs and conduct extensive simulations and hardware experiments to demonstrate its efficiency. Future work may consider nonlinearly modeled agents and the identification of more complex objective functions with constraints and obstacles in the environment.

§ PROOF OF LEMMA <REF>

Apply the problem constraints to the new parameters H',R'. We obtain (λ_i^j)' = αλ_i^j for all i,j and

(x^j_i+1)'= A (x^j_i)' - B(R')^-1B^T (λ^j_i+1)' = A (x^j_i)' - BR^-1 B^T λ_i+1^j.

Comparing with x^j_i+1 = A x^j_i - BR^-1 B^T λ_i+1^j, if (x_0^j)' = x_0^j, it is easy to see that H',R',x_1:N^j,(λ_1:N^j)' are also solutions to the problem, attaining the same objective function value as the real parameters H,R.

§ PROOF OF THEOREM <REF>

We now show that if A^c_k and A^c_k' defined in the theorem are equal for all k, then R=R'. We prove this by contradiction. Suppose the two corresponding positive definite matrices R,R' are different, and define

Δ R = R'-R ≠ 0,

where Δ R is also a symmetric matrix. Then there are two sets of matrices P_0:N, K_0:N-1 and P_0:N', K_0:N-1' satisfying the iteration equations:

K_k =(R + B^T P_k+1 B)^-1 B^T P_k+1 A, P_k =K_k^TR K_k + A^c_k^T P_k+1 A^c_k,

with P_N = H=I, respectively. Since A^c_k = A^c_k' for all k, we have

A-B K_k= A-BK_k' ⇔ B K_k = BK_k' ⇔ B^TB K_k = B^T B K_k'.

Note that B has full column rank, so B^TB is invertible. Therefore, we derive K_k = K_k' directly. According to (<ref>), it follows that

(R + B^T P_k+1 B) K_k= B^T P_k+1 A, R K_k= B^T P_k+1 A^c_k.

The same equation holds for R' and P_k+1', which is written as

(R + Δ R) K_k= B^T (P_k+1 + Δ P_k+1) A^c_k, Δ R K_k= B^T Δ P_k+1 A^c_k,

where Δ P_k = P'_k-P_k. Similarly, for equation (<ref>) we have

P_k + Δ P_k=K_k^T (R + Δ R) K_k + A^c_k^T (P_k+1 +Δ P_k+1) A^c_k, Δ P_k= (K_k^T B^T + A^c_k^T) Δ P_k+1 A^c_k.

Since P_N = P_N' = I, we have Δ P_N = 0. Combining this with (<ref>), we obtain Δ P_k = 0 and P_k = P_k' for all k. Thus (<ref>) becomes

Δ R K_k = 0, k = 0,1,… , N-1.

Note that I≻ 0 and R≻ 0; hence by (<ref>) we obtain P_k ≻ 0, so P_k is invertible. Therefore, since P_k and A are invertible matrices, from (<ref>) we derive rank(K_k) = rank(B^T) = m. Thus K_k has full row rank and Δ R =0, which contradicts the assumption. The proof is done.

§ PROOF OF THEOREM <REF>

Denote z_i = [ x_i^T λ_i^T ]^T, i = 1, …,N.
Then the constraints for each step in Problem <ref> are written as

[I B R^-1 B^T; 0 A^T ]_E z_i+1 =[ A 0; 0 I ]_F z_i.

Combining all the constraints into one matrix equation yields

[E F; -F E; ⋱ ⋱; -F E ]_ℱ(R) [ z_1; z_2; ⋮; z_N ]_Z = [ A; 0; ⋮; 0 ]_A x_0,

where E = [I B R^-1 B^T; 0 0 ] and F = [0 0; -I I ]. Note that ℱ(R) is an invertible matrix. Then we have

∑_i = 1^N ‖ y_i - x^*_i ‖^2 = ‖ Y - G_X Z ‖^2 = ‖ Y - G_X ℱ(R)^-1A x_0 ‖^2,

where G_X = I_N-1⊗ [I_n, 0_n]. The subsequent proof is similar to that of Theorem 4.1 in <cit.>: replacing ℱ(Q) in <cit.> with ℱ(R) in (<ref>), we obtain R̂ → R as M → ∞, where R̂ is the solution of Problem <ref> and R is the true parameter of the forward problem.

§ PROOF OF THEOREM <REF>

This proof shows that matrix triples H,Q,R and H',Q',R' obtained through the iteration equations (<ref>) with the same sequence K_0:N-1 satisfy a scalar multiple relationship under certain linear independence conditions. From (<ref>) we have

A_k^c= A-BK_k = A- B (R + B^T P_k+1 B)^-1 B^T P_k+1 A = (I+BR^-1B^T P_k+1)^-1 A,

where we used the Sherman–Morrison formula. Since K_0:N-1 and K'_0:N-1 are the same, the A_k^c's corresponding to K_k and K'_k coincide, and we have

R^-1B^T P_k+1 = R'^-1B^T P'_k+1

for all k, where we used the facts that A is invertible and B has full column rank. For P_k, we have

P_k-1 = A^TP_kA - A^T P_k B(R+ B^T P_kB)^-1 B^TP_kA+Q=A^T P_k A_k-1^c +Q.

Starting from k=N with P_N=H, P'_N=H', we substitute P_k into (<ref>) and obtain the following N equations:

R^-1B^T H = R'^-1B^T H',
R^-1B^T (Q+A^T H A_N-1^c)=R'^-1B^T (Q'+A^T H' A_N-1^c),
R^-1B^T (Q+A^T Q A_N-2^c + (A^T)^2 H A_N-1^c A_N-2^c)= R'^-1B^T (Q'+A^T Q' A_N-2^c + (A^T)^2 H' A_N-1^c A_N-2^c),
⋯

In particular, for k=i we have R^-1B^T P_i = R'^-1B^T P'_i, where

P_i = Q+ Σ_j=1^N-i (A^T)^j Q Π_r=i^i+j-1 A_r^c + (A^T)^N-i H Π_r=i^N-1 A_r^c.

We subtract the equation for k=i+1 from the equation for k=i and write the difference in the following matrix form:

R^-1ℬ𝒜_i QH_i 𝒜^c_i = R'^-1ℬ𝒜_i Q'H'_i 𝒜^c_i,

where ℬ𝒜_i = B^T [A^T, ⋯, (A^T)^N-i-1, (A^T)^N-i-1, (A^T)^N-i],

𝒜^c_i = [(A^c_i-A^c_i+1)^T, ⋯, (Π_r=0^p-1A^c_i+r-Π_r=1^pA^c_i+r)^T,⋯, (Π_r=0^N-1A^c_i+r)^T, -(Π_r=1^NA^c_i+r)^T, (Π_r=0^NA^c_i+r)^T]^T,

QH_i = [ I_N-i-1⊗ Q 0; 0 I_2⊗ H ].

Q'H'_i is obtained by replacing Q,H in QH_i with Q',H'. The index p in the block matrix 𝒜^c_i refers to the p-th block; i.e., for the first block, p=1 and it can be directly written as (A^c_i-A^c_i+1)^T. Taking the trace of both sides of equation (<ref>), we have

tr(R^-1ℬ𝒜_i QH_i 𝒜^c_i) =vec(R^-1)^T (𝒜^c_i^T ⊗ℬ𝒜_i)_ℰ_i vec(QH_i).

Notice that there are many zero elements and repeated blocks in QH_i and R^-1. Therefore, the trace equation (<ref>) can be simplified through the following three steps:

i) Let 𝒞_1 be the set of indices of all non-zero elements in vec(QH_i). Collect all the columns of ℰ_i with indices in 𝒞_1 and form 𝒫^1_i(ℰ_i)= ℰ_i(:,𝒞_1). Then we have

vec(R^-1)^T 𝒫^1_i(ℰ_i) [1_N-i-1^T ⊗ vec(Q)^T, vec(H)^T,vec(H)^T]^T.

ii) Since vec(Q) and vec(H) appear multiple times, we define 𝒫^2_i(ℰ_i), where

𝒫^2_i(ℰ_i)(:,1:n)=Σ_t=1^N-i-1𝒫^1_i(ℰ_i)(:,1+m(t-1):mt), 𝒫^2_i(ℰ_i)(:,n+1:2n)=Σ_t=N-i^N-i+1𝒫^1_i(ℰ_i)(:,1+m(t-1):mt).

iii) Since H,Q ∈ℝ^n×n and R ∈ℝ^m×m are symmetric, we only need the upper triangular parts of vec(H), vec(Q), vec(R^-1), respectively, to uniquely determine them. We can further simplify (<ref>) as

vec(R^-1)^T 𝒫_i(ℰ_i) [ vec(Q); vec(H) ] = ([ vec(Q)^T, vec(H)^T ]⊗ vec(R^-1)^T) vec(𝒫_i(ℰ_i))= ([ vec(Q')^T, vec(H')^T ]⊗ vec(R'^-1)^T) vec(𝒫_i(ℰ_i)),

where 𝒫_i(ℰ_i) is an m(m+1)/2 × n(n+1) matrix.
If there exist at least mn(n+1)(m+1)/2 linearly independent vectors vec(𝒫_i(ℰ_i)) over the horizon k=0,…, N-1, then we can stack them into a full-row-rank matrix and obtain

vec(Q')=α· vec(Q), vec(H') =α· vec(H), vec(R'^-1) =1/α· vec(R^-1).

Thus, we have H' = α H, Q' = α Q, R'=α R. Note that if the matrices H,Q,R are all diagonal, which is a common setting in practical scenarios, we only need 2nm linearly independent vectors vec(𝒫_i(ℰ_i)).

§ PROOF OF LEMMA <REF>

In this proof we show how to obtain equation (<ref>) from the iterations of P_k, K_k. Transform formula (<ref>):

(<ref>) ⇔ (R + B^T P_k+1 B) K_k = B^T P_k+1 A ⇔ R K_k = B^T P_k+1 (A - B K_k) = B^T P_k+1 A^c_k.

Since P_N = H, for k=N-1 we have

R K_N-1 = B^T H A^c_N-1 ⇔ a_1 R b_1 = c_1 H d_1,

where a_1 = I_2, b_1 = K_N-1, c_1 = B^T and d_1 = A^c_N-1. For k=N-2 we have R K_N-2 = B^T P_N-1 A^c_N-2, and substituting P_N-1 with (<ref>),

R K_N-2 = B^T (K_N-1^T R K_N-1 + A^c_N-1^T H A^c_N-1+Q) A^c_N-2 ⇔ [ I_2 - B^T K_N-1^T ]_a_2 (I_2 ⊗ R) [ K_N-2; K_N-1 A^c_N-2 ]_b_2 = [ B^T A^c_N-1^T B^T ]_c_2 [ H; Q ][ A^c_N-1 A^c_N-2; A^c_N-2 ]_d_2.

For k = N-3 we have

R K_N-3 = B^T P_N-2 A^c_N-3 = B^T (K_N-2^T R K_N-2 + A^c_N-2^T P_N-1 A^c_N-2+Q) A^c_N-3 = B^T (K_N-2^T R K_N-2 + A^c_N-2^T (K_N-1^T R K_N-1 + A^c_N-1^T H A^c_N-1+Q) A^c_N-2+Q) A^c_N-3,

which can also be transformed into the form a_3 (I_3 ⊗ R) b_3 = c_3 [ H; I_2 ⊗ Q ] d_3. Therefore, according to the above derivation, we conclude equation (<ref>) for i=1,…,N, where the parameters a_i,b_i,c_i,d_i are calculated by (<ref>).

§ PROOF OF THE OPTIMALITY PRINCIPLE

We now prove the principle of optimality for dynamic programming: suppose x_0 is the initial state and p_0,N^* is the optimal policy; then for any k∈{1,2,…,N-1}, the sub-policy p_k,N^* is also optimal for the subprocess containing the last N-k steps with initial point x_k = T_k-1(x_k-1^*, u_k-1^*). We prove this by contradiction. If p_k,N^* were not the optimal policy for the subsequent subprocess, we would have

V_k,N(x_k^*, p_k,N^*) > min_p_k,N∈ P_k,N(x_k^*) V_k,N(x_k^*, p_k,N).

Therefore, we obtain

V_0,N(x_0^*, p_0,N^*) = V_0,k-1(x_0^*, p_0,k-1^*) + V_k,N(x_k^*, p_k,N^*)> V_0,k-1(x_0^*, p_0,k-1^*) + min_p_k,N∈ P_k,N(x_k^*) V_k,N(x_k^*, p_k,N) ⩾ min_p_0,k-1∈ P_0,k-1(x_0){ V_0,k-1(x_0^*, p_0,k-1)+ min_p_k,N∈ P_k,N(x_k^*) V_k,N(x_k^*, p_k,N) },

which contradicts the optimality of p_0,N^*, so the theorem holds.

§ PROOF OF THEOREM <REF>

a) We first study the convergence of the {P_k} sequence. The following lemma provides a convergence property of the algebraic Riccati equation. There exist a constant γ >1 and two non-negative constants c_1 = c_1(γ, P_N, P_N-1) and c_2 = c_2(γ, P_N, P_N-1) such that for any fixed finite control horizon N the following inequality holds for all k=0,1,…,N-1:

β̲_k P_k+1⩽ P_k⩽β̄_k P_k+1,

where

β̲_k := γ^k (γ-1)/(γ^k(γ-1)+c_1), β̄_k := (γ^k (γ-1) + c_2)/(γ^k(γ-1)).

See the proof of Proposition 7 in <cit.>. Lemma <ref> reveals both monotonicity and convergence properties of the {P_k} sequence. Denote the positive definite matrix Φ=P_N^-1/2 P_N-1 P_N^-1/2 and let

c_1 = max{0, (1/λ_min(Φ)-1)(γ -1)}, c_2 = max{0, (λ_max(Φ)-1)(γ -1)}.

The corresponding parameter γ := 1/(1-σ) is calculated as in formula (16) of <cit.>. However, one basic assumption of Lemma <ref> is Q>0, while our problem may have Q=0 (in Sec. <ref>-B). If Q=0, their calculation of the parameter γ fails, and we provide the following selection method for γ instead. Suppose X is an n × m matrix. We have X^T (A-B)X ⩾ 0 ⇒ X^T AX ⩾ X^TBX. Let X=B^-1/2.
From A⩾ B we derive

B^-1/2 A B^-1/2⩾ B^-1/2 B B^-1/2 = I.

Set Y = B^-1/2 A B^-1/2. If Y ⩾ I, then Y^-1/2 Y Y^-1/2⩾ Y^-1/2 I Y^-1/2 ⇒ I ⩾ Y^-1. Therefore, we have I ⩾ B^1/2 A^-1 B^1/2 ⇒ B^-1/2 I B^-1/2⩾ A^-1 ⇒ B^-1⩾ A^-1. Since the sequence {P_k} is monotonic, we discuss two cases: increasing and decreasing. First, we have

P_k - (A - B K_k )^T P_k+1 (A - B K_k) =K_k^TR K_k= A^T P_k+1^T B (R + B^T P_k+1 B)^-T R (R + B^T P_k+1 B)^-1 B^T P_k+1 A.

i) If the sequence {P_k} is monotonically decreasing, then P_N ⩾ P_k ⩾ P^* for all k. Thus we have

P_k - (A - B K_k )^T P_k+1 (A - B K_k) ⩾ A^TP^*^T B (R + B^T P_N B)^-T R (R + B^T P_N B)^-1 B^T P^* A := Q_a >0.

ii) If the sequence {P_k} is monotonically increasing, then P_N ⩽ P_k ⩽ P^* for all k. Thus we have

P_k - (A - B K_k )^T P_k+1 (A - B K_k) ⩾ A^TP_N^T B (R + B^T P^* B)^-T R (R + B^T P^* B)^-1 B^T P_N A := Q_b >0.

Therefore, let

Γ := (1/κ)· Q_a (or Q_b), σ = λ_min(Γ).

Selecting γ := 1/(1-σ), we have γ>1.

b) We then establish the convergence property of {K_k} with Lemma <ref>. Similarly, we discuss the two monotonic situations separately:

i) If P_N ⩽ P_N-1 holds, we have c_1 = 0, c_2 ⩾ 0 and P_k ⩽ P_k-1⩽β̄_k P_k+1 for all k, which yields

P_k-P_k+1 ⩽β̄_k P_k+1 -P_k+1⩽ (β̄_k-1) P_k+1⩽ (β̄_k-1) P_N.

Then ‖ P_k-P_k+1‖⩽ (β̄_k-1) ‖ P_N ‖. There always exist N_a ∈ℕ_+ and ϵ > 0 such that for any N>N_a we obtain ‖ P_1-P_0 ‖⩽ϵ. Hence

‖ K_0-K_1 ‖ = ‖ (R + B^T P_1 B)^-1 B^T P_1 A - (R + B^T P_2 B)^-1 B^T P_2 A ‖⩽‖ (R + B^T P_2 B)^-1 B^T (P_1-P_2) A ‖⩽‖ (R + B^T P_N B)^-1 B^T‖·‖ P_1-P_2 ‖·‖ A ‖ =: η_a.

ii) If P_N ⩾ P_N-1 holds, we have c_1⩾ 0, c_2 = 0 and β̲_k P_k+1⩽ P_k ⩽ P_k+1 for all k, which yields

P_k+1-P_k⩽ P_k+1 - β̲_k P_k+1⩽ (1-β̲_k) P_k+1⩽ (1-β̲_k) P_N.

Then ‖ P_k+1-P_k ‖⩽ (1-β̲_k) ‖ P_N ‖. There always exist N_b ∈ℕ_+ and ϵ > 0 such that for any N>N_b we obtain ‖ P_1-P_0 ‖⩽ϵ. Hence

‖ K_1-K_0 ‖ = ‖ (R + B^T P_2 B)^-1 B^T P_2 A - (R + B^T P_1 B)^-1 B^T P_1 A ‖⩽‖ (R + B^T P_1 B)^-1 B^T (P_2-P_1) A ‖⩽‖ (R + B^T P^* B)^-1 B^T‖·‖ P_2-P_1 ‖·‖ A ‖ =: η_b.

c) Based on the previous discussion, we complete the proof of Theorem <ref>. When N> N̄ = N_a (or N_b), we have

‖μ_0^(N+δ N) - μ_0^(N)‖ = ‖ K_0^(N+δ N) - K_0^(N)‖·‖ x_0 ‖ =‖ K_0^(N+δ N) - K_δ N^(N+δ N)‖·‖ x_0 ‖⩽δ N ‖ K_0^(N+δ N) - K_1^(N+δ N)‖·‖ x_0 ‖⩽δ N ·η_a (or η_b) ·‖ x_0 ‖ =: η.

The proof is done.

Chendi Qu received the B.E. degree from the Department of Automation, Tsinghua University, Beijing, China, in 2021. She is currently working toward the Ph.D. degree in the Department of Automation, Shanghai Jiao Tong University, Shanghai, China. She is a member of the Intelligent Wireless Networks and Cooperative Control group. Her research interests include robotics, security of cyber-physical systems, and distributed optimization and learning in multi-agent networks. Jianping He (SM'19) is an Associate Professor in the Department of Automation at Shanghai Jiao Tong University. He received the Ph.D. degree in control science and engineering from Zhejiang University, Hangzhou, China, in 2013, and was a research fellow in the Department of Electrical and Computer Engineering at the University of Victoria, Canada, from Dec. 2013 to Mar. 2017. His research interests mainly include distributed learning, control and optimization, and security and privacy in network systems. Dr. He serves as an Associate Editor for IEEE Trans. Control of Network Systems, IEEE Open Journal of Vehicular Technology, and KSII Trans. Internet and Information Systems.
He was also a Guest Editor of IEEE TAC, IEEE TII, the International Journal of Robust and Nonlinear Control, etc. He was the winner of the Outstanding Thesis Award of the Chinese Association of Automation in 2015. He received the best paper award at IEEE WCSP'17 and the best conference paper award at IEEE PESGM'17, and was a finalist for the best student paper award at IEEE ICCA'17 and for the best conference paper award at IEEE VTC'20-Fall. Xiaoming Duan is an assistant professor in the Department of Automation at Shanghai Jiao Tong University. He obtained his B.E. degree in Automation from the Beijing Institute of Technology in 2013, his Master's degree in Control Science and Engineering from Zhejiang University in 2016, and his Ph.D. degree in Mechanical Engineering from the University of California at Santa Barbara in 2020. He was a postdoctoral fellow in the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin in 2021. His research interests include robotics, multi-agent systems, and autonomous systems.
http://arxiv.org/abs/2312.16572v1
{ "authors": [ "Chendi Qu", "Jianping He", "Xiaoming Duan" ], "categories": [ "eess.SY", "cs.SY" ], "primary_category": "eess.SY", "published": "20231227133707", "title": "Observation-based Optimal Control Law Learning with LQR Reconstruction" }
Semantic Importance-Aware Based for Multi-User Communication Over MIMO Fading Channels Haotai Liang, Zhicheng Bao, Wannian An, Chen Dong*, Xiaodong Xu, Senior Member, IEEE. Haotai Liang, Zhicheng Bao, and Wannian An are with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China (e-mail: [email protected]; [email protected]; [email protected]). *Chen Dong is the corresponding author and is with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China (e-mail: [email protected]). Xiaodong Xu is with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China, and also with the Department of Broadband Communication, Peng Cheng Laboratory, Shenzhen, Guangdong, China (e-mail: [email protected]). January 14, 2024
================================================================================================================================

Semantic communication, as a novel communication paradigm, has attracted the interest of many scholars, with multi-user, multi-input multi-output (MIMO) scenarios being one of the critical contexts. This paper presents a semantic importance-aware based communication system (SIA-SC) over MIMO Rayleigh fading channels. Combining the inequality of semantic symbols with the equivalent subchannels of MIMO channels obtained by Singular Value Decomposition (SVD), the end-to-end semantic performance is maximized through a new layer mapping method. For multi-user scenarios, a method of semantic interference cancellation is proposed. Furthermore, a new metric, namely semantic information distortion (SID), is established to unify the expression of semantic performance, which is affected by the channel bandwidth ratio (CBR) and the signal-to-noise ratio (SNR). With the help of the proposed metric, we derive performance expressions and the Semantic Outage Probability (SOP) of SIA-SC for Single-User Single-Input Single-Output (SU-SISO), Single-User MIMO (SU-MIMO), Multi-User SISO (MU-SISO) and Multi-User MIMO (MU-MIMO) scenarios. Numerical experiments show that SIA-SC can significantly improve semantic performance across various scenarios. semantic communication, semantic symbols inequality, MIMO fading channels

§ INTRODUCTION

Semantic communication systems are considered a potentially promising next-generation communication paradigm, and they have gained significant attention in recent years.
According to previous work on semantic communication systems, the new paradigm differs from classical communication systems by jointly considering the representation of semantic information at the source and the impact of the channel on the semantics <cit.>. End-to-end semantic communication systems targeted at specific sources have been proposed in the last few years, validating that semantic communication exhibits impressive semantic performance compared to traditional communication, especially under low signal-to-noise ratio (SNR) conditions <cit.>. Because the optimization objective of end-to-end semantic communication systems jointly accounts for the channel's effect on the semantic performance of the source, the cliff-effect issue of traditional communication systems is resolved. In addition, almost all semantic communication systems are designed without the hard-decoding schemes of traditional communication systems. Therefore, the mechanism by which the channel affects the performance of semantic communication systems becomes one of the potential directions that require attention.

Some prior works have noticed and utilized the inequality of semantic symbols to achieve variable-length coding schemes for semantic communication. The authors of <cit.> used a hyperprior network <cit.> to fit the probability distribution of semantic signals and performed entropy estimation for each semantic symbol. The magnitude of the entropy expresses the importance of each semantic symbol. Consequently, it becomes possible to select the semantic symbols to be transmitted based on their importance level, according to the communication rate. In the research conducted by D. Huang et al. <cit.>, the semantic features were classified into various categories, and the quantization level was adjusted individually for each category. B. Zhang et al. <cit.> designed a universal variable-length semantic and channel coding module that can be utilized in different semantic communication systems. By introducing proxy functions, the rate allocation scheme is learned in an end-to-end manner. The papers mentioned above utilized the inequality of semantic symbols and achieved variable-length semantic communication systems tailored to single-antenna setups. This paper considers the organic combination of the inequality of semantic symbols with the multiple parallel subchannels of Multiple-Input Multiple-Output (MIMO) channels to be a necessary and intriguing research direction.

MIMO systems have garnered significant interest following the pioneering works of <cit.> and <cit.>. These systems transmit parallel data streams over a MIMO channel, leading to a linear increase of the Shannon capacity in the minimum of the numbers of transmit and receive antennas. Compared to single-input single-output (SISO) systems, MIMO systems exhibit a substantial boost in spectral efficiency, known as the spatial multiplexing gain. Recently, semantic communication systems over MIMO fading channels have been proposed <cit.>. H. Wu et al. <cit.> proposed a Joint Source and Channel Coding (JSCC) scheme based on a Vision Transformer (ViT) for wireless image transmission through MIMO systems, namely ViT-MIMO. ViT-MIMO can adaptively learn the feature mapping and power allocation based on the source image and channel conditions. S. Yao et al. <cit.> present a novel versatile semantic coded transmission (SCT) scheme over MIMO fading channels named VST-MIMO.
An adaptive spatial multiplexing (ASM) module is designed to guide the rate allocation and stream mapping, coupling the source semantics and channel states. However, these works rely on strong manual assumptions in the rate allocation scheme, while the optimal coding scheme may vary from one transmission task to another. Furthermore, when the number of antennas changes, end-to-end systems over MIMO fading channels experience variations in their optimal coding schemes, necessitating retraining of the entire model, which introduces an additional burden.

Based on the considerations above, we design a layer mapping scheme based on semantic importance, which integrates the inequality of semantic symbols with the parallel subchannels of the MIMO channel. This scheme can be applied to any end-to-end semantic communication system trained with a single antenna, and it requires neither additional modules nor any extra training. Furthermore, SIA-SC is combined with Orthogonal Model Division Multiple Access (O-MDMA) <cit.> to extend it to multi-user broadcasting scenarios. From O-MDMA, it can be inferred that the semantic signals generated by different semantic models do not cause severe mutual interference within the semantic domain. In addition, a semantic interference cancellation method is proposed to improve the semantic performance of multiple users.

To provide a more comprehensive explanation of the gains achieved by SIA-SC, a new metric, namely semantic information distortion (SID), is established to unify the expression of semantic performance <cit.>, which is affected by the channel bandwidth ratio (CBR) <cit.> and the signal-to-noise ratio (SNR). With the help of the proposed metric, we derive performance expressions and the Semantic Outage Probability (SOP) of SIA-SC for Single-User Single-Input Single-Output (SU-SISO), Single-User MIMO (SU-MIMO), Multi-User SISO (MU-SISO) and Multi-User MIMO (MU-MIMO) scenarios.

The following is a summary of the contributions made by this paper:

* A semantic importance-aware communication system over fading MIMO channels (SIA-SC) is proposed, which integrates the inequality of semantic symbols with the parallel subchannels of the MIMO channel. SIA-SC is combined with Orthogonal Model Division Multiple Access (O-MDMA) to extend it to a multi-user transmission system in broadcasting scenarios.

* The performance of a single-antenna semantic communication system based on the entropy model is theoretically analyzed. A unified expression is derived, from the perspectives of the CBR and the SNR, to quantify the system's performance. Based on this unified expression for the single-antenna, single-user scenario (SU-SISO), the performance expressions and Semantic Outage Probability (SOP) of SIA-SC are derived for the Single-User MIMO (SU-MIMO), Multi-User SISO (MU-SISO) and Multi-User MIMO (MU-MIMO) scenarios.

* A large number of numerical and comparative experiments are conducted on SIA-SC. The numerical results demonstrate that our SIA-SC significantly improves the transmission quality compared to the traditional separate source-channel coding scheme using the BPG image compression algorithm with capacity-achieving channel transmission.

The rest of this paper is arranged as follows: The framework of the semantic importance-aware communication system (SIA-SC) is introduced in Section 2.
Section 3 analyzes the performance of SIA-SC, including the definition of the semantic information distortion (SID) and the analysis of SIA-SC for the SU-SISO, SU-MIMO, MU-SISO, and MU-MIMO scenarios. Section 4 provides numerical results and corresponding discussions. Finally, conclusions about our work are drawn in Section 5.

Notation: ℝ^m × n denotes the set of real matrices of size m× n. Variables in bold uppercase letters represent matrices, and variables in bold lowercase letters represent vectors. In particular, 𝐒_i[m,n] represents the element with index (m, n) in the i^th matrix and 𝐬_i[m] represents the element with index m in the i^th vector. (·)^H denotes the Hermitian transpose.

§ SYSTEM MODEL

This section introduces the semantic importance-aware semantic communication system (SIA-SC) over the fading MIMO channel. Then, SIA-SC for multi-user degraded broadcast scenarios is described in detail.

§.§ The framework of SIA-SC

A semantic importance-aware communication system is depicted in Fig. <ref>. Consider an N_t × N_r MIMO communication system, where a transmitter with N_t antennas aims to send an image 𝐗∈ℝ^W× H× C to a receiver with N_r antennas. The source 𝐗 is transformed into the latent representation 𝐋 and the semantic symbols 𝐒∈ℝ^N×1 through the semantic encoder and the deep JSCC encoder <cit.>:

𝐋=g_a(𝐗; θ_g_a), 𝐒=f_e(𝐋; θ_f_e),

where g_a represents the semantic encoder with training parameters θ_g_a and f_e the deep JSCC encoder with training parameters θ_f_e. Before transmitting the semantic symbols, an entropy model is applied to estimate the information entropy of the symbols, namely the semantic importance matrix 𝐖:

𝐖=p_e(𝐋; θ_p_e),

where p_e denotes the entropy estimation function <cit.> with training parameters θ_p_e. Details about the entropy estimation function p_e are given in our previous paper <cit.>. The values of 𝐖 can be used to generate a 0-1 mask 𝐌 that controls the semantic rate. The semantic symbols 𝐒^SM∈ℝ^K×1 to be transmitted can be represented as

𝐒^SM= dropout(𝐒⊙𝐌),

where ⊙ represents the element-wise product, and the zero-valued symbols are discarded by the dropout function to meet the given constraints. The channel bandwidth ratio (CBR) <cit.> can be denoted as R≜ K/(W× H× C).

Different from the semantic communication systems over fading MIMO channels in <cit.>, MIMO channels are not introduced into the end-to-end training. Instead, Singular Value Decomposition (SVD) precoding and postcoding schemes are used to decompose the MIMO channel into multiple equivalent subchannels. Specifically, given the Channel State Information (CSI), the MIMO channel matrix 𝐇∈ℂ^N_t × N_r is subjected to the SVD

𝐇= 𝐔Σ𝐕^H,

where 𝐔∈ℂ^N_t × N_t and 𝐕∈ℂ^N_r × N_r are unitary matrices, and Σ∈ℝ^N_t × N_r is a diagonal matrix with its singular values arranged in descending order. In our derivation, we assume that N_t=N_r=r, which physically means that the number of antennas at the transmitter is equal to that at the receiver. Σ is denoted by diag(σ_1, σ_2, ..., σ_r), where σ_1≥σ_2≥ ...≥σ_r. Through the SVD, the MIMO channel matrix can be effectively represented as multiple subchannels with different channel gains. By allocating semantic symbols to these subchannels according to their importance for semantic performance, it is possible to maximize the semantic performance. As shown in the left part of Fig. <ref>, the importance levels of the symbols in the semantic symbols 𝐒^SM are initially in random order.
The transmitter first sorts 𝐒^SM according to the values of the entropy matrix 𝐖 and then places the symbols onto the corresponding antennas, which is called semantic layer mapping. The mapped transmission symbols 𝐒^map obtained through semantic layer mapping can be represented as

𝐒^map = Smap(𝐒^SM;𝐖),

where 𝐒^map∈ℝ^N_t × M and M=K/N_t denotes the number of symbols in the semantic layer of each antenna. With SVD precoding, 𝐒^map is transformed into 𝐒̃^map:

𝐒̃^map= 𝐕𝐒^map.

𝐒̃^map is transmitted to the receiver through the fading MIMO channel, and the received signal can be represented as

𝐘= 𝐇𝐒̃^map+𝐧,

where 𝐇 follows an independent and identically distributed (i.i.d.) complex Gaussian distribution with zero mean and variance σ_h^2, i.e., 𝐇[i, j]∼𝒞𝒩(0, σ_h^2), and 𝐧 follows an i.i.d. complex Gaussian distribution with zero mean and variance σ_n^2, i.e., 𝐧[i, j]∼𝒞𝒩(0, σ_n^2). At the receiver, premultiplying by the matrix 𝐔^H gives

Ỹ= Σ𝐒^map+ñ,

where ñ still follows an i.i.d. complex Gaussian distribution. As shown in the right part of Fig. <ref>, the receiver uses 𝐖 to remap the received semantic symbols Ỹ back to the original order, obtaining Ŝ^SM:

Ŝ^SM = Sremap(Ỹ;𝐖).

Finally, the image 𝐗̂ and the latent representation 𝐋̂ are restored through the deep JSCC decoder f_d and the semantic decoder g_s:

𝐋̂=f_d(Ŝ^SM; θ_f_d), 𝐗̂=g_s(𝐋̂; θ_g_s).

§.§ The framework of O-MDMA-based SIA-SC for multi-users

As shown in Fig. <ref>, K users use their respective SIA-SC encoders g_a(·;θ_g_a^k), f_e(·;θ_f_e^k) to extract semantic transmission symbols S_k and perform layer mapping according to their respective semantic importance matrices 𝐖_k to obtain the sorted semantic symbols 𝐒^map_k. From O-MDMA <cit.>, it can be inferred that the semantic signals generated by different semantic models do not cause severe mutual interference within the semantic domain. Owing to the interference cancellation capability of DeepJSCC, the desired semantic performance can be recovered effectively. Similarly to the digital domain, the semantic symbols of the different users are superposed with unequal power:

𝐒̃^map_super=∑_k=1^K√(P_k)𝐕_k𝐒^map_k,

where P_k (P_1>...>P_k>...>P_K) is the power allocated to the k^th user, and 𝐕_k is the right singular matrix of the transmission channel of the k^th user. The semantic signal received by user k can be represented as

𝐘_k= 𝐇_k𝐒̃^map_super+𝐧_k.

First, the received semantic signal of the user with the highest allocated power is decoded, as in Non-Orthogonal Multiple Access <cit.>:

Ỹ_1 = 𝐔_1^H(𝐇_1𝐒̃^map_super+𝐧_1) =√(P_1)Σ_1𝐒^map_1+∑_k=2^K√(P_k)Σ_1𝐕_1^H𝐕_k𝐒^map_k + ñ.

Then, the error caused on the semantic performance by the interference of the last two terms is eliminated as much as possible by the capability of DeepJSCC:

𝐗̂_1=g_s(f_d(Sremap(Ỹ_1;𝐖_1);θ_f_d^1);θ_g_s^1).

User 2 performs the same operations as User 1 to obtain 𝐗̂_1 and eliminate User 1's interference:

Ŝ^map_1=Smap(f_e(g_a(𝐗̂_1;θ_g_a^1);θ_f_e^1);𝐖_1).

Assuming the pilot signals are shared among the users, User 2 can obtain the signal with User 1's interference canceled, 𝐒̃^map_super,2:

𝐒̃^map_super,2=∑_k=2^K√(P_k)𝐕_k𝐒^map_k + Δ_1,

where Δ_1=|√(P_1)𝐕_1(𝐒^map_1-Ŝ^map_1)| is the residual of the imperfect cancellation. Similarly to User 1, the post-processed received signal of User 2 can be represented as

Ỹ_2 = 𝐔_2^H(𝐇_2𝐒̃^map_super, 2+𝐧_2) =√(P_2)Σ_2𝐒^map_2+∑_k=3^K√(P_k)Σ_2𝐕_2^H𝐕_k𝐒^map_k + Δ̃_1 + ñ,

where Δ̃_1 = Σ_2𝐕_2^HΔ_1. In general, for User i,

Ỹ_i=√(P_i)Σ_i𝐒^map_i+∑_k=i+1^K√(P_k)Σ_i𝐕_i^H𝐕_k𝐒^map_k + Δ̃_i-1 + ñ.
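To make the superposition coding and the semantic SIC flow of this section concrete, a minimal two-user NumPy sketch follows. The neural modules g_a, f_e, f_d, g_s cannot be reproduced here, so a simple per-subchannel equalization stands in for the DeepJSCC decoding, and the re-encoding in the SIC step is the identity on the estimate; antenna counts, powers, and the noise level are illustrative.

import numpy as np

rng = np.random.default_rng(2)
r, M = 4, 64                     # antennas per user, symbols per spatial layer
P = [1.0, 0.25]                  # P_1 > P_2
sigma_n = 0.05

def rayleigh(r):
    return (rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))) / np.sqrt(2)

H, U, sig, Vh = [], [], [], []
for k in range(2):               # per-user channels and their SVDs
    Hk = rayleigh(r)
    Uk, sk, Vhk = np.linalg.svd(Hk)
    H.append(Hk); U.append(Uk); sig.append(sk); Vh.append(Vhk)

S = [rng.standard_normal((r, M)).astype(complex) for _ in range(2)]  # S_k^map

# Superposition coding: S_super = sum_k sqrt(P_k) V_k S_k.
S_super = sum(np.sqrt(P[k]) * Vh[k].conj().T @ S[k] for k in range(2))

def noise():
    return sigma_n * (rng.standard_normal((r, M)) + 1j * rng.standard_normal((r, M))) / np.sqrt(2)

# User 1 decodes directly, treating user 2's signal as interference.
Y1 = U[0].conj().T @ (H[0] @ S_super + noise())
S1_hat = Y1 / (np.sqrt(P[0]) * sig[0][:, None])   # stand-in for DeepJSCC decoding

# User 2: semantic SIC. Re-encode user 1's estimate (identity stand-in for
# g_a / f_e / Smap) and subtract its contribution, with H_2 assumed known.
Y2_raw = H[1] @ S_super + noise()
Y2 = U[1].conj().T @ (Y2_raw - H[1] @ (np.sqrt(P[0]) * Vh[0].conj().T @ S1_hat))
S2_hat = Y2 / (np.sqrt(P[1]) * sig[1][:, None])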
§ SEMANTIC SYSTEM ANALYSIS

The key element of the SIA-SC system introduced in the previous section is the inequality of semantic symbols, meaning that each semantic symbol contributes differently to the semantic performance. Existing performance analyses for semantic communication <cit.> have not considered the inequality of semantic symbols. This section first introduces the semantic information distortion (SID) and the Semantic Outage Probability (SOP), providing a unified expression for the impact of SNR and CBR on semantic performance. Subsequently, we further analyze the performance in the MU-SISO, SU-MIMO, and MU-MIMO scenarios.

§.§ Semantic Information Distortion and Semantic Outage Probability

X. Mu et al. <cit.> first proposed to employ the data regression method and approximate the semantic performance with respect to the received SNR. However, semantic systems with different semantic coding lengths (or, equivalently, different CBRs) require functions with different parameters to express their performance curves. It is easy to see that a key difference between semantic communication and traditional communication lies in the fact that the receiver no longer performs hard/soft decisions; instead, it directly feeds the contaminated transmission symbols into the semantic decoder for semantic recovery. Hence, the indicative metric influenced by the SNR is no longer the symbol error rate but rather the degree of distortion of the transmitted symbols. Assuming that the retained semantic symbols are transmitted to the receiver through an AWGN channel, 𝐬̂[i] = 𝐬[i] + n for 1⩽ i⩽ K, we have

SID(γ)=∑_i=1^K𝐰[i]𝔼[|𝐬̂[i]-𝐬[i]|^2]=∑_i=1^K𝐰[i]σ_n^2, ∑_i=1^N𝐰[i]=1, 𝔼[𝐬^*𝐬]≤ P,

where γ represents the SNR of the received signal and 𝐰[i] (𝐰[1]>...>𝐰[i]>...>𝐰[K]>...>𝐰[N]) represents the degree of importance of the i^th semantic symbol 𝐬[i]. The quantity ∑_i=1^N𝐰[i]|𝐬[i]-𝐬̂[i]|^2 is named the SID.

Furthermore, current semantic communication systems with adaptive rate control achieve their adaptation by setting the relatively less important semantic symbols to zero, such as Nonlinear Transform Source-Channel Coding for Semantic Communications (NTSCC) <cit.>, Wireless Model Division Video Semantic Communication (MDVSC) <cit.>, and the SIA-SC proposed in Section 2 of this paper. For such end-to-end semantic communication systems, controlling the CBR is typically achieved by zeroing out semantic symbols as in Eq. (<ref>), which can be equivalently expressed as

SID(CBR)=∑_i=K+1^N𝐰[i]|𝐬[i]-0|^2=∑_i=K+1^N𝐰[i]|𝐬[i]|^2.

We have separately simulated the influence of SNR and CBR on the SID; their impact on semantic performance is depicted in Fig. <ref> using MDVSC. As shown in Fig. <ref>(a)(b), as SNR and CBR decrease, the SID increases and the semantic performance decreases. More importantly, as shown in Fig. <ref>(c), the impact on semantic performance (MS-SSIM) of the SID generated by SNR and by CBR is consistent. Therefore, the performance expressions of <cit.> can be rewritten as follows:

ξ(CBR, γ)=ξ(SID)=ξ(∑_i=1^N𝐰[i]|𝐬[i]-𝐬̂[i]|^2).

The estimation expression for the SID in the SU-SISO scenario can be obtained from Eq. (<ref>) and Eq. (<ref>) as follows:

SID_SU-SISO(γ, CBR)=∑_i=1^K𝐰[i]σ_n^2 + ∑_i=K+1^N𝐰[i]|𝐬[i]|^2.

With the help of the defined SID, the SOP can be defined as the probability of exceeding the distortion constraint SID_th, which is expressed as <cit.>

P{SID(γ, CBR)>SID_th}.

When considering an SU-SISO system, according to Eq. (<ref>) and Eq.
(<ref>), where only the Gaussian noise 𝐧 is a random variable, the semantic outage probability is equivalent to

P_SU-SISO =P{∑_i=1^K𝐰[i]𝐧^2[i]>SID_th-∑_i=K+1^N𝐰[i]|𝐬[i]|^2},

where ∑_i=1^K𝐰[i]𝐧^2[i] is a weighted sum of squares of Gaussian variables, following a generalized chi-squared distribution for which no closed-form expression is currently available. Fortunately, since K is large, i.e., the number of degrees of freedom of the distribution is large, it is well approximated by a Gaussian distribution according to the Central Limit Theorem:

∑_i=1^K𝐰[i]𝐧^2[i]∼𝒩(σ_n^2∑_i=1^K𝐰[i], 2σ_n^4∑_i=1^K𝐰^2[i]).

By utilizing the cumulative distribution function of the Gaussian distribution, the SOP can be calculated.

§.§ SU-MIMO scenarios

The SIA-SC MIMO system described in Section 2 uses SVD precoding and postcoding. According to Eq. (<ref>), the gains of the subchannels are (σ_1, σ_2, ..., σ_r), respectively. The SNR of the j^th equivalent subchannel can be expressed as

γ_j=σ_j^2P/σ_n^2.

The expected value of the SID generated by the channel noise can be represented as

SID(γ)=∑_j=1^r∑_i=1^M𝐖[i,j]σ_n^2/(σ_j^2P),

where M denotes the number of semantic symbols in each subchannel and M× r=K. The impact of the CBR on semantic recovery is the same as in the SU-SISO scenario, so the estimation expression for the SU-MIMO scenario can be written as

SID_SU-MIMO(γ, CBR) =∑_j=1^r∑_i=1^M𝐖[i,j]σ_n^2/(σ_j^2P)+∑_i=K+1^N𝐰[i]|𝐬[i]|^2.

Assuming a time-invariant channel during the transmission of an image and relying on Channel State Information at the Transmitter (CSIT), the SOP for the SU-MIMO scenario can be expressed, similarly to Eq. (<ref>), as

P_SU-MIMO =P{∑_j=1^r∑_i=1^M𝐖[i,j]𝐧^2[i,j]/(σ_j^2P)>SID_th-∑_i=K+1^N𝐰[i]|𝐬[i]|^2}.

Similarly to Eq. (<ref>), ∑_j=1^r∑_i=1^M𝐖[i,j]𝐧^2[i,j]/(σ_j^2P) approximately follows a Gaussian distribution:

∑_j=1^r∑_i=1^M𝐖[i,j] 𝐧^2[i,j]/(σ_j^2P)∼𝒩(σ_n^2∑_j=1^r∑_i=1^M𝐖[i,j]/(σ_j^2P), 2σ_n^4∑_j=1^r∑_i=1^M(𝐖[i,j]/(σ_j^2P))^2).

§.§ MU-SISO scenarios

Consider a downlink scenario with two users, where the base station allocates power P_1 to user 1 and power P_2 to user 2, with P_1 > P_2:

𝐬_super=√(P_1)𝐬_1+√(P_2)𝐬_2.

The semantic signals received by user 1 and user 2, respectively, are

𝐲_1 =𝐡_1𝐬_super+𝐧_1=𝐡_1√(P_1)𝐬_1+𝐡_1√(P_2)𝐬_2+𝐧_1,
𝐲_2 =𝐡_2𝐬_super+𝐧_2=𝐡_2√(P_1)𝐬_1+𝐡_2√(P_2)𝐬_2+𝐧_2,

where 𝐧_i∼𝒞𝒩(0, σ_n, i^2). First, the semantic signal of user 1 is decoded, with

γ_1=P_1|𝐡_1|^2/(σ_n,1^2+P_2|𝐡_1|^2), ŝ_1=𝐬_1+√(P_2/P_1)𝐬_2+𝐧_1/(𝐡_1√(P_1)).

The SID expectation determined by the channel γ and the CBR can be expressed as

SID_MU-SISO, 1 (γ, CBR)=∑_i=1^K𝐰_1[i][P_2/P_1+σ^2/(|h_1|^2P_1)]+∑_i=K+1^N𝐰_1[i]|𝐬_1[i]|^2.

Next, according to Fig. <ref>, the decoder of user 1 is utilized to decode the image of user 1:

𝐱̂_1, sic =g_s(f_d(Sremap(𝐬_1+√(P_2/P_1)𝐬_2+𝐧_1/(𝐡_2√(P_1));𝐖_1);θ_f_d^1);θ_g_s^1),
ŝ_1, sic=Smap(f_e(g_a(𝐱̂_1,sic;θ_g_a^1);θ_f_e^1);𝐖_1).

As the receiver does not perform hard/soft decisions, it has to rely on its own understanding capability to derive a message as close as possible to the original one, and this reconstruction of 𝐬_1 cannot be perfect. Furthermore, ∑_i=1^K𝐰_2[i]|𝐬_1[i]-𝐬_1,sic[i]|^2 is related to the performance of first decoding user 1's image, i.e., to SID_MU-SISO, 1. Thus, SID_sic=∑_i=1^K𝐰_2[i]|𝐬_1[i]-𝐬_1,sic[i]|^2 can be expressed through SID_MU-SISO, 1:

SID_sic=∑_i=1^K𝐰_2[i]|𝐬_1[i]-𝐬_1,sic[i]|^2 = te(SID_MU-SISO, 1),

where te denotes the model established between SID_sic and SID_MU-SISO, 1.
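Before continuing with user 2's decoding, the SU-SISO expressions above can be sketched in a few lines of Python; the Gaussian-CLT approximation of the SOP and the linear SID-to-performance map (with the a_1, a_2 values fitted in Section 4) are exactly the quantities used in the validation experiments later. The interfaces (w sorted and normalized, s the semantic symbols) are our assumptions.

import numpy as np
from scipy.stats import norm

def sid_su_siso(w, s, K, snr_db, P=1.0):
    """SID estimate: noise term over the K kept symbols plus the
    CBR truncation term over the N-K zeroed symbols."""
    sigma_n2 = P / 10 ** (snr_db / 10)
    return np.sum(w[:K]) * sigma_n2 + np.sum(w[K:] * np.abs(s[K:]) ** 2)

def sop_su_siso(w, s, K, snr_db, sid_th, P=1.0):
    """CLT approximation: sum_i w_i n_i^2 ~ N(sigma^2 sum w_i,
    2 sigma^4 sum w_i^2); the SOP is the upper tail beyond the threshold."""
    sigma_n2 = P / 10 ** (snr_db / 10)
    mu = sigma_n2 * np.sum(w[:K])
    var = 2 * sigma_n2 ** 2 * np.sum(w[:K] ** 2)
    thr = sid_th - np.sum(w[K:] * np.abs(s[K:]) ** 2)
    return norm.sf(thr, loc=mu, scale=np.sqrt(var))

def predicted_ms_ssim(sid, a1=-0.044, a2=0.956):
    # Linear SID-to-performance map fitted in Section 4 (paper's values).
    return a1 * sid + a2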
User 2 then utilizes the estimate ŝ_1,sic to eliminate the interference from User 1:

ŝ_2=(𝐲_2-𝐡_2√(P_1)𝐬̂_1,sic)/(𝐡_2√(P_2)),

SID_MU-SISO, 2(γ, CBR)=te(SID_MU-SISO, 1)+∑_i=1^K𝐰_2[i]σ^2/(|h_2|^2P_2)+∑_i=K+1^N𝐰_2[i]|𝐬_2[i]|^2.

The SOPs of User 1 and User 2, P_MU-SISO, 1 and P_MU-SISO, 2, can be expressed as

P_MU-SISO, 1=P{∑_i=1^K𝐰_1[i]𝐧^2[i]/(|h_1|^2P_1)>SID_th-∑_i=1^K𝐰_1[i]P_2/P_1-∑_i=K+1^N𝐰_1[i]|𝐬_1[i]|^2},

∑_i=1^K𝐰_1[i] 𝐧^2[i]/(|h_1|^2P_1)∼𝒩(σ_n^2∑_i=1^K𝐰_1[i]/(|h_1|^2P_1), 2σ_n^4∑_i=1^K(𝐰_1[i]/(|h_1|^2P_1))^2),

P_MU-SISO, 2=P{∑_i=1^K𝐰_2[i]𝐧^2[i]/(|h_2|^2P_2)>SID_th-te(SID_MU-SISO, 1)-∑_i=K+1^N𝐰_2[i]|𝐬_2[i]|^2},

∑_i=1^K𝐰_2[i] 𝐧^2[i]/(|h_2|^2P_2)∼𝒩(σ_n^2∑_i=1^K𝐰_2[i]/(|h_2|^2P_2), 2σ_n^4∑_i=1^K(𝐰_2[i]/(|h_2|^2P_2))^2).

§.§ MU-MIMO scenarios

Consider a downlink scenario with two users, where the base station allocates power P_1 to user 1 and power P_2 to user 2, with P_1 > P_2. Similarly to Eq. (<ref>),

Ỹ_1 = √(P_1)Σ_1𝐒^map_1+√(P_2)Σ_1𝐕_1^H𝐕_2𝐒^map_2 + ñ.

The SID expectation determined by the channel and the CBR for User 1 can be expressed as

SID_MU-MIMO,1=∑_j=1^r∑_i=1^M𝐖_1[i,j][σ_n^2/(σ_j^2P_1)+P_2/P_1]+∑_i=K+1^N𝐰_1[i]|𝐬^map_1[i]|^2.

We next discuss the SID under imperfect semantic SIC decoding for user 2. In the same way as in Eq. (<ref>),

Ỹ_2 =√(P_2)Σ_2𝐒^map_2+√(P_1)Σ_2𝐕_2^H𝐕_1|𝐒_1^map-𝐒̂_1^map| + ñ.

Consistently with the discussion in the previous subsection, Δ̃_1 = Σ_2𝐕_2^H𝐕_1|𝐒_1^map-𝐒̂_1^map| is related to SID_MU-MIMO,1,

∑_j=1^r∑_i=1^M𝐖_2[i,j]Δ̃_1^2=te(SID_MU-MIMO,1),

and the SID of User 2 is given by

SID_MU-MIMO,2=∑_j=1^r∑_i=1^M𝐖_2[i,j]σ_n^2/(σ_j^2P_2)+P_1/P_2 te(SID_MU-MIMO,1)+∑_i=K+1^N𝐰_2[i]|𝐬^map_2[i]|^2.

The SOPs of User 1 and User 2, P_MU-MIMO, 1 and P_MU-MIMO, 2, can be expressed as

P_MU-MIMO, 1=P{∑_j=1^r∑_i=1^M𝐖_1[i,j]𝐧^2[i,j]/(σ_j^2P_1)>SID_th-∑_j=1^r∑_i=1^M𝐖_1[i,j]P_2/P_1-∑_i=K+1^N𝐰_1[i]|𝐬^map_1[i]|^2},

∑_j=1^r∑_i=1^M 𝐖_1[i,j]𝐧^2[i,j]/(σ_j^2P_1)∼𝒩(σ_n^2∑_j=1^r∑_i=1^M𝐖_1[i,j]/(σ_j^2P_1), 2σ_n^4∑_j=1^r∑_i=1^M(𝐖_1[i,j]/(σ_j^2P_1))^2),

P_MU-MIMO, 2=P{∑_j=1^r∑_i=1^M𝐖_2[i,j]𝐧^2[i,j]/(σ_j^2P_2)>SID_th-te(SID_MU-MIMO, 1)-∑_i=K+1^N𝐰_2[i]|𝐬^map_2[i]|^2},

∑_j=1^r∑_i=1^M 𝐖_2[i,j]𝐧^2[i,j]/(σ_j^2P_2)∼𝒩(σ_n^2∑_j=1^r∑_i=1^M𝐖_2[i,j]/(σ_j^2P_2), 2σ_n^4∑_j=1^r∑_i=1^M(𝐖_2[i,j]/(σ_j^2P_2))^2).
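The MIMO-scenario SOP expressions above only require the subchannel gains, the importance matrix, and the scalar threshold, so they can be evaluated numerically in a few lines; the sketch below does this for a single-channel draw, with illustrative parameter values, and the te fit (b_1 = 0.64, b_2 = 0.21, from Section 4) shown as a one-liner that would enter user 2's threshold.

import numpy as np
from scipy.stats import norm

def sop_mimo(W, sig, P, sigma_n2, sid_th, trunc=0.0, interf=0.0):
    """Gaussian-approximated SOP over SVD subchannels.
    W: r x M importance weights, row j on subchannel gain sig[j];
    trunc: CBR truncation term sum_{i>K} w_i |s_i|^2;
    interf: residual interference term (e.g. sum_ij W_ij * P2/P1 for user 1,
    or te(SID_1) for user 2)."""
    c = W / (sig[:, None] ** 2 * P)          # W[i,j] / (sigma_j^2 P)
    mu = sigma_n2 * c.sum()
    var = 2 * sigma_n2 ** 2 * np.sum(c ** 2)
    return norm.sf(sid_th - interf - trunc, loc=mu, scale=np.sqrt(var))

te = lambda sid1: 0.64 * sid1 + 0.21         # linear te fit from Section 4

# Example: 4x4 SU-MIMO with uniform importance and one Rayleigh channel draw.
rng = np.random.default_rng(3)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
sig = np.linalg.svd(H, compute_uv=False)
W = np.full((4, 16), 1 / 64)
print(sop_mimo(W, sig, P=1.0, sigma_n2=0.1, sid_th=0.05))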
<ref>, including a Latent Transformer, DeepJSCC Encoder, DeepJSCC Decoder, Latent Inversion, and Importance-aware module.§.§.§ BaselineAs a benchmark for comparison, we consider a scheme using JPEG, JPEG2000, and BPG for image compression, together with capacity-achieving channel coding <cit.> (using waterfilling, NOMA capacity) to transmit the compressed images over the channel.The source-capacity schemes calculate the MIMO channel capacity, which is the maximum amount of lossless transmission data, and then compress the image to the maximum amount of lossless transmission data to ensure that traditional receivers can successfully decode.In addition, the following equation can be obtained by left multiplying the inverse matrix of the diagonal matrix Σ to the Eq. (<ref>),Ỹ_eq=Σ^-1Ỹ= 𝐒^map+ñ.This indicates that semantic importance is not distinguished on the MIMO layer mapping, named the SIA-SC channel-equalization scheme.MS-SSIM <cit.> is employedas the metric for evaluation, which is defined as follows,MS-SSIM(x, y) = (2μ_xμ_y+C_1)(2σ_xy+C_2)/(μ_x^2+μ_y^2+C_1)(σ_x^2+σ_y^2+C_2),where μ_x, σ_x and σ_xy are the mean, standard deviation, and cross-correlation between the two patches x, y, respectively. C_1 and C_2 terms can avoid instability when the means and variances are close to zero. An average MS-SSIM is then determined over all the test images. §.§ Single-user Transmission PerformanceThe comparison between SIA-MIMO and the JPEG-Capacity, JPEG2000-Capacity, BPG-Capacity benchmarks and SIA-SC channel-equalization scheme is shown in Fig. <ref>. Fig. <ref>(a) shows the MS-SSIM results of the Open Images test set with CBR constraint R = 1/24 in 2×2 MIMO scenario. As can be seen, the SIA-SC MIMO-SVD scheme outperforms the separation-base benchmark in all SNRs. This is mainly benefited from the DeepJSCC architecture of the semantic communication system. The semantic communication system considers the main impact of channel noise on source semantics during the transmission process. The training process tries to maximally avoid the loss of source semantics caused by the channel. The advantages of the SIA-SC MIMO-SVD scheme become more significant as the number of antennas increases. In a 2×2 MIMO scenario, our scheme outperforms the SIA-SC scheme by over 2 dB. In a 4×4 MIMO scenario, our scheme exceeds the SIA-SC scheme by over 3 dB. In a large-scale 64x64 MIMO scenario, our scheme surpasses the SIA-SC MIMO-channel equalization scheme by over 10 dB.Additionally, the proposed method outperforms traditional separate transmission schemes in both small-scale and large-scale antenna scenarios. However, simply applying an end-to-end semantic communication system to a multi-antenna system (SIA-SC MIMO-channel equalization) would not provide additional gains, and may even underperform separate transmission schemes as shown in Fig. <ref>(b)(c)(d)(e)(f). A visual comparison of the reconstructed images for the SIA-SC MIMO-SVD scheme and the traditional scheme is presented in Fig. <ref>. From top to bottom are scenes with different antenna numbers and SNR, and from left to right are the proposed SIA-SC MIMO-SVD scheme, BPG-Capacity scheme, and JPEG2000-Capacity scheme. It can be seen that BPG-Capacity and JPEG2000 produce visible blocking artifacts, especially in channels with antenna numbers and low SNR, which are not present in the images transmitted with SIA-SC. 
The visualized examples of the proposed scheme show that the impact of the channel on the image is a global faintening effect, still retaining details, while the comparative schemes show that the impact of the channel on the image is more of a blurring of details. §.§ Muti-user Transmission PerformanceThe comparison between SIA-MIMO and the JPGE2000-Capacity and BPG-Capacity benchmarks is shown in Fig. <ref>. Specifically, JPGE2000-Capacity and BPG-Capacity schemes calculate the channel capacity of two users using the NOMA scheme, which is the maximum number of bits that can be transmitted. Then the source information is compressed to the maximum bit number using BPG and JPEG source coding schemes, respectively. Since NOMA-Capacity <cit.> assumes the calculation of channel capacity under perfect successive interference cancellation (SIC), the interference between users becomes less significant compared to channel noise interference at lower SNR. Therefore, the performance of the two users under NOMA is comparable. As can be seen from Fig. <ref>, the performance of users allocated with higher power in the multi-user multi-antenna system is mostly similar to the BPG-Capacity and JPEG2000-Capacity schemes at high SNR. At low SNR, the proposed scheme has a distinct advantage. For example, in a 2×2 MIMO scenario when the SNR is lower than -2dB, and in a 4×4 MIMO scenario when the SNR is lower than -10dB.Additionally, it is worth noting that the performance of User 1 and User 2 under the SIA-SC scheme is similar in high SNR. However, as the SNR decreases, the performance of User 2 gradually becomes inferior to that of User 1. This is because the performance of the semantic SIC scheme depends on the performance of User 1's recovery. When User 1's performance is poor, the error _1 in Eq. (<ref>) will be larger, reducing the recovery performance for User 2. §.§ Validation of Semantic System Analysis§.§.§ SU-SISO scenarioAs defined in Eq. (<ref>) of the semantic performance analysis in Section 3, it first calculates the average SID and MS-SSIM values for all test sets and uses a linear function to fit them, as shown in Fig. <ref>,MS-SSIM=a_1× SID+a_2,where a_1=-0.044, a_2=0.956 in our trained system under the experimental setting.By utilizing the SID expressions derived for the three scenarios (SU-MIMO, MU-SISO, and MU-MIMO) in Section 3, the transmitter can directly use the expression fitted above between the SU-SISO semantic performance and SID to estimate the corresponding semantic performance of each user.As shown in Fig. <ref>, the dashed line represents the SOP for the SU-SISO scenario obtained through Eq. (<ref>) and Eq. (<ref>). At the same time, the discrete points are derived from simulations, which indicates that Eq. (<ref>) and Eq. (<ref>) provide an approximate closed-form expression for the SOP in the SU-SISO scenario.§.§.§ SU-MIMO scenarioFig. <ref> compares the predicted and simulated results for a single-user MIMO scenario, where the predicted line is obtained by calculating the SID for a single-user through Eq.(<ref>) and then calculating the corresponding semantic performance (MS-SSIM) by substituting it into Eq. (<ref>). As can be seen from Fig. <ref>, the trend of the prediction and simulation remains generally consistent. The predicted values are slightly higher than the simulated values, and the error gradually increases with more antennas. Overall, the transmitter can predict the performance of the image at the receiver given the known SNR. As shown in Fig. 
<ref>, the dashed line represents the SOP for the SU-MIMO scenario obtained through Eq. (<ref>) and Eq. (<ref>). In this simulation, an interruption occurs when the semantic performance of MS-SSIM is less than 0.9. As the number of antennas increases, the probability of outage becomes smaller.§.§.§ MU-SISO scenarioWhen considering multi-user scenarios, we first validate the advantage of the semantic SIC method (decode→ encode→ cancel). As shown in Fig. <ref>(a), the horizontal axis represents User 1's performance, and the vertical axis represents User 2's SID. It can be seen that using the semantic SIC scheme can significantly reduce User 2's SID under poor channel conditions. Specifically, as shown in Fig. <ref>(b), at low SNR, the quality of the received signal is poor. However, through the denoising ability of the semantic model itself, the semantic SIC method can eliminate certain noise. Therefore, using the semantic SIC scheme improves User 2's performance. As described by Eq. (<ref>) in Section 3, the SIC after semantic SIC is related to the original SIC of User 1. As shown in Fig. <ref>(c), the horizontal axis represents the SIC of User 1, i.e. SID_1 in Eq. (<ref>), and the vertical axis represents the SID after semantic SIC, i.e. SID_sic=∑_i=1^K𝐰[i]|𝐬_1[i]-𝐬_1,sic[i]|^2 in Eq. (<ref>). As can be seen from Fig. <ref>(c), when SID_1 is 0, SID SIC is slightly greater than 0. This is because the semantic model is a lossy encoding process, so there is still a loss even when the channel is noiseless. However, as the value of SID_1 increases, SID_sic also slowly increases. Therefore, a linear function is used to fit the function te,SID_sic=te(SID_1)=b_1× SID_1+b_2,where b_1=0.64, b_2=0.21 in our trained system under the experimental setting. Since there is no closed-form solution in theory to mathematically describe the performance of AI models, fitting is a necessary step. However, the above formula mainly shows the proportional relationship between SID and MS-SSIM, where the fitted parameters b_1, b_2 apply to the model under this performance. As long as the performance remains the same, even if the model or dataset changes, these parameters are still applicable. Fig. <ref> compares the predicted and simulated results for the two-user SISO scenario, where the predicted line is obtained by calculating the SID through Eq.(<ref>) and Eq. (<ref>) and then calculating the corresponding semantic performance (MS-SSIM) by substituting it into Eq. (<ref>). As shown in Fig. <ref>, the predicted semantic performance values are slightly higher than the actual simulated values, but the overall trend is consistent and the gap is not significant. As shown in Fig. <ref>, the theoretical values and simulation values of the SOP of user 1 and user 2 under the expected performance (MS-SSIM) of 0.9 and 0.7 respectively are plotted. The simulation values are basically on the theoretical curve, verifying the feasibility of the closed-form expression of the SOP of Eq. (<ref>), Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>) in the MU-SISO scenario.§.§.§ MU-MIMO scenarioFig. <ref> shows the predicted and simulated curves for the MU-MIMO scenario. As with other validation methods, it first calculates the SID for the two users respectively using Eq. (<ref>) and Eq. (<ref>), and then calculates the corresponding performance using Eq. (<ref>). As with other scenarios, the predicted values are slightly greater than the simulated values, with the overall trend remaining consistent and the errors being relatively small.Fig. 
§.§.§ MU-MIMO scenario

Fig. <ref> shows the predicted and simulated curves for the MU-MIMO scenario. As with the other validation methods, it first calculates the SID for the two users using Eq. (<ref>) and Eq. (<ref>), respectively, and then calculates the corresponding performance using Eq. (<ref>). As in the other scenarios, the predicted values are slightly greater than the simulated values, with the overall trend remaining consistent and the errors relatively small. Fig. <ref>(a)(b)(c) show the SOP for the 4×4, 8×8, and 16×16 MU-MIMO scenarios, with the MS-SSIM target set to 0.9. The simulated values closely align with the theoretical curves, with a slightly larger error observed in the 16×16 scenario, although it remains within an acceptable range.

§ CONCLUSION

This paper presents a semantic importance-aware communication system (SIA-SC) over MIMO Rayleigh fading channels. Combining the inequality of semantic symbols with the equivalent subchannels obtained from the SVD of the MIMO channel, the proposed layer mapping method maximizes end-to-end semantic performance. For multi-user scenarios, a method of semantic interference cancellation is proposed. Moreover, a novel metric called SID has been introduced to provide a unified representation of semantic performance, which is affected by CBR and SNR. With the help of the proposed metric, we derived performance expressions and SOP of SIA-SC for the SU-SISO, SU-MIMO, MU-SISO and MU-MIMO scenarios. Numerical experiments show that SIA-SC can significantly improve semantic performance across a large variety of scenarios.

§ ACKNOWLEDGMENT

This work is supported in part by the National Key Research and Development Program of China under Grant 2022YFB2902102.
http://arxiv.org/abs/2312.16057v1
{ "authors": [ "Haotai Liang", "Zhicheng Bao", "Wannian An", "Chen Dong", "Xiaodong Xu" ], "categories": [ "cs.IT", "eess.SP", "math.IT" ], "primary_category": "cs.IT", "published": "20231226140107", "title": "Semantic Importance-Aware Based for Multi-User Communication Over MIMO Fading Channels" }
Minimization of the k-th eigenvalue of the Robin-Laplacian with perimeter constraint

Simone Cito (Dipartimento di Matematica e Fisica “E. De Giorgi”, Università del Salento, Via per Arnesano, 73100 Lecce, Italy; [email protected])

Alessandro Giacomini (DICATAM, Sezione di Matematica, Università degli Studi di Brescia, Via Branze 43, 25133 Brescia, Italy; [email protected])

In this paper we address the problem of the minimization of the k-th Robin eigenvalue λ_k,β with parameter β>0 among bounded open Lipschitz sets with prescribed perimeter. The perimeter constraint allows us to naturally generalize the problem to a setting involving more general admissible geometries made up of sets of finite perimeter with inner cracks, along with a suitable generalization of the Robin-Laplacian operator with properties which look very similar to those of the classical setting. Within this extended framework we establish existence of minimizers, and prove that the associated eigenvalue coincides with the infimum of those achieved by regular domains.

Keywords: Robin-Laplacian eigenvalues, shape optimization, sets of finite perimeter, functions of bounded variation.

2020 Mathematics Subject Classification: 49J35, 49J45, 26A45, 35R35, 35J20, 28A75.

§ INTRODUCTION

Given Ω⊂ℝ^N open, bounded and with a Lipschitz boundary, λ∈ℝ is said to be an eigenvalue of the Laplace operator under Robin (or Fourier) boundary conditions with constant β>0 if there exists a nontrivial u∈ W^1,2(Ω) such that

-Δu = λu in Ω, ∂u/∂ν + βu = 0 on ∂Ω,

which in the weak sense means

∀φ∈ W^1,2(Ω) : ∫_Ω ∇u·∇φ dx + β∫_∂Ω uφ dℋ^N-1 = λ∫_Ω uφ dx.

Here ν denotes the outer normal to ∂Ω, while ℋ^N-1 stands for the Hausdorff (N-1)-dimensional measure on ℝ^N, which coincides with the usual area measure on regular hypersurfaces. It is known that Ω admits a positively diverging sequence of eigenvalues

0 < λ_1,β(Ω) ≤ λ_2,β(Ω) ≤ … ≤ λ_k,β(Ω) ≤ … → +∞,

which are given (counting multiplicity) by the min-max formula

λ_k,β(Ω) = min_V∈𝒱_k max_u∈V, u≠0 [∫_Ω |∇u|^2 dx + β∫_∂Ω u^2 dℋ^N-1] / ∫_Ω u^2 dx,

where 𝒱_k denotes the family of vectorial subspaces of W^1,2(Ω) with dimension k. The quantity appearing in (<ref>) is the so-called Rayleigh quotient R_β, and it involves a boundary term.

Shape optimization problems involving Robin eigenvalues have been widely studied in recent years (see for example <cit.> for an overview): the main difference with respect to the much more studied problems involving Dirichlet eigenvalues is due to the fact that λ_k,β fails to be monotone under inclusion and does not enjoy simple rescaling properties, essentially because of the presence of the boundary integral in the Rayleigh quotient. The minimization of the first eigenvalue under a measure constraint leads to the so-called Faber-Krahn inequality for the Robin-Laplacian: minimizers are balls, and this has been established only quite recently by Bossel <cit.> in 1986 for two dimensional smooth domains, and by Daners <cit.> in 2006 for general N-dimensional Lipschitz domains. In the case k=2, Kennedy <cit.> proved that optimal domains are the union of two congruent balls.
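To make the objects in (<ref>) concrete, it may help to look at the simplest one-dimensional case; the following computation is our illustration and is not part of the paper. On Ω=(0,1) the first Robin eigenfunction can be taken of the form u(x)=cos(ω(x-1/2)), and the boundary conditions u'(0)=βu(0), u'(1)=-βu(1) reduce λ_1,β to the first root of a scalar equation:

λ_1,β((0,1)) = ω_1^2, where ω_1∈(0,π) is the unique solution of ω tan(ω/2) = β.

A minimal numerical sketch (ours) in Python:

import numpy as np
from scipy.optimize import brentq

def robin_lambda1(beta: float) -> float:
    # -u'' = lambda*u on (0,1) with u'(0) = beta*u(0), u'(1) = -beta*u(1);
    # the symmetric eigenfunction cos(w*(x - 1/2)) gives w*tan(w/2) = beta,
    # whose first root w1 in (0, pi) yields lambda_1 = w1**2.
    f = lambda w: w * np.tan(w / 2.0) - beta
    w1 = brentq(f, 1e-9, np.pi - 1e-9)
    return w1 ** 2

print(robin_lambda1(1.0))  # ~1.71; tends to pi^2 (the Dirichlet value) as beta grows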
For k≥3 the problem is open, and some advances have been achieved by Bucur and the second author in <cit.> employing techniques from free discontinuity problems which are described below.

In this paper we address the problem of minimizing λ_k,β under a perimeter constraint, namely we study

min{λ_k,β(Ω) : Ω⊂ℝ^N is a bounded Lipschitz domain with ℋ^N-1(∂Ω)=p},

where p>0. The existence and regularity of minimizers for problem (<ref>) in the class of convex domains have been studied by the first author in <cit.>. Allowing more general geometries, existence of minimizers is open as long as k≥2 (for k=1 minimizers are still balls). Our aim is to generalize problem (<ref>) to a larger class of geometries in order to gain existence of optimal domains. The presence of the perimeter allows us to deal with the problem in a more geometrical way with respect to <cit.>, still employing free discontinuity arguments but in a much simpler way (also in the Dirichlet case, the perimeter has a “regularizing” effect on the problem: under a measure constraint, existence is available within the class of quasi-open sets, and optimal domains are known to be bounded and of finite perimeter (see <cit.>), while under a perimeter constraint De Philippis and Velichkov <cit.> proved that optimal domains are open and with a fairly smooth boundary).

The issue of minimizing λ_k,β for k≥3 under a measure constraint has been addressed by Bucur and the second author in <cit.>, by generalizing the free discontinuity approach developed in <cit.> and <cit.> (see also <cit.> and <cit.>) to deal with the optimization of the first eigenvalue and associated variants (for example the torsion of the domain, leading to the so-called de Saint-Venant inequality). Roughly speaking, in the case of the first eigenvalue, the free discontinuity approach can be summarized as follows. One replaces the dependence on a domain Ω with the dependence on a function u belonging to a suitable class of functions of bounded variation, whose support will be identified with Ω and whose jump set will play the role of ∂Ω. The Rayleigh quotient gives rise to a free discontinuity functional for u of the form

R_β(u) := [∫_ℝ^N |∇u|^2 dx + β∫_J_u [γ_l^2(u)+γ_r^2(u)] dℋ^N-1] / ∫_ℝ^N u^2 dx,

where γ_l(u) and γ_r(u) stand for the traces of u on both sides of its jump set J_u, with respect to a given orientation. The minimization of λ_1,β(Ω) is replaced by the minimization of R_β(u) under a measure constraint on the support supp(u). Existence of minimizers follows by using compactness and lower semicontinuity properties of free discontinuity functionals: the Faber-Krahn inequality is established (and its validity extends to more general geometries) by showing that minimizers u have support equal to a ball. For higher order eigenvalues, the main idea of <cit.> is to reinterpret the min-max characterization (<ref>) in the free discontinuity setting, by replacing the k-dimensional subspaces V_k with vector valued functions u whose components are linearly independent and belong again to a suitable class of vector valued functions of bounded variation. The problem is generalized to the minimization of max R_β over this class under a measure constraint, with R_β given in (<ref>), the max being computed on the space generated by the components of the functions. The intuitive idea behind the approach is that given a minimizer u, its support should provide the optimal domain Ω, the space generated by the components being the optimal subspace in the min-max characterization (<ref>).
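As a quick sanity check on (<ref>) (our own illustration, not part of the original arguments), note that R_β takes finite values on indicator functions, which is exactly what makes the relaxation meaningful: for u=1_B_R one has ∇u=0, J_u=∂B_R, and the two traces are 1 and 0, so

R_β(1_B_R) = β ℋ^N-1(∂B_R) / |B_R| = βN/R,

recovering the classical upper bound λ_1,β(B_R) ≤ βN/R obtained by testing the Rayleigh quotient with the constant function.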
The generalized problem on this class of functions can be seen as a relaxation of the original one in the following sense: the minimum value turns out to be the infimum of λ_k,β in the class of Lipschitz domains. The same approach has been used by Nahon in <cit.> to deal with the minimization of more general functionals whose prototype is Ω ↦ λ_1,β(Ω)+…+λ_k,β(Ω) under a measure constraint: in this case, the supports of minimizers turn out to be open sets with boundary of finite ℋ^N-1-measure.

In the present paper, as mentioned above, we deal with the optimization problem (<ref>) still employing free discontinuity arguments as in <cit.>, but the perimeter constraint permits us to reformulate the problem in a setting which retains a more geometrical flavor. This is because the constraint suggests that the class of sets of finite perimeter, which enjoy strong compactness and structural properties, should be naturally involved. In Section <ref> we generalize the Robin-Laplacian eigenvalue problem to geometries (Ω,Γ), where Ω⊂ℝ^N is a set of finite perimeter and finite volume, while Γ⊂Ω^1 is a (ℋ^N-1-countably) rectifiable set with finite ℋ^N-1-measure: here Ω^1 stands for the family of points of density 1 for Ω. The idea behind the choice of these geometries is that open bounded domains are naturally replaced by sets of finite perimeter. Moreover, being interested in variational problems, letting thus the domains vary, it is natural to take into account possible “inner” boundaries, which may arise as a degeneration of inner holes, or by a folding of outer boundaries: the rectifiable set Γ, which can be seen as a crack inside Ω, is introduced precisely for this purpose.

In order to extend the Robin-Laplacian boundary value problem to these irregular geometries, we generalize some ideas introduced in <cit.> in the context of two dimensional open sets with rectifiable topological boundary. In particular we replace the usual Sobolev space W^1,2 with

Θ(Ω,Γ) := {u∈SBV(ℝ^N) : u=0 a.e. in Ω^c, ∇u∈L^2(ℝ^N), J_u ⊆ ∂^*Ω∪Γ up to ℋ^N-1-negligible sets, and ∫_∂^*Ω∪Γ [γ_l^2(u)+γ_r^2(u)] dℋ^N-1 < +∞},

where ∂^*Ω denotes the reduced boundary of Ω, SBV(ℝ^N) is the space of special functions of bounded variation in ℝ^N, and γ_l(u), γ_r(u) stand for the traces of u from both sides of the rectifiable set ∂^*Ω∪Γ with respect to (any) given orientation (see Sections <ref> and <ref>). The weak formulation of (<ref>) for the eigenvalue problem is generalized to (see Sections <ref> and <ref>)

∀φ∈Θ(Ω,Γ) : ∫_Ω ∇u·∇φ dx + β∫_∂^*Ω∪Γ [γ_l(u)γ_l(φ)+γ_r(u)γ_r(φ)] dℋ^N-1 = λ̃ ∫_Ω uφ dx.

We prove that Θ(Ω,Γ) can be endowed with a complete scalar product, which permits to deal with (<ref>) using classical arguments coming from the Hilbert space approach to boundary value problems. In particular, the existence of a diverging sequence of eigenvalues λ̃_k,β(Ω,Γ) is guaranteed, which admit the min-max characterization (counting multiplicity)

λ̃_k,β(Ω,Γ) := min_{V⊂Θ(Ω,Γ), dim V=k} max_{u∈V∖{0}} [∫_Ω |∇u|^2 dx + β∫_∂^*Ω∪Γ [γ_l(u)^2+γ_r(u)^2] dℋ^N-1] / ∫_Ω u^2 dx.

Concerning the perimeter constraint, we make use of the generalized perimeter

Per(Ω,Γ) := Per(Ω) + 2ℋ^N-1(Γ),

where Per(Ω) stands for the usual perimeter of Ω. The definition of Per(Ω,Γ) is again suggested by the interpretation of the inner crack Γ as degenerated holes or inwards folds of the outer boundary, so that its contribution to the generalized perimeter is naturally given by twice its area ℋ^N-1(Γ).
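To fix ideas with a concrete instance of the generalized perimeter (an example of ours, not from the paper): in dimension N=2, take Ω the unit disk and Γ one of its diameters, so that Γ⊂Ω^1 is a rectifiable segment of length 2; then

Per(Ω,Γ) = ℋ^1(∂B_1) + 2ℋ^1(Γ) = 2π + 4,

which is precisely the limit of the perimeters of the smooth domains obtained by removing from the disk a thin lens-shaped neighborhood of the diameter, in accordance with the interpretation of Γ as a degenerated hole.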
Related notions of perimeter have been considered by Cerf in <cit.> in the study of the lower semicontinuous envelope of the Hausdorff measure for the approximation by smooth sets, and by Henrot and Zucco in <cit.> in relationship with the Minkowski content. We reformulate problem (<ref>) in the form

min{λ̃_k,β(Ω,Γ) : (Ω,Γ)∈𝒜(ℝ^N), Per(Ω,Γ)=p}.

Notice that for a regular bounded domain Ω⊂ℝ^N, we have that (Ω,∅)∈𝒜(ℝ^N) with

λ̃_k,β(Ω,∅) = λ_k,β(Ω) and Per(Ω,∅) = Per(Ω) = ℋ^N-1(∂Ω).

We thus see that (<ref>) is a natural generalization of the original problem (<ref>).

The main result of the paper (Theorem <ref>) is that problem (<ref>) is well posed: in addition, the minimal eigenvalue is equal to the infimum of λ_k,β on regular domains, so that the generalized problem (<ref>) can be seen as a good relaxation of the original one (<ref>). Existence of optimal configurations is established by applying the direct method of the Calculus of Variations. A delicate point is given by the compactness of a minimizing sequence (Ω_n,Γ_n), in particular on the side of the inner cracks Γ_n (compactness for Ω_n, at least locally, is guaranteed by the properties of the perimeter). In this direction we employ a notion of variational convergence for rectifiable sets, called σ^2-convergence (see Section <ref>), introduced by Dal Maso, Francfort and Toader in <cit.> to study existence of crack evolutions in finite elasticity. It is a notion of convergence which enjoys good compactness and lower semicontinuity properties, and turns out to be very natural in our context: it plays essentially the same role as the Hausdorff convergence for compact connected cracks with finite ℋ^1 length in dimension two, providing a generalization of the Gołąb semicontinuity theorem of the length to higher dimensions. On the basis of Theorem <ref>, we can establish existence of minimizers for functionals involving only some eigenvalues (see Theorem <ref>), whose prototype is (Ω,Γ)↦λ̃_1,β(Ω,Γ)+…+λ̃_k,β(Ω,Γ), or more generally (Ω,Γ)↦[λ̃^p_1,β(Ω,Γ)+…+λ̃^p_k,β(Ω,Γ)]^1/p with p>1.

The paper is organized as follows. In Section <ref> we fix the notation and recall the main definitions and properties concerning sets of finite perimeter, special functions of bounded variation, and the variational σ^2-convergence for rectifiable sets. In Section <ref> we define the family of admissible configurations (Ω,Γ)∈𝒜(ℝ^N), and generalize the Robin-Laplacian boundary value problem to those geometries, defining in particular the generalized eigenvalues λ̃_k,β(Ω,Γ). The main results of the paper are stated in Section <ref>. Section <ref> collects some technical compactness, lower semicontinuity and approximation properties concerning the admissible configurations and their associated functional spaces. The proof of the main results is given in Section <ref>.

§ NOTATION AND PRELIMINARIES

§.§ Basic notation

If E⊆ℝ^N, we will denote by |E| its N-dimensional Lebesgue measure, and by ℋ^N-1(E) its (N-1)-dimensional Hausdorff measure: we refer to <cit.> for a precise definition, recalling that for sufficiently regular sets ℋ^N-1 coincides with the usual area measure. Moreover, we denote by E^c the complementary set of E, and by 1_E its characteristic function, i.e., 1_E(x)=1 if x∈E, 1_E(x)=0 otherwise. Finally, for t∈[0,1] we will write E^t for the set of points of density t for E (see <cit.>). If A⊆ℝ^N is open and 1≤p≤+∞, we denote by L^p(A) the usual space of p-summable functions on A with norm indicated by ‖·‖_p.
W^k,2(A) will stand for the Sobolev space of functions in L^2(A) whose derivatives up to order k in the sense of distributions belong to L^2(A). Finally ℳ_b(A;ℝ^N) will denote the space of ℝ^N-valued Radon measures on A, which can be identified with the dual of the ℝ^N-valued continuous functions on A vanishing at the boundary. We will denote by |·| its total variation.

We say that Γ⊆ℝ^N is ℋ^N-1-countably rectifiable if

Γ = N ∪ ⋃_i∈ℕ Γ_i,

where ℋ^N-1(N)=0 and the Γ_i ⊆ M_i are Borel sets, M_i being a C^1-hypersurface of ℝ^N. It is not restrictive to assume that the sets Γ_i are mutually disjoint. In the rest of the paper, we will write simply rectifiable in place of ℋ^N-1-countably rectifiable. If Γ_1, Γ_2 are rectifiable, we will write Γ_1 ⊆̃ Γ_2 if ℋ^N-1(Γ_1∖Γ_2)=0, and Γ_1 ≃ Γ_2 if ℋ^N-1((Γ_1∖Γ_2)∪(Γ_2∖Γ_1))=0.

§.§ Functions of bounded variation

Let A⊆ℝ^N be an open set. We say that u∈BV(A) if u∈L^1(A) and its derivative in the sense of distributions is a finite Radon measure on A, i.e., Du∈ℳ_b(A;ℝ^N). BV(A) is called the space of functions of bounded variation on A. BV(A) is a Banach space under the norm

‖u‖_BV(A) := ‖u‖_L^1(A) + ‖Du‖_ℳ_b(A;ℝ^N).

We call |Du|(A) := ‖Du‖_ℳ_b(A;ℝ^N) the total variation of u. We refer the reader to <cit.> for an exhaustive treatment of the space BV. If u∈BV(A), then the measure Du can be decomposed canonically (and uniquely) as

Du = D^a u + D^j u + D^c u.

The measure D^a u is the absolutely continuous part (with respect to the Lebesgue measure) of the derivative: the associated density is denoted by ∇u∈L^1(A;ℝ^N). The measure D^j u is the jump part of the derivative, and it turns out that

D^j u = (u^+ - u^-) ν_u ℋ^N-1⌊J_u.

Here J_u is the jump set of u, ν_u is the normal to J_u, while u^± are the upper and lower approximate limits of u at x∈J_u. It turns out that J_u is a rectifiable set: if we choose the orientation given by a normal vector field ν_u, we have ℋ^N-1-a.e.

u^+ = γ_r(u) and u^- = γ_l(u),

where γ_r(u) and γ_l(u) are the right and left traces of u on the rectifiable set J_u, associated to the orientation given by ν_u. Finally, D^c u is called the Cantor part of the derivative, and it vanishes on sets which are σ-finite with respect to ℋ^N-1. Clearly D^j u + D^c u is the singular part D^s u of Du with respect to the Lebesgue measure ℒ^N.

The space SBV(A) of Special Functions of Bounded Variation on A is defined as

SBV(A) := {u∈BV(A) : D^c u = 0},

i.e., it is composed of those functions of bounded variation with vanishing Cantor part. The following result is contained in <cit.> and provides a very important tool to approximate our relaxed configurations via smooth sets.

Let Ω⊂ℝ^N be open and bounded with Lipschitz boundary, and let q>1. Let u∈SBV(Ω)∩L^∞(Ω) be such that ∇u∈L^q(Ω;ℝ^N) and ℋ^N-1(J_u)<+∞. There exists a sequence (u_n)_n such that the following items hold true for every n∈ℕ.

(i) J_u_n is polyhedral in Ω, i.e., ℋ^N-1((J̄_u_n∖J_u_n)∩Ω)=0, and J̄_u_n∩Ω is given by the intersection with Ω of the union of a finite number of (N-1)-dimensional simplexes.

(ii) u_n∈W^k,∞(Ω∖J̄_u_n) for every k≥1.

(iii) It holds

u_n→u strongly in L^1(Ω), ∇u_n→∇u strongly in L^q(Ω;ℝ^N),

and

lim sup_n→+∞ ∫_J_u_n∩A φ(x, u_n^+, u_n^-, ν_J_u_n) dℋ^N-1 ≤ ∫_J_u∩A φ(x, u^+, u^-, ν_J_u) dℋ^N-1

for every open set A⊂⊂Ω and every upper semicontinuous function φ: Ω×ℝ×ℝ×𝕊^N-1→[0,+∞[ such that φ(x,a,b,ν)=φ(x,b,a,-ν). If φ is locally bounded near the boundary, i.e.,

lim sup_(x_n,a_n,b_n,ν_n)→(x,a,b,ν) φ(x_n,a_n,b_n,ν_n) < +∞

whenever x_n∈Ω and x∈∂Ω, then we can also choose A=Ω.
§.§ Sets of finite perimeter

Given E⊆ℝ^N measurable and A⊆ℝ^N open, we say that E has finite perimeter in A (or simply has finite perimeter if A=ℝ^N) if

Per(E;A) := sup{∫_E div(φ) dx : φ∈C^∞_c(A;ℝ^N), ‖φ‖_∞≤1} < +∞.

If |E|<+∞, then E has finite perimeter if and only if 1_E∈BV(ℝ^N). It turns out that

D1_E = ν_E ℋ^N-1⌊∂^*E, Per(E;ℝ^N) = ℋ^N-1(∂^*E),

where the rectifiable set ∂^*E is called the reduced boundary of E, and ν_E is the associated approximate inner normal (see <cit.>). It turns out that ∂^*E ⊆ ∂E, but the topological boundary can in general be much larger than the reduced one. The next proposition is the collection of two approximation results proved in <cit.>, and will prove particularly useful for our problem.

Let μ be a Radon measure on ℝ^N such that μ≪ℋ^N-1, and let E⊂ℝ^N be a bounded set of finite perimeter. Let u_k := 1_E * ρ_ε_k, where ρ_ε_k is a regularizing kernel, and let A_k,t := {u_k>t}. Then for a.e. t∈(0,1), A_k,t is a smooth set and, for a.e. t∈(1/2,1), the sequence (A_k,t)_k provides an interior approximation of E, i.e.,

lim_k→+∞ |μ|(A_k,t Δ E^1) = 0 and lim_k→+∞ ℋ^N-1(∂A_k,t∖E^1) = 0.

§.§ A variational convergence for rectifiable sets

We recall the notion of σ^2-convergence for rectifiable sets introduced in <cit.> to deal with problems in fracture mechanics. It is a variational notion of convergence for rectifiable sets which enjoys compactness and lower semicontinuity properties under a uniform bound for the associated ℋ^N-1 measure, very similar to those enjoyed by connected closed sets in ℝ^2 with respect to Hausdorff convergence in view of the Gołąb semicontinuity theorem. The definition is based on the use of the space SBV. Recall the notation Γ_1 ⊆̃ Γ_2 and Γ_1 ≃ Γ_2, which denote inclusion and equality up to ℋ^N-1-negligible sets.

Let D⊂ℝ^N be open and bounded, and let Σ_n, Σ⊂D be rectifiable sets such that ℋ^N-1(Σ_n), ℋ^N-1(Σ) ≤ C for some C>0. We say that Σ_n→Σ in the sense of σ^2-convergence if the following two conditions are satisfied.

(a) If u_j∈SBV(D) with J_u_j ⊆̃ Σ_n_j for some sequence n_j→+∞, and u∈SBV(D) are such that ‖u_j‖_∞, ‖u‖_∞ ≤ C,

u_j→u strongly in L^1(D) and ∇u_j ⇀ ∇u weakly in L^2(D;ℝ^N),

then J_u ⊆̃ Σ.

(b) There exist a function u∈SBV(D) with ∇u∈L^2(D;ℝ^N) and a sequence u_n∈SBV(D) with ‖u‖_∞, ‖u_n‖_∞ ≤ C,

u_n→u strongly in L^1(D) and ∇u_n ⇀ ∇u weakly in L^2(D;ℝ^N),

such that

J_u ≃ Σ and J_u_n ⊆̃ Σ_n for every n∈ℕ.

Condition (a) guarantees that Σ contains the jump sets of functions which are suitable limits of functions jumping on Σ_n. Condition (b) ensures that Σ is the smallest set which enjoys this property. The notion of convergence introduced in <cit.> can indeed be generalized to an exponent p∈]1,+∞[: we will use only the case p=2. The following compactness and lower semicontinuity result holds true, and will be fundamental for our analysis.

Let D⊂ℝ^N be open and bounded. For every sequence Σ_n⊂D of rectifiable sets such that ℋ^N-1(Σ_n)≤C, there exist a rectifiable set Σ⊂D and a subsequence Σ_n_k such that

Σ_n_k→Σ in the sense of σ^2-convergence.

Moreover we have

ℋ^N-1(Σ) ≤ lim inf_n→+∞ ℋ^N-1(Σ_n).

§ ADMISSIBLE GEOMETRIES AND THE ASSOCIATED GENERALIZATION OF THE ROBIN-LAPLACIAN

In this section we introduce the class 𝒜(ℝ^N) of admissible geometries, and extend to this setting the Robin-Laplacian boundary value problem.

§.§ Admissible geometries

The precise definition of the class of admissible geometries we consider is the following.
Recall that Ω^1 stands for the set of points of density one for Ω.

We say that the couple (Ω,Γ) is an admissible geometry, and we write (Ω,Γ)∈𝒜(ℝ^N), if Ω⊂ℝ^N is a set of finite perimeter with |Ω|<+∞, and Γ⊂Ω^1 is a rectifiable set with ℋ^N-1(Γ)<+∞.

From a geometrical point of view, we think of (Ω,Γ)∈𝒜(ℝ^N) as the set Ω with an “inner” crack Γ: the domain is thus given essentially by the possibly irregular set Ω∖Γ. If Ω⊂ℝ^N is Lipschitz, then (Ω,∅)∈𝒜(ℝ^N). In order to give a meaning to the Robin boundary value problem for the Laplacian on an admissible geometry (Ω,Γ), and to define the associated eigenvalues, we need to define a functional space which can replace the Sobolev space W^1,2 on the (possibly irregular) set Ω∖Γ.

Given a rectifiable set K⊂ℝ^N with ℋ^N-1(K)<+∞, by definition we may write

K = N ∪ ⋃_i=0^∞ K_i,

where ℋ^N-1(N)=0, while for every i∈ℕ the Borel sets K_i are subsets of a C^1-manifold M_i of dimension N-1, and K_i∩K_j=∅ for i≠j. It is not restrictive, up to reducing M_i, to assume that M_i is orientable with associated normal vector field ν_i, and that two continuous trace operators from BV(ℝ^N) to L^1(M_i), the “left” and the “right” traces, are defined. For every u∈BV(ℝ^N), let us denote by γ_l^i(u), γ_r^i(u) the “left” and “right” traces of u on K_i, using the orientation associated to ν_i. By the general theory of BV functions it is known that

Du⌊K = ∑_i [γ^i_r(u)-γ^i_l(u)] ν_i ℋ^N-1⌊K_i.

We define global “left” and “right” traces on the full K by setting

γ_l(v) := ∑_i γ_l^i(v) and γ_r(v) := ∑_i γ_r^i(v).

The functional space we are looking for is the following.

Let (Ω,Γ)∈𝒜(ℝ^N). We set

Θ(Ω,Γ) := {u∈SBV(ℝ^N) : u=0 a.e. in Ω^c, ∇u∈L^2(ℝ^N), J_u ⊆̃ ∂^*Ω∪Γ and ∫_∂^*Ω∪Γ [γ^2_l(u)+γ^2_r(u)] dℋ^N-1 < +∞},

where γ_l(u), γ_r(u) are the left and right traces of u on ∂^*Ω∪Γ defined according to Remark <ref>.

Notice that if Ω⊂ℝ^N is open, bounded and with a Lipschitz boundary, then (Ω,∅)∈𝒜(ℝ^N) and Θ(Ω,∅) is simply given by the extension to zero outside Ω of the functions in the Sobolev space W^1,2(Ω).

The following lemma holds true.

Let (Ω,Γ)∈𝒜(ℝ^N) and u∈Θ(Ω,Γ). Then u∈L^2(Ω) with

‖u‖_L^2(Ω) ≤ C(‖∇u‖_L^2(Ω;ℝ^N) + ‖γ_l(u)‖_L^2(∂^*Ω∪Γ) + ‖γ_r(u)‖_L^2(∂^*Ω∪Γ)),

where C=C(N,|Ω|).

Clearly the space Θ(Ω,Γ) is closed under truncation. For every n∈ℕ let us consider

u_n := max{min{u,n},-n} ∈ Θ(Ω,Γ).

By the chain rule in BV (see <cit.>) we get u_n^2∈BV(ℝ^N) with

∇(u_n^2) = 2u_n∇u_n and γ_l/r(u_n^2) = γ_l/r(u_n)^2.

Using the embedding of BV(ℝ^N) into L^N/N-1(ℝ^N), and since u is supported in Ω, we can write, employing also the Cauchy inequality,

‖u_n‖^2_L^2(Ω) ≤ |Ω|^1/N ‖u_n^2‖_L^N/N-1(ℝ^N) ≤ C_N |Ω|^1/N |D(u_n^2)|(ℝ^N)
≤ C_N |Ω|^1/N (2∫_Ω |u_n||∇u_n| dx + ∫_∂^*Ω∪Γ |γ_l(u_n^2)-γ_r(u_n^2)| dℋ^N-1)
≤ 1/2 ‖u_n‖^2_L^2(Ω) + C(∫_Ω |∇u_n|^2 dx + ∫_∂^*Ω∪Γ [γ_l(u_n)^2+γ_r(u_n)^2] dℋ^N-1)
≤ 1/2 ‖u_n‖^2_L^2(Ω) + C(∫_Ω |∇u|^2 dx + ∫_∂^*Ω∪Γ [γ^2_l(u)+γ^2_r(u)] dℋ^N-1),

where C depends on |Ω| but not on n. The conclusion follows by letting n→+∞.

The computations of the previous proof show that indeed u∈L^2N/N-1(Ω) with

‖u‖_L^2N/N-1(Ω) ≤ C(‖∇u‖_L^2(Ω;ℝ^N) + ‖γ_l(u)‖_L^2(∂^*Ω∪Γ) + ‖γ_r(u)‖_L^2(∂^*Ω∪Γ)).

By generalizing to 𝒜(ℝ^N) the approach developed in <cit.> concerning open bounded sets with rectifiable topological boundary in dimension two, we can endow the space Θ(Ω,Γ) with a scalar product which turns it into a Hilbert space, a variant of which can be used to generalize the Robin boundary value problem. For every u,v∈Θ(Ω,Γ) let us set

(u,v)_Θ(Ω,Γ) := ∫_Ω ∇u·∇v dx + ∫_∂^*Ω∪Γ (γ_l(u)γ_l(v)+γ_r(u)γ_r(v)) dℋ^N-1.

The following property holds true.

Let (Ω,Γ)∈𝒜(ℝ^N). The space Θ(Ω,Γ) is a Hilbert space with respect to the scalar product (<ref>).
Moreover the embedding Θ(Ω,Γ)↪L^2(Ω) is compact.

In view of Lemma <ref>, we need simply to check the completeness of the scalar product. Let (u_n)_n∈ℕ be a Cauchy sequence in Θ(Ω,Γ) with respect to the scalar product (<ref>). Taking into account Lemma <ref>, there exist Φ∈L^2(ℝ^N;ℝ^N), u∈L^2(ℝ^N), α_l, α_r∈L^2(∂^*Ω∪Γ) such that Φ=u=0 a.e. on Ω^c,

u_n→u strongly in L^2(ℝ^N), ∇u_n→Φ strongly in L^2(ℝ^N;ℝ^N), γ_l/r(u_n)→α_l/r strongly in L^2(∂^*Ω∪Γ).

It is readily seen, thanks to Ambrosio's theorem (see <cit.>), that u∈SBV(ℝ^N) with Φ=∇u and J_u ⊆̃ ∂^*Ω∪Γ. Moreover, we have that (u_n)_n∈ℕ is a Cauchy sequence in BV(ℝ^N): indeed, thanks to Lemma <ref> we may write

‖u_n-u_m‖_BV(ℝ^N) ≤ ‖∇u_n-∇u_m‖_L^1(ℝ^N;ℝ^N) + ∫_∂^*Ω∪Γ [|γ_l(u_n-u_m)|+|γ_r(u_n-u_m)|] dℋ^N-1 + ‖u_n-u_m‖_L^1(Ω)
≤ C(‖∇u_n-∇u_m‖_L^2(ℝ^N;ℝ^N) + ‖γ_l(u_n-u_m)‖_L^2(∂^*Ω∪Γ) + ‖γ_r(u_n-u_m)‖_L^2(∂^*Ω∪Γ)) → 0.

Thanks to the continuity of the (locally defined) trace operators γ_l/r with respect to the BV norm, we infer that α_l/r=γ_l/r(u). We conclude that u∈Θ(Ω,Γ) and that u_n→u with respect to the scalar product (<ref>).

Let us check the compact embedding into L^2(Ω). Let (u_n)_n∈ℕ be a bounded sequence in Θ(Ω,Γ). In view of (<ref>), we have that (u_n)_n∈ℕ is bounded in L^2N/N-1(Ω). Since (u_n)_n∈ℕ is bounded also in BV(ℝ^N), and supported in Ω which has finite measure, up to a subsequence we have that u_n converges to some u strongly in L^1(Ω). From the previous bounds, we have that u∈L^2N/N-1(Ω) and, by interpolation, that the convergence is strong in L^2(Ω), so that the conclusion follows.

Notice that the space Θ(Ω,Γ) and its associated Hilbert structure given by the scalar product (<ref>) are intrinsically defined, i.e., they do not depend on the decomposition and on the orientation of ∂^*Ω∪Γ used to define the trace operators γ_l, γ_r according to Remark <ref>. Indeed, in view of the general theory for BV functions (see <cit.>), traces admit an a.e. pointwise characterization as Lebesgue values on semiballs (the direction being that of the chosen normal): as a consequence, every choice for the decomposition or the orientation will cause simply a possible switch between γ_l(u) and γ_r(u) (so that the condition defining Θ(Ω,Γ) remains the same), but will pair γ_l/r(u) with the corresponding γ_l/r(v), leaving the scalar product unchanged.

§.§ Generalization of the Robin-Laplacian boundary value problem

Let us fix β>0 and let (Ω,Γ)∈𝒜(ℝ^N). Given f∈L^2(Ω), we say that u∈Θ(Ω,Γ) is the solution of the Robin-Laplacian boundary value problem on (Ω,Γ) with coefficient β

-Δu = f in Ω∖Γ, ∂u/∂n + βu = 0 on ∂^*Ω∪Γ,

if

∀v∈Θ(Ω,Γ) : a_β(u,v) = ∫_Ω fv dx,

where a_β(u,v) is the bilinear form

a_β(u,v) := (∇u,∇v)_L^2(Ω;ℝ^N) + β(γ_l(u),γ_l(v))_L^2(∂^*Ω∪Γ) + β(γ_r(u),γ_r(v))_L^2(∂^*Ω∪Γ).

Thanks to the results of Section <ref>, the following proposition holds true.

Let (Ω,Γ)∈𝒜(ℝ^N). Then for every f∈L^2(Ω) there exists one and only one solution in Θ(Ω,Γ) to problem (<ref>). Moreover the resolvent operator from L^2(Ω) into Θ(Ω,Γ) is compact.

Existence and uniqueness follow by the Lax-Milgram theorem. Concerning the compactness of the resolvent operator, let (f_n)_n∈ℕ be a bounded sequence in L^2(Ω), and let u_n∈Θ(Ω,Γ) be the solution associated to f_n. Up to a subsequence we may assume

f_n ⇀ f weakly in L^2(Ω) and u_n ⇀ u weakly in Θ(Ω,Γ).

We immediately infer that u is the solution associated to the right hand side f.
Moreover, thanks to the compact embedding in L^2(Ω) given by Proposition <ref>, we have

u_n→u strongly in L^2(Ω).

We can thus write

a_β(u_n,u_n) = ∫_Ω f_n u_n dx → ∫_Ω fu dx = a_β(u,u),

which yields the strong convergence in Θ(Ω,Γ) of u_n to u, so that the proof is concluded.

§.§ Eigenvalues of the generalized Robin-Laplacian

Using the weak formulation of the Robin-Laplacian, we can immediately define eigenvalues and eigenfunctions: we say that λ̃ is an eigenvalue with associated eigenfunction u∈Θ(Ω,Γ) if u≠0 and

∀v∈Θ(Ω,Γ) : a_β(u,v) = λ̃∫_Ω uv dx.

In view of the Hilbert space structure of the generalized boundary value problem (<ref>), and of the properties of the associated resolvent operator (see Proposition <ref>), we deduce the existence of a sequence of eigenvalues λ̃_k,β(Ω,Γ)→+∞ which can be characterized (taking multiplicity into account) through the min-max formula

λ̃_k,β(Ω,Γ) = min_{V⊂Θ(Ω,Γ), dim V=k} max_{v∈V∖{0}} R_β(v),

where R_β(v) is the Rayleigh quotient

R_β(v) := [∫_Ω |∇v|^2 dx + β∫_∂^*Ω∪Γ [γ_l^2(v)+γ_r^2(v)] dℋ^N-1] / ∫_Ω v^2 dx.

It turns out that the space which provides the minimum in (<ref>) is given by V_k := span{u_1,…,u_k}, where for j≤k the function u_j is an L^2-normalized eigenfunction associated to λ̃_j,β(Ω,Γ).

Notice that, if Ω⊂ℝ^N is a Lipschitz domain,

λ_k,β(Ω) = λ̃_k,β(Ω,∅).

Moreover if u_1,…,u_k∈W^1,2(Ω) are the first k Robin eigenfunctions on Ω, then u_1,…,u_k, extended by zero outside Ω, belong to Θ(Ω,∅) and provide the first k eigenfunctions of the generalized formulation.

For every (Ω,Γ)∈𝒜(ℝ^N), the following generalized Faber-Krahn inequality holds true:

λ_1,β(B) ≤ λ̃_1,β(Ω,Γ),

where B is a ball such that |B|=|Ω|. In other words, balls are optimal domains for λ̃_1,β under a volume constraint. Indeed if u∈Θ(Ω,Γ) is the first eigenfunction of (Ω,Γ), then we can assume u≥0, so that u∈SBV^1/2(ℝ^N) according to the notation introduced in <cit.> (i.e., u^2∈SBV(ℝ^N)). Then

λ̃_1,β(Ω,Γ) = R_β(u) = [∫_Ω |∇u|^2 dx + β∫_∂^*Ω∪Γ [γ_l^2(u)+γ_r^2(u)] dℋ^N-1] / ∫_Ω u^2 dx
≥ [∫_ℝ^N |∇u|^2 dx + β∫_J_u [γ_l^2(u)+γ_r^2(u)] dℋ^N-1] / ∫_ℝ^N u^2 dx ≥ λ_1,β(B),

the last inequality following from <cit.>, so that (<ref>) holds true.

The standard scaling property for λ_k,β extends readily to λ̃_k,β: for t>0 we have

λ̃_k,β(tΩ,tΓ) = (1/t^2) λ̃_k,tβ(Ω,Γ).

In particular, we have that λ̃_k,β(tΩ,tΓ) < λ̃_k,β(Ω,Γ) for every t>1: we will refer to this inequality as the monotonicity under dilation of λ̃_k,β.

Let (Ω_1,Γ_1), (Ω_2,Γ_2)∈𝒜(ℝ^N) with Ω_1^1∩Ω_2^1=∅ and Ω_1^1, Ω_2^1 lying at positive distance. Then, using the weak equation for the eigenfunctions, it is readily seen that the spectrum of (Ω_1∪Ω_2, Γ_1∪Γ_2) is the union of the spectra of (Ω_1,Γ_1) and (Ω_2,Γ_2). In particular we have the standard formula

λ̃_k,β(Ω_1∪Ω_2, Γ_1∪Γ_2) = min_{i=0,…,k} max{λ̃_i,β(Ω_1,Γ_1), λ̃_k-i,β(Ω_2,Γ_2)},

where we assume λ̃_0,β(Ω_j,Γ_j):=0.

The abstract Hilbert space formulation of the problem yields the generalization to the present setting of the classical result on the boundedness of eigenfunctions.

Let (Ω,Γ)∈𝒜(ℝ^N), and let λ̃>0 be an eigenvalue for (<ref>). Then there exists a positive constant C, depending only on N, |Ω| and β, such that for every (L^2-normalized) eigenfunction u for λ̃ it holds

‖u‖_∞ ≤ Cλ̃^N.

Let u be an eigenfunction for λ̃, i.e., u≠0 and

∀v∈Θ(Ω,Γ) : a_β(u,v) = λ̃(u,v)_L^2(Ω).

For every 0<h<‖u‖_∞, the function

u_h := (u-h)_+ - (u+h)_- ∈ Θ(Ω,Γ)

is not identically null.
Using u_h as a test in equation (<ref>) we get

∫_Ω ∇u·∇u_h dx + β∫_∂^*Ω∪Γ [γ_l(u)γ_l(u_h)+γ_r(u)γ_r(u_h)] dℋ^N-1 = λ̃∫_Ω u u_h dx.

Since u_h is supported in A_h := {|u|>h} and ∇u_h = ∇u on A_h, one gets

∫_Ω |∇u_h|^2 dx + β∫_(∂^*Ω∪Γ)∩{u>h} [γ_l(u)γ_l(u-h)+γ_r(u)γ_r(u-h)] dℋ^N-1 + β∫_(∂^*Ω∪Γ)∩{u<-h} [γ_l(u)γ_l(u+h)+γ_r(u)γ_r(u+h)] dℋ^N-1 = λ̃∫_{u>h} u(u-h) dx + λ̃∫_{u<-h} u(u+h) dx.

Consequently, we obtain

∫_Ω |∇u_h|^2 dx + β∫_∂^*Ω∪Γ [γ_l^2(u_h)+γ_r^2(u_h)] dℋ^N-1 ≤ λ̃∫_Ω u_h^2 dx + λ̃h∫_Ω |u_h| dx
≤ λ̃∫_Ω u_h^2 dx + (λ̃/2)[∫_Ω u_h^2 dx + h^2|A_h|] = (3λ̃/2)∫_Ω u_h^2 dx + (h^2λ̃/2)|A_h|.

In view of the Hölder inequality and of Remark <ref> we get

‖u_h‖_L^2(Ω) ≤ ‖u_h‖_L^2N/N-1(Ω) |A_h|^1/2N ≤ C_1[∫_Ω |∇u_h|^2 dx + β∫_∂^*Ω∪Γ [γ_l^2(u_h)+γ_r^2(u_h)] dℋ^N-1]^1/2 |A_h|^1/2N,

so that, plugging this estimate into (<ref>),

∫_Ω |∇u_h|^2 dx + β∫_∂^*Ω∪Γ [γ_l^2(u_h)+γ_r^2(u_h)] dℋ^N-1 ≤ (3λ̃/2) C_1^2 [∫_Ω |∇u_h|^2 dx + β∫_∂^*Ω∪Γ [γ_l^2(u_h)+γ_r^2(u_h)] dℋ^N-1] |A_h|^1/N + (h^2λ̃/2)|A_h|.

Here C_1 depends on N, |Ω| and β. Let us consider h_0>0 such that

(3λ̃/2) C_1^2 ((1/h_0)∫_Ω |u| dx)^1/N = 1/2.

If ‖u‖_∞ ≤ h_0, then (<ref>) immediately follows. Let us thus assume that ‖u‖_∞ > h_0. Then for h_0 ≤ h < ‖u‖_∞ we have

|A_h| ≤ (1/h)∫_Ω |u| dx,

so that, in view of the choice of h_0, the first term in the right hand side of (<ref>) can be absorbed by the left hand side, leading to

∫_Ω |∇u_h|^2 dx + β∫_∂^*Ω∪Γ [γ_l(u_h)^2+γ_r(u_h)^2] dℋ^N-1 ≤ h^2λ̃|A_h|.

In view of (<ref>) we get

‖u_h‖_L^2(Ω) ≤ C_1 h√(λ̃)|A_h|^{1/2+1/2N},

so that by applying the Hölder inequality we deduce the key estimate

‖u_h‖_L^1(Ω) ≤ C_1 h√(λ̃)|A_h|^{1+1/2N}.

By setting

g(h) := ∫_Ω |u_h| dx = ‖u_h‖_L^1(Ω),

one has g'(h)=-|A_h| for a.e. h>0, and (<ref>) becomes

g(h) ≤ C_1 h√(λ̃)(-g'(h))^{1+1/2N},

so that

1/h^{1-1/(2N+1)} ≤ -(C_1√(λ̃))^{2N/(2N+1)} g'(h)/g(h)^{1-1/(2N+1)}.

This last inequality shows that ‖u‖_∞<+∞. Indeed, if this were not the case, integrating from h_0 to +∞, the left hand side would diverge while the right hand side converges. Integrating from h_0 to ‖u‖_∞ we deduce

‖u‖_∞^{1/(2N+1)} - h_0^{1/(2N+1)} ≤ (C_1√(λ̃))^{2N/(2N+1)} ‖u‖_1^{1/(2N+1)},

from which inequality (<ref>) follows. The proof is thus concluded.

§ THE MAIN RESULTS

Being interested in a perimeter constraint for our optimization problem, we need to generalize the notion of perimeter to admissible configurations in 𝒜(ℝ^N).

For every (Ω,Γ)∈𝒜(ℝ^N) we set

Per(Ω,Γ) := Per(Ω) + 2ℋ^N-1(Γ),

where Per(Ω) denotes the usual perimeter of Ω in ℝ^N.

The definition of the generalized perimeter of the configuration (Ω,Γ) is based on the idea that the inner crack Γ is seen as a degenerated hole or a degenerated inner fold of the outer boundary, so that its contribution to the perimeter is given by twice its surface measure ℋ^N-1(Γ).

The first main result of the paper is the following.

Let p>0. The problem

min{λ̃_k,β(Ω,Γ) : (Ω,Γ)∈𝒜(ℝ^N) with Per(Ω,Γ)=p}

is well posed. Moreover every minimizer (Ω,Γ) is such that Ω is bounded and

λ̃_k,β(Ω,Γ) = inf{λ_k,β(A) : A⊂ℝ^N is a bounded Lipschitz open set with Per(A)=p}.

For k=1, thanks to the Faber-Krahn type inequality (<ref>) and to the monotonicity of the eigenvalues under dilations, it is readily seen that the only minimizers of both problems are the couples (B,∅), where B are balls.

Thanks to Theorem <ref>, we can handle more general problems involving several eigenvalues at the same time. Let ℓ∈ℕ, ℓ≥1, and let f:]0,+∞[^ℓ→]0,+∞[ be a Lipschitz continuous function such that the following items hold true.

(f1) f(y)→+∞ as |y|→+∞;

(f2) There exists C>0 such that for every y=(y_1,…,y_ℓ), y'=(y'_1,…,y'_ℓ)∈]0,+∞[^ℓ with y_i≥y'_i for every i=1,…,ℓ it holds

f(y) ≥ f(y') + C|y-y'|.

A typical example for f is given by f(y):=y_1+…+y_ℓ, or more generally f(y):=(y_1^p+…+y_ℓ^p)^1/p with p>1.
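For the prototype f(y)=y_1+…+y_ℓ, both conditions can be verified in one line (a check of ours, not from the paper): (f1) is clear, while for y_i≥y'_i the increments d_i:=y_i-y'_i are nonnegative, so

f(y)-f(y') = ∑_i d_i ≥ (∑_i d_i^2)^{1/2} = |y-y'|,

i.e. (f2) holds with C=1.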
Given k_1,…,k_ℓ∈ℕ∖{0}, let us set

F(Ω,Γ) := f(λ̃_k_1,β(Ω,Γ),…,λ̃_k_ℓ,β(Ω,Γ)).

The following result holds true.

Let p>0, and let f:]0,+∞[^ℓ→]0,+∞[ satisfy (f1) and (f2). Then the problem

min{F(Ω,Γ) : (Ω,Γ)∈𝒜(ℝ^N) with Per(Ω,Γ)=p}

is well posed. Moreover every minimizer (Ω,Γ) is such that Ω is bounded and

F(Ω,Γ) = inf{F(A,∅) : A⊂ℝ^N is a bounded Lipschitz open set with Per(A)=p}.

We can also treat the problem where the perimeter is involved as a penalization term, i.e.,

min{F(Ω,Γ) + Λ Per(Ω,Γ) : (Ω,Γ)∈𝒜(ℝ^N)},

where Λ>0. Also problem (<ref>) is well posed, every minimizer is bounded, and the minimum value is given by the infimum of the values achieved on Lipschitz regular sets (see Remark <ref>).

In the case β<0, eigenvalues for regular domains are still well defined, and the interesting associated variational problem is their maximization, both under the perimeter and the measure constraint. As highlighted in <cit.>, the problem can be framed within the class of measurable sets of finite perimeter, without taking into account inner cracks. The reason for that is the negative sign of the surface energies: roughly speaking, one can take the eigenfunctions of a couple (Ω,∅) as test functions for (Ω,Γ), obtaining a lower value of the functional and so a worse competitor for the associated maximization.

§ COMPACTNESS, LOWER SEMICONTINUITY AND APPROXIMATION RESULTS

In this section we collect some technical results which are fundamental for our analysis: in particular we are interested in compactness properties of admissible configurations, lower semicontinuity results for the associated eigenvalues, and their approximation through regular sets.

§.§ Compactness and lower semicontinuity properties

The following result holds true.

Let (Ω_n,Γ_n)∈𝒜(ℝ^N) be such that

Per(Ω_n,Γ_n) ≤ C

for some positive constant C>0 independent of n. Assume that

1_Ω_n→1_Ω strongly in L^1(ℝ^N)

for some set of finite perimeter Ω⊂ℝ^N. Then there exists Γ⊂ℝ^N rectifiable such that (Ω,Γ)∈𝒜(ℝ^N) and the following items hold true.

(i) We have

Per(Ω,Γ) ≤ lim inf_n→+∞ Per(Ω_n,Γ_n).

(ii) If u_n∈Θ(Ω_n,Γ_n) with

∫_Ω_n |∇u_n|^2 dx + ∫_∂^*Ω_n∪Γ_n [γ_l^2(u_n)+γ_r^2(u_n)] dℋ^N-1 ≤ C,

then there exists u∈Θ(Ω,Γ) such that, up to subsequences,

u_n→u strongly in L^2(ℝ^N), ∇u_n ⇀ ∇u weakly in L^2(ℝ^N;ℝ^N),

and

∫_∂^*Ω∪Γ [γ_l^2(u)+γ_r^2(u)] dℋ^N-1 ≤ lim inf_n→+∞ ∫_∂^*Ω_n∪Γ_n [γ_l^2(u_n)+γ_r^2(u_n)] dℋ^N-1.

We will make use of the notion of σ^2-convergence of rectifiable sets introduced in <cit.> (see Section <ref>). From

Per(Ω_n) + 2ℋ^N-1(Γ_n) = Per(Ω_n,Γ_n) ≤ C

and Theorem <ref>, employing also a diagonal argument, we deduce that there exists a rectifiable set K⊂ℝ^N such that, up to a subsequence (not relabelled), for every D⊂ℝ^N open and bounded we have

(∂^*Ω_n∪Γ_n)∩D → K∩D in the sense of σ^2-convergence.

Since J_1_Ω_n∩D ⊆̃ (∂^*Ω_n∪Γ_n)∩D and ∇1_Ω_n = ∇1_Ω = 0, in view of (<ref>) and of property (a) in Definition <ref> of σ^2-convergence, we deduce that

∂^*Ω∩D ⊆̃ K∩D.

Since D is arbitrary, we infer ∂^*Ω ⊆̃ K. Let us now decompose K as

K = (K∩Ω^0) ∪ ∂^*Ω ∪ (K∩Ω^1)

and let us set

Γ := K∩Ω^1,

so that (Ω,Γ)∈𝒜(ℝ^N). We divide the proof into several steps.

Step 1. We claim that for every D⊂ℝ^N open and bounded, there exist v, v_n∈SBV(D) with ‖v‖_∞, ‖v_n‖_∞≤C, v=v_n=0 a.e. outside Ω and Ω_n respectively, ∇v, ∇v_n∈L^2(D;ℝ^N), J_v ⊆̃ (∂^*Ω∪Γ)∩D, J_v_n ⊆̃ (∂^*Ω_n∪Γ_n)∩D, such that

v_n→v strongly in L^2(D) and ∇v_n ⇀ ∇v weakly in L^2(D;ℝ^N).

By definition of σ^2-convergence there exist w, w_n∈SBV(D) with ‖w‖_∞, ‖w_n‖_∞≤C, J_w ≃ K∩D, J_w_n ⊆̃ (∂^*Ω_n∪Γ_n)∩D, such that

w_n→w strongly in L^1(D) and ∇w_n ⇀ ∇w weakly in L^2(D;ℝ^N).

The first convergence is indeed also a convergence in L^2(D), in view of the uniform bound on the L^∞-norms.
For ε>0 let us set

v := (w+ε)1_Ω∩D and v_n := (w_n+ε)1_Ω_n∩D.

Clearly we have

Γ∩D ⊆̃ J_{w1_Ω∩D} ⊆̃ (∂^*Ω∪Γ)∩D,

and so, for a.e. ε>0, we deduce (see Remark <ref>)

J_v ⊆̃ J_{w1_Ω∩D} ∪ J_{1_Ω∩D} ⊆̃ (∂^*Ω∪Γ)∩D.

Since J_v_n ⊆̃ (∂^*Ω_n∪Γ_n)∩D, v_n→v strongly in L^2(D) and ∇v_n ⇀ ∇v weakly in L^2(D;ℝ^N), the claim follows by choosing ε outside a negligible set.

Step 2. Let us check item (i). Let D⊂ℝ^N be open and bounded, and let ε>0. Let us consider the functions on D

ψ_ε := 1_Ω + εv and ψ_n,ε := 1_Ω_n + εv_n,

where v, v_n are the functions given by Step 1. For a.e. ε>0 we have that

J_ψ_ε ⊆̃ J_1_Ω ∪ J_v ⊆̃ (∂^*Ω∪Γ)∩D and J_ψ_n,ε ⊆̃ J_1_Ω_n ∪ J_v_n ⊆̃ (∂^*Ω_n∪Γ_n)∩D.

Since

ψ_n,ε→ψ_ε strongly in L^1(D) and ∇ψ_n,ε ⇀ ∇ψ_ε weakly in L^2(D;ℝ^N),

lower semicontinuity in SBV entails

∫_J_ψ_ε [γ_l^2(ψ_ε)+γ_r^2(ψ_ε)] dℋ^N-1 ≤ lim inf_n→+∞ ∫_J_ψ_n,ε [γ_l^2(ψ_n,ε)+γ_r^2(ψ_n,ε)] dℋ^N-1.

Taking into account bound (<ref>), for a.e. ε>0 small enough we deduce

ℋ^N-1(∂^*Ω∩D) + 2ℋ^N-1(Γ∩D) - Cε ≤ ∫_J_ψ_ε [γ_l^2(ψ_ε)+γ_r^2(ψ_ε)] dℋ^N-1
≤ lim inf_n→+∞ ∫_J_ψ_n,ε [γ_l^2(ψ_n,ε)+γ_r^2(ψ_n,ε)] dℋ^N-1
≤ lim inf_n→+∞ [ℋ^N-1(∂^*Ω_n∩D) + 2ℋ^N-1(Γ_n∩D)] + Cε
≤ lim inf_n→+∞ Per(Ω_n,Γ_n) + Cε,

so item (i) follows by letting ε→0^+ and letting D invade the whole ℝ^N.

Step 3. Let us come to item (ii). Since

ℋ^N-1(J_u_n) ≤ ℋ^N-1(∂^*Ω_n∪Γ_n) ≤ Per(Ω_n,Γ_n) ≤ C,

the uniform bound on the Robin energy (<ref>) and the convergence (<ref>) yield

|Du_n|(ℝ^N) = ∫_Ω_n |∇u_n| dx + ∫_J_u_n |γ_l(u_n)-γ_r(u_n)| dℋ^N-1
≤ ‖∇u_n‖_L^2(ℝ^N) |Ω_n|^1/2 + ∫_J_u_n [|γ_l(u_n)|+|γ_r(u_n)|] dℋ^N-1
≤ ‖∇u_n‖_L^2(ℝ^N) |Ω_n|^1/2 + ℋ^N-1(J_u_n) + (1/2)∫_J_u_n (γ_l(u_n)^2+γ_r(u_n)^2) dℋ^N-1 ≤ C_1

for some C_1>0 independent of n. Taking into account Remark <ref>, we deduce that there exists u∈BV(ℝ^N) with

u=0 a.e. in Ω^c and u_n→u strongly in L^2(ℝ^N).

Let us fix D⊂ℝ^N open and bounded. In view of Ambrosio's theorem we infer that u∈SBV(D) with

∇u_n ⇀ ∇u weakly in L^2(D;ℝ^N).

Let us consider for every M>0 the truncated functions

u_n,M := max{min{u_n,M},-M} and u_M := max{min{u,M},-M}.

Clearly

u_n,M→u_M strongly in L^2(D), and ∇u_n,M ⇀ ∇u_M weakly in L^2(D;ℝ^N).

Since J_u_n,M ⊆̃ (∂^*Ω_n∪Γ_n)∩D, from the properties of σ^2-convergence and since u_M=0 a.e. on Ω^c∩D, we infer

J_u_M ⊆̃ (∂^*Ω∪Γ)∩D.

In view of the arbitrariness of M and D we conclude that u∈SBV(ℝ^N) with

∇u_n ⇀ ∇u weakly in L^2(ℝ^N;ℝ^N) and J_u ⊆̃ ∂^*Ω∪Γ.

Let us check (<ref>). For every ε>0 let us set

w_n := u_n + εv_n and w := u + εv,

where v_n, v are given by Step 1. For a.e. ε>0 we have

J_w = J_{u+εv} ⊆̃ J_u∪J_v ⊆̃ (∂^*Ω∪Γ)∩D and J_w_n = J_{u_n+εv_n} ⊆̃ (∂^*Ω_n∪Γ_n)∩D.

By lower semicontinuity in SBV we may write

∫_(∂^*Ω∪Γ)∩D [γ_l^2(u+εv)+γ_r^2(u+εv)] dℋ^N-1 = ∫_J_w [γ_l^2(w)+γ_r^2(w)] dℋ^N-1
≤ lim inf_n→+∞ ∫_J_w_n [γ_l^2(w_n)+γ_r^2(w_n)] dℋ^N-1
≤ lim inf_n→+∞ ∫_(∂^*Ω_n∪Γ_n)∩D [γ_l^2(u_n+εv_n)+γ_r^2(u_n+εv_n)] dℋ^N-1.

Since the functions v, v_n are uniformly bounded in L^∞(D), and in view of the perimeter bound (<ref>), by letting ε→0^+, and since D is arbitrary, we get

∫_∂^*Ω∪Γ [γ_l^2(u)+γ_r^2(u)] dℋ^N-1 ≤ lim inf_n→+∞ ∫_∂^*Ω_n∪Γ_n [γ_l^2(u_n)+γ_r^2(u_n)] dℋ^N-1 ≤ C.

Thanks to (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we conclude that u∈Θ(Ω,Γ), so that item (ii) is proved.

In the previous proof we used the following result: if D⊆ℝ^N is open and u, v∈SBV(D) with ℋ^N-1(J_u), ℋ^N-1(J_v)<+∞, then for a.e. ε>0

J_{u+εv} ≃ J_u∪J_v.

For a proof of this fact we refer to <cit.>.

A key result for our analysis is the following lower semicontinuity result for λ̃_k,β along converging configurations according to Theorem <ref>.

Let (Ω_n,Γ_n)∈𝒜(ℝ^N) converge to (Ω,Γ)∈𝒜(ℝ^N) in the sense of Theorem <ref>. Then

λ̃_k,β(Ω,Γ) ≤ lim inf_n→+∞ λ̃_k,β(Ω_n,Γ_n).

Let us assume λ̃_k,β(Ω_n,Γ_n)≤C and let us consider

V_n := span{u_n,1,…,u_n,k} ⊂ Θ(Ω_n,Γ_n) such that λ̃_k,β(Ω_n,Γ_n) = max_V_n R_β.

Without loss of generality we can consider the u_n,j to form an L^2-orthonormal family.
Then we get, for j=1,…,k,

∫_Ω_n |∇u_n,j|^2 dx + ∫_∂^*Ω_n∪Γ_n [γ_l^2(u_n,j)+γ_r^2(u_n,j)] dℋ^N-1 ≤ C̃.

By Theorem <ref> we deduce that there exists u_j∈Θ(Ω,Γ) such that

u_n,j→u_j strongly in L^2(ℝ^N), ∇u_n,j ⇀ ∇u_j weakly in L^2(ℝ^N;ℝ^N),

with

∫_∂^*Ω∪Γ [γ_l^2(u_j)+γ_r^2(u_j)] dℋ^N-1 ≤ lim inf_n→+∞ ∫_∂^*Ω_n∪Γ_n [γ_l^2(u_n,j)+γ_r^2(u_n,j)] dℋ^N-1.

Clearly {u_j : j=1,…,k} forms an orthonormal family in L^2, so that V := span{u_1,…,u_k} is a k-dimensional subspace of Θ(Ω,Γ). Using again Theorem <ref>, for every a_j∈ℝ we have

R_β(∑_j a_j u_j) ≤ lim inf_n R_β(∑_j a_j u_n,j) ≤ lim inf_n max_V_n R_β = lim inf_n λ̃_k,β(Ω_n,Γ_n),

which entails

λ̃_k,β(Ω,Γ) ≤ max_V R_β ≤ lim inf_n λ̃_k,β(Ω_n,Γ_n),

and the conclusion follows.

§.§ Approximation through regular domains

The following result concerns the approximation of bounded configurations in 𝒜(ℝ^N) through regular domains.

Let (Ω,Γ)∈𝒜(ℝ^N) be such that Ω is bounded. Then there exists a sequence of bounded regular open sets (Ω_n)_n∈ℕ such that

lim sup_n→+∞ λ_k,β(Ω_n) ≤ λ̃_k,β(Ω,Γ) and lim sup_n→+∞ ℋ^N-1(∂Ω_n) ≤ Per(Ω,Γ).

Let u_1,…,u_k be the first k eigenfunctions of (Ω,Γ). It is not restrictive to assume Γ ≃ J_{(u_1,…,u_k)}: indeed, by employing the min-max characterization of λ̃_k,β we have

λ̃_k,β(Ω, J_{(u_1,…,u_k)}) ≤ max_{span{u_1,…,u_k}} R_β = λ̃_k,β(Ω,Γ).

We divide the proof into several steps.

Step 1: Approximation from inside. Let us approximate Ω with smooth sets “from inside” using Proposition <ref> with the choice μ := ℋ^N-1⌊(∂^*Ω∪Γ). We find smooth open sets Ω_n⊂ℝ^N such that

|Ω_n Δ Ω|→0, Per(Ω_n)→Per(Ω),

with

ℋ^N-1(∂^*Ω∩Ω_n)→0, ℋ^N-1(Γ∖Ω_n)→0, ℋ^N-1(∂Ω_n∖Ω^1)→0.

If D is open and bounded and such that Ω⊂⊂D, we can assume that also Ω_n⊂⊂D. If we set

Γ_n := (Γ∩Ω_n) ∪ (∂^*Ω∩Ω_n),

then clearly (Ω_n,Γ_n)∈𝒜(ℝ^N) with

lim_n→+∞ Per(Ω_n,Γ_n) = Per(Ω,Γ).

We claim that

lim sup_n→+∞ λ̃_k,β(Ω_n,Γ_n) ≤ λ̃_k,β(Ω,Γ).

In order to prove this, we first show that (Ω,Γ) is the limit configuration of the sequence (Ω_n,Γ_n) according to Theorem <ref> and that, in addition, if u∈Θ(Ω,Γ)∩L^∞(ℝ^N), then u_n := u1_Ω_n∈Θ(Ω_n,Γ_n) with

lim_n→+∞ ∫_∂^*Ω_n∪Γ_n [γ^2_l(u_n)+γ^2_r(u_n)] dℋ^N-1 = ∫_∂^*Ω∪Γ [γ^2_l(u)+γ^2_r(u)] dℋ^N-1.

Indeed, let (Ω,K) be a limit configuration of (Ω_n,Γ_n) up to a subsequence. Since u_n→u strongly in L^2(ℝ^N) and satisfies the bound (<ref>), we infer that J_u ⊆̃ K. By choosing u equal to the eigenfunctions u_1,…,u_k of (Ω,Γ), and since we are assuming that Γ is the union of their jump sets, we deduce Γ ⊆̃ K. From

Per(Ω,K) ≤ lim inf_n→+∞ Per(Ω_n,Γ_n) = Per(Ω,Γ),

we get K ≃ Γ. The lower semicontinuity (<ref>) entails

∫_∂^*Ω∪Γ [γ_l(u)^2+γ_r(u)^2] dℋ^N-1 ≤ lim inf_n→+∞ ∫_∂^*Ω_n∪Γ_n [γ_l^2(u_n)+γ_r^2(u_n)] dℋ^N-1,

while the same argument applied to the function

w := √((‖u‖_∞+1)^2 - u^2) 1_Ω

yields

(‖u‖_∞+1)^2 Per(Ω,Γ) - ∫_∂^*Ω∪Γ [γ^2_l(u)+γ^2_r(u)] dℋ^N-1 ≤ lim inf_n→+∞ [(‖u‖_∞+1)^2 Per(Ω_n,Γ_n) - ∫_∂^*Ω_n∪Γ_n [γ^2_l(u_n)+γ_r^2(u_n)] dℋ^N-1],

so that the convergence of the perimeters (<ref>) entails

lim sup_n→+∞ ∫_∂^*Ω_n∪Γ_n [γ^2_l(u_n)+γ^2_r(u_n)] dℋ^N-1 ≤ ∫_∂^*Ω∪Γ [γ^2_l(u)+γ^2_r(u)] dℋ^N-1,

which together with (<ref>) yields (<ref>). Let us come to claim (<ref>) concerning the eigenvalues. Recall that

λ̃_k,β(Ω,Γ) = max_V_k R_β = R_β(u_k),

where R_β is the Rayleigh quotient (<ref>) and V_k is the space generated by the first k eigenfunctions u_1,…,u_k. We deduce that u_i,n := u_i 1_Ω_n∈Θ(Ω_n,Γ_n) for i=1,…,k, with

u_i,n→u_i strongly in L^2(ℝ^N), ∇u_i,n→∇u_i strongly in L^2(ℝ^N;ℝ^N),

and for every a_1,…,a_k∈ℝ

lim_n→+∞ ∫_∂^*Ω_n∪Γ_n [γ_l^2(∑_i a_i u_i,n)+γ_r^2(∑_i a_i u_i,n)] dℋ^N-1 = ∫_∂^*Ω∪Γ [γ_l^2(∑_i a_i u_i)+γ_r^2(∑_i a_i u_i)] dℋ^N-1.

The convergence of the surface energies follows from (<ref>) since the eigenfunctions are in L^∞. If we set

V_k^n := span{u_1,n,…,u_k,n},

we get that V_k^n⊂Θ(Ω_n,Γ_n) is k-dimensional for n large and

lim sup_n→+∞ max_V_k^n R_β ≤ max_V_k R_β.

Indeed we can assume

max_V_k^n R_β = R_β(∑_i a_i,n u_i,n)

for some a_i,n→a_i with |(a_1,…,a_k)|=1.
Then (<ref>), (<ref>) and (<ref>), together with the uniform bound in L^∞ and the fact that ℋ^N-1(∂^*Ω_n∪Γ_n)≤C, imply that

R_β(∑_i a_i,n u_i,n) → R_β(∑_i a_i u_i) ≤ max_V_k R_β.

We can thus write

lim sup_n→+∞ λ̃_k,β(Ω_n,Γ_n) ≤ lim sup_n→+∞ max_V_k^n R_β ≤ max_V_k R_β = λ̃_k,β(Ω,Γ),

and claim (<ref>) follows.

Step 2. In view of Step 1, it is not restrictive to assume Ω⊂ℝ^N open with smooth boundary. We apply Theorem <ref> to approximate the vector valued function on Ω

u := (u_1,…,u_k) ∈ SBV(Ω;ℝ^k)∩L^∞(Ω;ℝ^k),

where u_1,…,u_k are the first k eigenfunctions of (Ω,Γ). There exists v^n=(v^n_1,…,v^n_k)⊂L^2(Ω;ℝ^k) with ‖v^n‖_∞≤‖u‖_∞ such that

v^n→u strongly in L^2(Ω;ℝ^k), ∇v^n→∇u strongly in L^2(Ω;ℝ^kN), J_v^n polyhedral in Ω, v^n∈W^m,∞(Ω∖J̄_v^n;ℝ^k) for every m≥1,

and, for any upper semicontinuous function φ: Ω×ℝ^k×ℝ^k×S^N-1→[0,+∞[ locally bounded near the boundary and any open set A⊆Ω,

lim sup_n→+∞ ∫_J_v^n∩A φ(x,γ_l(v^n),γ_r(v^n),ν_J_v^n) dℋ^N-1 ≤ ∫_J_u∩A φ(x,γ_l(u),γ_r(u),ν_J_u) dℋ^N-1.

Since the edges of the (N-1)-dimensional simplexes have 2-capacity zero, we can assume that J_v^n is composed of a finite family of disjoint simplexes compactly contained in Ω. Notice that (choosing φ=1)

lim sup_n→+∞ ℋ^N-1(J_v^n) ≤ ℋ^N-1(J_u) = ℋ^N-1(Γ).

Moreover

|Dv^n|(Ω)→|Du|(Ω).

This follows from (<ref>) and (<ref>) with the choice φ(x,a,b,ν)=|a-b| and A=Ω. As a consequence we obtain the convergence of the associated traces on ∂Ω, since the trace operator is continuous under strong L^1 convergence together with the convergence of the total variation (the so called strict convergence in BV): the convergence of the traces holds also in L^2 in view of the uniform bound on ‖v^n‖_∞. By choosing φ(x,a,b,ν)=|α·a|^2+|α·b|^2, where α∈ℝ^k, and using the convergence of the traces on ∂Ω, we may write

lim sup_n ∫_∂Ω∪J_v^n [γ_l^2(α_1 v^n_1+…+α_k v^n_k)+γ_r^2(α_1 v^n_1+…+α_k v^n_k)] dℋ^N-1 ≤ ∫_∂Ω∪Γ [γ_l^2(α_1 u_1+…+α_k u_k)+γ_r^2(α_1 u_1+…+α_k u_k)] dℋ^N-1,

the inequality being uniform in α for |α|=1: this is due to the uniform bound in L^∞ for v^n together with (<ref>). In view of the geometric structure of J_v^n we can write

J̄_v^n = ⋂_k A^k_n,

where (A_n^k)_k∈ℕ is a decreasing sequence of smooth open sets compactly contained in Ω with

lim_k→+∞ ℋ^N-1(∂A^k_n) = 2ℋ^N-1(J_v^n).

In view of the regularity of v^n outside J̄_v^n, by using a diagonal argument we find k=k_n such that

lim sup_n ∫_∂Ω∪∂A_n^k_n γ_l^2(α_1 v^n_1+…+α_k v^n_k) dℋ^N-1 ≤ ∫_∂Ω∪Γ [γ_l^2(α_1 u_1+…+α_k u_k)+γ_r^2(α_1 u_1+…+α_k u_k)] dℋ^N-1,

the inequality being uniform in α∈ℝ^k with |α|=1, and

lim sup_n→+∞ ℋ^N-1(∂A_n^k_n) ≤ 2 lim sup_n→+∞ ℋ^N-1(J_v^n) ≤ 2ℋ^N-1(Γ).

By considering the smooth set Ω_n := Ω∖A_n^k_n we have

lim sup_n→+∞ ℋ^N-1(∂Ω_n) = ℋ^N-1(∂Ω) + lim sup_n ℋ^N-1(∂A_n^k_n) ≤ ℋ^N-1(∂Ω) + 2ℋ^N-1(Γ) = Per(Ω,Γ).

Then, in view of (<ref>), by using the same arguments of Step 1 we infer that

lim sup_n→+∞ λ_k,β(Ω_n) ≤ λ̃_k,β(Ω,Γ),

and the proof is concluded.

Notice that the previous construction shows that indeed

lim sup_n→+∞ λ_h,β(Ω_n) ≤ λ̃_h,β(Ω,Γ)

for every 1≤h≤k, as the function u involved in Step 2 is such that its components (u_1,…,u_k) are the first k eigenfunctions of (Ω,Γ).

§ PROOF OF THE MAIN RESULTS

In this section we provide the proof of the main results of the paper.

§.§ Proof of Theorem <ref>

Let us start with the following general inequality, which is based on a cutting argument used in <cit.>.

Let (Ω,Γ)∈𝒜(ℝ^N), and for t∈ℝ set

Ω_t := Ω∩{x_1<t} and Γ_t := Γ∩{x_1<t}.

Then for a.e. t large enough we have

λ̃_k,β(Ω_t,Γ_t) ≤ λ̃_k,β(Ω,Γ) + Cℋ^N-1(Ω∩{x_1=t}),

where C>0 is independent of t.

We can assume |Ω∖Ω_t|>0 for every t∈ℝ, i.e., Ω is unbounded in the positive x_1 direction.
Let u_1,…,u_k∈Θ(Ω,Γ) be the first k eigenfunctions of (Ω,Γ), which we can assume to form an L^2-orthonormal family. For every t∈ℝ, we define the functions u_j,t∈Θ(Ω_t,Γ_t) by setting, for each j=1,…,k,

u_j,t := u_j 1_{x_1<t}.

The functions u_j,t are linearly independent for t sufficiently large. Let us consider coefficients α_1,t,…,α_k,t∈ℝ with

∑_j=1^k α^2_j,t = 1

such that the function

U_t := ∑_j=1^k α_j,t u_j,t

realizes the maximum of the Rayleigh quotient R_β on span{u_1,t,…,u_k,t} (with a slight abuse of notation, in the integrals over Ω∖Ω_t below we still write U_t for ∑_j α_j,t u_j). We get

λ̃_k,β(Ω_t,Γ_t) ≤ [∫_Ω_t |∇U_t|^2 dx + β∫_∂^*Ω_t∪Γ_t U_t^2 dℋ^N-1] / ∫_Ω_t U_t^2 dx = [∫_Ω_t |∇U_t|^2 dx + β∫_∂^*Ω_t∪Γ_t U_t^2 dℋ^N-1] / [1 - ∫_Ω∖Ω_t U_t^2 dx],

in view of the L^2-orthonormality of u_1,…,u_k. Moreover, since

∫_Ω∖Ω_t U_t^2 dx ≤ 2∑_j=1^k ∫_Ω∖Ω_t u_j^2 dx → 0

as t→+∞, we can assume that for t sufficiently large

1/(1 - ∫_Ω∖Ω_t U_t^2 dx) ≤ 1 + 2∫_Ω∖Ω_t U_t^2 dx ≤ 2.

We thus obtain the following estimates:

λ̃_k,β(Ω_t,Γ_t) ≤ (1 + 2∫_Ω∖Ω_t U_t^2 dx)(∫_Ω_t |∇U_t|^2 dx + β∫_∂^*Ω_t∪Γ_t U_t^2 dℋ^N-1)
≤ (1 + 2∫_Ω∖Ω_t U_t^2 dx)·(λ̃_k,β(Ω,Γ) - ∫_Ω∖Ω_t |∇U_t|^2 dx - β∫_(∂^*Ω∪Γ)∩{x_1>t} U_t^2 dℋ^N-1 + β∫_{x_1=t} U_t^2 dℋ^N-1)
≤ λ̃_k,β(Ω,Γ) + 2λ̃_k,β(Ω,Γ)∫_Ω∖Ω_t U_t^2 dx - ∫_Ω∖Ω_t |∇U_t|^2 dx - β∫_(∂^*Ω∪Γ)∩{x_1>t} U_t^2 dℋ^N-1 + 2β∫_{x_1=t} U_t^2 dℋ^N-1.

Let us consider the restrictions of the functions u_1,…,u_k to Ω∖Ω_t and let us reflect them across the hyperplane {x_1=t}. We obtain functions w_j,t∈Θ(A_t,K_t), where A_t is obtained by symmetrizing Ω∖{x_1≤t}, while K_t is obtained by symmetrizing Γ∖{x_1≤t}. Setting W_t := ∑_j=1^k α_j,t w_j,t, in view of the Faber-Krahn inequality (<ref>) we may write

λ_1,β(B(t)) ≤ [∫_A_t |∇W_t|^2 dx + β∫_∂^*A_t∪K_t W_t^2 dℋ^N-1] / ∫_A_t W_t^2 dx
≤ [2∫_Ω∖Ω_t |∇U_t|^2 dx + 2β∫_(∂^*Ω∪Γ)∩{x_1>t} U_t^2 dℋ^N-1] / [2∫_Ω∖Ω_t U_t^2 dx]
= [∫_Ω∖Ω_t |∇U_t|^2 dx + β∫_(∂^*Ω∪Γ)∩{x_1>t} U_t^2 dℋ^N-1] / ∫_Ω∖Ω_t U_t^2 dx,

where B(t) is the ball of ℝ^N such that |B(t)|=|A_t|=2|Ω∖Ω_t|, so that

∫_Ω∖Ω_t |∇U_t|^2 dx + β∫_(∂^*Ω∪Γ)∩{x_1>t} U_t^2 dℋ^N-1 ≥ λ_1,β(B(t)) ∫_Ω∖Ω_t U_t^2 dx.

Then, looking at the estimate

2λ̃_k,β(Ω,Γ)∫_Ω∖Ω_t U_t^2 dx - ∫_Ω∖Ω_t |∇U_t|^2 dx - β∫_(∂^*Ω∪Γ)∩{x_1>t} U_t^2 dℋ^N-1 ≤ [2λ̃_k,β(Ω,Γ) - λ_1,β(B(t))] ∫_Ω∖Ω_t U_t^2 dx,

we deduce that the right hand side is strictly negative for t large enough (since the quantity λ_1,β(B(t)) diverges as the measure of B(t) goes to zero). In conclusion, coming back to (<ref>), we obtain

λ̃_k,β(Ω_t,Γ_t) ≤ λ̃_k,β(Ω,Γ) + 2β∫_{x_1=t} U^2_t dℋ^N-1,

and claim (<ref>) follows since U_t is bounded in L^∞ by a constant independent of t (as the eigenfunctions u_j are bounded in L^∞).

We can now state the following boundedness property for configurations which are minimizers of the problem.

Let (Ω,Γ)∈𝒜(ℝ^N) be a minimizer for Problem (<ref>). Then Ω is bounded.

Let (Ω,Γ) be a minimizer for (<ref>) and let us suppose by contradiction that Ω is unbounded. It is not restrictive to assume, up to translations and rotations, that Ω is unbounded in the positive direction x_1. Let us set

Ω_t := Ω∩{x_1<t} and Γ_t := Γ∩{x_1<t}.

Clearly (Ω_t,Γ_t)∈𝒜(ℝ^N), and by projection on the hyperplane {x_1=t} we get

Per(Ω_t,Γ_t) ≤ Per(Ω,Γ).

We divide the proof in two steps.

Step 1. We claim that for a.e. t large enough

ℋ^N-1(∂^*(Ω∖Ω_t)) ≤ C_1 ℋ^N-1(Ω∩{x_1=t}),

where C_1 is independent of t. Indeed, letting

η(t) := (Per(Ω,Γ)/Per(Ω_t,Γ_t))^{1/(N-1)} ≥ 1

and considering the dilated couple (Ω̃_t,Γ̃_t)∈𝒜(ℝ^N) with

Ω̃_t := η(t)Ω_t and Γ̃_t := η(t)Γ_t,

the optimality of (Ω,Γ) and the admissibility of (Ω̃_t,Γ̃_t) for Problem (<ref>) yield

λ̃_k,β(Ω,Γ) ≤ λ̃_k,β(Ω̃_t,Γ̃_t) = (1/η(t)^2) λ̃_k,η(t)β(Ω_t,Γ_t) ≤ (1/η(t)) λ̃_k,β(Ω_t,Γ_t),

where we used the rescaling property of Remark <ref>.
Since for a.e. t∈ℝ

Per(Ω_t,Γ_t) = Per(Ω,Γ) - ℋ^N-1((∂^*Ω∪Γ)∩{x_1>t}) + ℋ^N-1(Ω∩{x_1=t}),

by the very definition of η(t) we get

η(t) = (1 + [ℋ^N-1((∂^*Ω∪Γ)∩{x_1>t}) - ℋ^N-1(Ω∩{x_1=t})]/Per(Ω_t,Γ_t))^{1/(N-1)}.

Since by projection on the hyperplane {x_1=t}

ℋ^N-1((∂^*Ω∪Γ)∩{x_1>t}) ≥ ℋ^N-1(∂^*Ω∩{x_1>t}) ≥ ℋ^N-1(Ω∩{x_1=t}),

we deduce

ℋ^N-1((∂^*Ω∪Γ)∩{x_1>t}) - ℋ^N-1(Ω∩{x_1=t}) → 0

as t→+∞. Hence we get for a.e. t large enough

η(t) ≥ 1 + C_2(ℋ^N-1((∂^*Ω∪Γ)∩{x_1>t}) - ℋ^N-1(Ω∩{x_1=t}))

for some positive constant C_2 independent of t. In view of estimates (<ref>) and (<ref>) we thus get

η(t)λ̃_k,β(Ω,Γ) ≤ λ̃_k,β(Ω_t,Γ_t) ≤ λ̃_k,β(Ω,Γ) + Cℋ^N-1(Ω∩{x_1=t}),

so that using (<ref>) we infer

ℋ^N-1((∂^*Ω∪Γ)∩{x_1>t}) - ℋ^N-1(Ω∩{x_1=t}) ≤ C_3 ℋ^N-1(Ω∩{x_1=t})

for a suitable C_3>0 independent of t, which readily implies claim (<ref>).

Step 2. In view of (<ref>), using the isoperimetric inequality we get for a.e. t large enough

|Ω∖Ω_t|^{(N-1)/N} ≤ C_4 ℋ^N-1(Ω∩{x_1=t})

for some C_4>0 independent of t. By setting g(t):=|Ω∖Ω_t| we get for a.e. t that ℋ^N-1(Ω∩{x_1=t})=-g'(t), and thus (recall that we are assuming by contradiction g(t)≠0 for every t)

g'(t)/g(t)^{(N-1)/N} ≤ -1/C_4.

But then, if t_0 is sufficiently large,

-Ng(t_0)^{1/N} ≤ ∫_{t_0}^{+∞} g'(t)/g(t)^{(N-1)/N} dt ≤ -(1/C_4)∫_{t_0}^{+∞} dt = -∞,

which is a contradiction.

We are now in a position to prove Theorem <ref>.

Using the monotonicity under dilations of Remark <ref>, problem (<ref>) is equivalent to

min{λ̃_k,β(Ω,Γ) : (Ω,Γ)∈𝒜(ℝ^N) with Per(Ω,Γ) ≤ p}.

If minimizers (Ω,Γ) exist, then clearly Per(Ω,Γ)=p, while Ω is bounded according to Proposition <ref>. Finally, property (<ref>) follows from Theorem <ref>.

Let us proceed, as usual for these optimization problems, by induction on k∈ℕ. For k=1, according to Remark <ref>, minimizers are balls of perimeter p. Let us now assume that a minimizer exists for every j<k. Let (Ω_n,Γ_n)_n be a minimizing sequence for problem (<ref>). We can assume, up to a subsequence,

lim_n→+∞ |Ω_n| = m, with 0<m<+∞.

Indeed, thanks to the isoperimetric inequality we have the upper bound

|Ω_n|^{(N-1)/N} ≤ C Per(Ω_n) ≤ C Per(Ω_n,Γ_n) ≤ Cp.

On the other hand, if |Ω_n| vanished, we would obtain, thanks to the Faber-Krahn inequality (<ref>),

λ̃_k,β(Ω_n,Γ_n) ≥ λ̃_1,β(Ω_n,Γ_n) ≥ λ_1,β(B_n) → +∞,

where B_n is the ball having the same measure as Ω_n, against the fact that (Ω_n,Γ_n) is a minimizing sequence.

Let us apply a concentration-compactness argument to the sequence (1_Ω_n)_n∈ℕ. For every r>0 let us consider the monotone increasing functions α_n:[0,+∞[→[0,+∞[,

α_n(r) := sup_y∈ℝ^N |Ω_n∩Q_r(y)|,

where Q_r(y) is the cube centered at y with side r. Up to a subsequence, in view of Helly's theorem, we may assume that

α_n→α pointwise on [0,+∞[

for a suitable monotone increasing function α:[0,+∞[→[0,+∞[. The following situations may occur.

(a) Vanishing: lim_r→+∞ α(r)=0;
(b) Dichotomy: lim_r→+∞ α(r)=α̅∈]0,m[;
(c) Compactness: lim_r→+∞ α(r)=m.

Let us deal with the three cases separately.

Step 1: Vanishing cannot occur. Indeed, if it were the case, one would have for every r>0

sup_y∈ℝ^N |Ω_n∩Q_r(y)| → 0.

Let u_k,n be an L^2-normalized k-th eigenfunction of (Ω_n,Γ_n). Since

∫_ℝ^N |∇u_k,n|^2 dx + ∫_J_u_k,n [γ^2_l(u_k,n)+γ^2_r(u_k,n)] dℋ^N-1 ≤ ∫_Ω_n |∇u_k,n|^2 dx + ∫_∂^*Ω_n∪Γ_n [γ^2_l(u_k,n)+γ^2_r(u_k,n)] dℋ^N-1 = λ̃_k,β(Ω_n,Γ_n) ≤ C,

in view of <cit.> applied to both the negative and positive parts of u_k,n, there exists y_n∈ℝ^N such that

|Ω_n∩Q_1(y_n)| ≥ |supp(u_k,n)∩Q_1(y_n)| ≥ C'_N (1/(2C+2))^N > 0,

against (<ref>).

Step 2: Compactness. If compactness occurs, there exists a set of finite perimeter Ω⊂ℝ^N such that

1_Ω_n→1_Ω strongly in L^1(ℝ^N).

Let Γ be given by Theorem <ref>, so that (Ω,Γ)∈𝒜(ℝ^N) with

Per(Ω,Γ) ≤ lim inf_n Per(Ω_n,Γ_n) ≤ p

and, according to Theorem <ref>,

λ̃_k,β(Ω,Γ) ≤ lim inf_n→+∞ λ̃_k,β(Ω_n,Γ_n).

We infer that (Ω,Γ) is a minimizer for problem (<ref>).

Step 3: Dichotomy. Let dichotomy occur.
Then there exists α̃∈]0,m[ such that the following assertion holds true: we can find x_n∈ℝ^N and 0<r_n<R_n, with R_n-r_n→+∞, such that, setting

Ω_n,1 := Ω_n∩B_r_n(x_n), Γ_n,1 := Γ_n∩B_r_n(x_n)

and

Ω_n,2 := Ω_n∖B_R_n(x_n), Γ_n,2 := Γ_n∖B_R_n(x_n),

we have

||Ω_n,1|-α̃|→0, ||Ω_n,2|-(m-α̃)|→0,

with

ℋ^N-1(Ω_n∩∂B_r_n(x_n))→0, ℋ^N-1(Ω_n∩∂B_R_n(x_n))→0.

Notice that

Per(Ω_n,Γ_n) ≥ Per(Ω_n,1,Γ_n,1) + Per(Ω_n,2,Γ_n,2) - ε_n

with ε_n→0. Up to a subsequence we may assume

Per(Ω_n,1,Γ_n,1)→p_1>0 and Per(Ω_n,2,Γ_n,2)→p_2>0

with p_1+p_2≤p. Now, by testing the Rayleigh quotient for Ω_n,1∪Ω_n,2 on the eigenfunctions of Ω_n, taking into account their uniform boundedness in L^∞ given by Theorem <ref> and Remark <ref>, which characterizes the spectrum of (Ω_n,1∪Ω_n,2, Γ_n,1∪Γ_n,2), we get

λ̃_k,β(Ω_n,Γ_n) ≥ λ̃_k,β(Ω_n,1∪Ω_n,2, Γ_n,1∪Γ_n,2) - δ_n
= min_{i=0,…,k} max{λ̃_i,β(Ω_n,1,Γ_n,1), λ̃_k-i,β(Ω_n,2,Γ_n,2)} - δ_n
= max{λ̃_i̅,β(Ω_n,1,Γ_n,1), λ̃_k-i̅,β(Ω_n,2,Γ_n,2)} - δ_n,

where δ_n→0 and i̅ is independent of n (up to subsequences). We point out that i̅<k. Otherwise we would have

λ̃_k,β(Ω_n,Γ_n) ≥ λ̃_k,β(Ω_n,1,Γ_n,1) - δ_n

and, in view of (<ref>) and (<ref>),

Per(Ω_n,1,Γ_n,1) + ε < Per(Ω_n,Γ_n)

for some ε>0: this contradicts the fact that (Ω_n,Γ_n) is a minimizing sequence, in view of the monotonicity under dilations. Let (Ω_1,Γ_1) and (Ω_2,Γ_2) be minimizing couples for problem (<ref>) relative to λ̃_i̅,β with perimeter constraint p_1 and λ̃_k-i̅,β with perimeter constraint p_2 respectively, whose existence is guaranteed by our induction assumption. We claim that

λ̃_i̅,β(Ω_1,Γ_1) ≤ lim inf_n→+∞ λ̃_i̅,β(Ω_n,1,Γ_n,1)

and

λ̃_k-i̅,β(Ω_2,Γ_2) ≤ lim inf_n→+∞ λ̃_k-i̅,β(Ω_n,2,Γ_n,2).

Since Ω_1, Ω_2 are bounded sets by Proposition <ref>, we can assume that they are at positive distance. Let us set

Ω := Ω_1∪Ω_2 and Γ := Γ_1∪Γ_2.

Clearly (Ω,Γ)∈𝒜(ℝ^N) with

Per(Ω,Γ) = Per(Ω_1,Γ_1) + Per(Ω_2,Γ_2) = p_1+p_2 ≤ p,

while thanks to Remark <ref>

λ̃_k,β(Ω,Γ) ≤ max{λ̃_i̅,β(Ω_1,Γ_1), λ̃_k-i̅,β(Ω_2,Γ_2)}.

We infer that (Ω,Γ) is an admissible couple for the minimization of λ̃_k,β and, taking into account (<ref>) and claims (<ref>), (<ref>), we have

λ̃_k,β(Ω,Γ) ≤ max{λ̃_i̅,β(Ω_1,Γ_1), λ̃_k-i̅,β(Ω_2,Γ_2)}
≤ lim inf_n→+∞ (max{λ̃_i̅,β(Ω_n,1,Γ_n,1), λ̃_k-i̅,β(Ω_n,2,Γ_n,2)})
≤ lim inf_n→+∞ λ̃_k,β(Ω_n,Γ_n).

We conclude that (Ω,Γ) is a minimizer for Problem (<ref>).

In order to conclude, we need to check claims (<ref>) and (<ref>). Let us prove the first one, the other being similar. If Per(Ω_n,1,Γ_n,1) ≤ p_1, there is nothing to prove in view of the minimality of (Ω_1,Γ_1). On the other hand, if Per(Ω_n,1,Γ_n,1)>p_1, by (<ref>) there exists ε_n>0, ε_n→0, such that Per(Ω_n,1,Γ_n,1)=p_1+ε_n. Let us define the quantity

t_n := (p_1/(p_1+ε_n))^{1/(N-1)} < 1

and the sets

Ω̃_n,1 := t_nΩ_n,1, Γ̃_n,1 := t_nΓ_n,1.

Clearly Per(Ω̃_n,1,Γ̃_n,1)=p_1, so that

λ̃_i̅,β(Ω_1,Γ_1) ≤ λ̃_i̅,β(Ω̃_n,1,Γ̃_n,1)

thanks to the minimality of (Ω_1,Γ_1). On the other hand, in view of the scaling property given by Remark <ref> it holds

λ̃_i̅,β(Ω̃_n,1,Γ̃_n,1) = (1/t_n^2) λ̃_i̅,t_nβ(Ω_n,1,Γ_n,1) ≤ (1/t_n^2) λ̃_i̅,β(Ω_n,1,Γ_n,1).

Putting together (<ref>) and (<ref>) and observing that t_n→1, we finally get (<ref>).

§.§ Proof of Theorem <ref>

In this section we provide the proof of Theorem <ref>. To this aim, we adapt to our context an induction argument applied, for instance, in <cit.>. Using the monotonicity under dilations of Remark <ref>, problem (<ref>) is equivalent to

min{F(Ω,Γ) : (Ω,Γ)∈𝒜(ℝ^N) with Per(Ω,Γ) ≤ p}.

It is convenient to frame (<ref>) within a larger class of problems. Let us fix 0≤k_ℓ,2<k_ℓ and 0<γ_1≤γ_2≤…≤γ_k_ℓ,2, and let k_ℓ,1 := k_ℓ-k_ℓ,2. For every (Ω,Γ)∈𝒜(ℝ^N), let us consider the ordered k_ℓ-tuple

μ(Ω,Γ) := (λ̃_1,β(Ω,Γ),…,λ̃_k_ℓ,1,β(Ω,Γ), γ_1,…,γ_k_ℓ,2)^ord

(in the case k_ℓ,2=0 we mean that no γ is involved, so that μ(Ω,Γ)=(λ̃_1,β(Ω,Γ),…,λ̃_k_ℓ,β(Ω,Γ))).
If we setF_γ(,Γ):=f(μ_k_1(,Γ),…, μ_k_ℓ(,Γ)),then problem (<ref>) is a particular case of the following one:min{F_γ(Ω,Γ): (Ω,Γ)∈𝒜(^N)with ΩΓ≤ p}. Considering the more general problem (<ref>) makes it easier to formulate the induction scheme needed to prove existence: more precisely, the dichotomy case for problem (<ref>) forces to consider problems of the form (<ref>). The following result holds true.Problem (<ref>) admits a bounded minimizer. Notice that in principle not every minimizer is bounded: this is because some coefficients are "frozen" and set equal to γ_1,…,γ_k_ℓ,2 (think of F(,Γ)=λ̃_k,β(,Γ) and fix γ_1<min{λ̃_k,β(,Γ) : Γ≤ p}). Let us divide the proof in two steps.10ptStep 1. Let us prove that if problem (<ref>) admits a minimizer (,Γ)∈(^N), then it admits also a bounded minimizer.For t∈ let us set_t:=∩{x_1<t}andΓ_t:=Γ∩{x_1<t}.We will show that either (,Γ) is bounded in the positive direction x_1, or (_t,Γ_t) is a minimizer for some t large enough: iterating the argument with the negative x_1-direction, and repeating the considerations for the other directions, the result follows. Let us assume thatis unbounded in the positive x_1 direction. Lettingη(t):=(ΩΓ/Ω_tΓ_t)^1/N-1> 1and considering the dilated couple (Ω̃_t,Γ̃_t)∈(^N) withΩ̃_t:=η(t)Ω_t,andΓ̃_t:=η(t)Γ_t,the optimality of (Ω,Γ) and the admissibility of (Ω̃_t,Γ̃_t) for Problem (<ref>) yield F_γ(Ω,Γ)≤ F_γ(Ω̃_t,Γ̃_t).Since λ̃_h,β(_t,Γ_t)→λ̃_h,β(,Γ)for every h≥ 1 and η(t)→ 1 as t→ +∞, we can assume that there exists t_0>0 such that for every t≥ t_0 there exist indexes k∈{1,…,k_ℓ}, i∈{1,…,k_ℓ,1}, and a constant δ>0 such that[μ(Ω,Γ)]_k<[μ(Ω̃_t,Γ̃_t)]_k=λ̃_i,β(Ω̃_t,Γ̃_t)≤λ̃_i,β(Ω_t,Γ_t)<[μ(Ω,Γ)]_k+1-δ,where we mean [μ(Ω,Γ)]_k_ℓ+1:=[μ(Ω,Γ)]_k_ℓ+1 for k=k_ℓ. Indeed, if this is not the case (Ω̃_t,Γ̃_t) would be a minimizer bounded in the positive direction x_1. Let E_k,i be the set of those t satisfying (<ref>): the sets E_k,i cover [t_0,+∞[ and are in a finite number.From (<ref>) we inferF_γ(_t,Γ_t)-F_γ(_t,Γ̃_t)≤ F_γ(_t,Γ_t)-F_γ(,Γ).In view of Lemma <ref>, and of the monotonicity and Lipschitz continuity of f, we haveF_γ(_t,Γ_t)-F_γ(,Γ)≤ C_1 (∩{x_1=t})for some C_1>0 independent of t. Moreover, if t∈ E_k,i the Lipschitz estimate from below given by property (f2) and (<ref>) entailF_γ(_t,Γ_t)-F_γ(_t,Γ̃_t)≥ C [λ̃_i,β(_t,Γ_t)-λ̃_i,β(_t,Γ̃_t)],so that we get thanks to Remark <ref>λ̃_i,β(_t,Γ_t)≤λ̃_i,β(_t,Γ̃_t)+C_2 (∩{x_1<t}) ≤1/η(t)λ̃_i,β(_t,Γ_t)+C_2 (∩{x_1<t}),where C_2:=C_1C^-1. Since λ̃_i,β(_t,Γ_t)≥ C_3 on [t_0,+∞[ for every i=1,…,k_ℓ,1, we conclude easily that for a.e. t∈ [t_0,+∞[ the following inequality is satisfied(∂^*(Ω∖_t))≤ C_4(Ω∩{x_1=t}).where C_4>0. Following the arguments of Step 2 in the proof of Lemma <ref>, we infer thatis bounded in the positive direction x_1, a contradiction, and the step is concluded.10ptStep 2. Let us prove existence of bounded minimizers proceeding by induction on the order k_ℓ of the highest eigenvalue involved.For k_ℓ=1, since f is increasing and taking into account Remark <ref>, minimizers are balls of perimeter p. Let us now assume that a bounded minimizer exists for functionals with k_ℓ≤ k, and let us prove it for k_ℓ≤ k+1.Let (Ω_n,Γ_n)_n∈ be a minimizing sequence for problem (<ref>). We can assume that Ω_nΓ_n→p̅,where p̅>0 is the minimalvalue which can be achieved in the limit by the generalized perimeters of a minimizing sequence. The fact that p̅>0 is due to the fact that otherwise we would have |_n| → 0, and then F_γ(_n,Γ_n)→ +∞ thanks to property (f1). 
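As an aside, the dilation identity invoked in Step 1 above (and used again in the dichotomy argument below) can be recovered directly from the Rayleigh quotient. The following computation is our reconstruction of the scaling in Remark <ref>, written for a model Robin-type quotient; the trace terms of the actual functional scale in exactly the same way, since every surface integral carries a factor t^{N-1}. For u_t(x):=u(x/t) one has
\[
\frac{\int_{t\Omega}|\nabla u_t|^2\,dx+\beta\int_{\partial^*(t\Omega)\cup t\Gamma}u_t^2\,d\mathcal{H}^{N-1}}{\int_{t\Omega}u_t^2\,dx}
=\frac{t^{N-2}\int_{\Omega}|\nabla u|^2\,dx+t^{N-1}\beta\int_{\partial^*\Omega\cup\Gamma}u^2\,d\mathcal{H}^{N-1}}{t^{N}\int_{\Omega}u^2\,dx}
=\frac{1}{t^{2}}\,\frac{\int_{\Omega}|\nabla u|^2\,dx+(t\beta)\int_{\partial^*\Omega\cup\Gamma}u^2\,d\mathcal{H}^{N-1}}{\int_{\Omega}u^2\,dx},
\]
and taking the min-max over test functions gives λ̃_k,β(tΩ,tΓ)=1/t^2 λ̃_k,tβ(Ω,Γ), which is precisely the form used in the proofs. With this identity in hand, we return to the minimizing sequence.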
We can assume moreoverlim_n→+∞|Ω_n|=m,with 0<m<+∞.Let us apply, as in the proof of Theorem <ref>, a concentration-compactness argument to the sequence (1_Ω_n)_n∈.The vanishing case, as above, cannot occur. If compactness holds true, thanks to the monotonicity of the function fwe obtain the existence of a minimizer (,Γ)∈(^N): in view of Step 1, we can assume that it is also bounded. In order to get the conclusion, we need thus to deal with the dichotomy case. Following the arguments in the proof ofTheorem <ref>, and using the Lipschitz continuity of f, we come up easily with a minimizing sequence of the form (_n,1∪_n,2,Γ_n,1∪Γ_n,2)_n∈,where _n,1 and _n,2 are well separated withΩ_n,1Γ_n,1=p̅_1andΩ_n,2Γ_n,2=p̅_2,p̅_1,p̅_2>0 and p̅_1+p̅_2=p̅. We know that the spectrum of (Ω_n,1∪Ω_n,2Γ_n,1∪Γ_n,2) is given by the union of those of (Ω_n,1,Γ_n,1) and (Ω_n,2,Γ_n,2) by the formulaλ̃_k,β(Ω_n,1∪Ω_n,2,Γ_n,1∪Γ_n,2)= min_i=0,…,kmax{λ̃_i,β(Ω_n,1,Γ_n,1),λ̃_k-i,β(Ω_n,2,Γ_n,2)}.We observe that the eigenvalues appearing in the computation of F_γ(Ω_n,1∪Ω_n,2,Γ_n,1∪Γ_n,2) involve both (Ω_n,1,Γ_n,1) and (Ω_n,2,Γ_n,2): otherwise, if for example only those of (Ω_n,1,Γ_n,1) were involved, then (Ω_n,1,Γ_n,1) would be a minimizing sequence with a perimeter below the minimal threshold p̅, which is impossible. As a consequence, up to a subsequence, we may assume that the computation F_γ(Ω_n,1∪Ω_n,2,Γ_n,1∪Γ_n,2) involves the ordered k_ℓ-tuple({λ̃_i,β(Ω_n,1,Γ_n,1)}_i=1,…, k^1_ℓ,1, {λ̃_j,β(Ω_n,2,Γ_n,2)}_j=1,…, k^2_ℓ,1, {γ_h}_h=1,…, k_ℓ,2)^ord where k^1_ℓ,1,k^2_ℓ,1>0 are independent of n with k^1_ℓ,1+k^2_ℓ,1=k_ℓ,1. Thanks to assumption (f1), we can assume thatλ̃_i,β(Ω_n,1,Γ_n,1) →δ_i>0andλ̃_j,β(Ω_n,2,Γ_n,2) →η_j>0for every i=1,…, k^1_ℓ,1 and j=1,…,k^2_ℓ,1. Let (A^*,Γ^*) be a bounded minimizer of the functional(A,Γ)↦ f(({λ̃_i,β(A,Γ)}_i=1,…, k^1_ℓ,1,{η_j}_j=1,…,k^2_ℓ,1,{γ_h}_h=1,…, k_ℓ,2)^ord)under the perimeter constraint p̅_1, and let (B^*,K^*)be a bounded minimizer of the functional(B,K)↦ f(({δ^*_i}_i=1,…,k^1_ℓ,1, {λ̃_j,β(B,K)}_j=1, …, k^2_ℓ,1,{γ_h}_h=1,…, k_ℓ,2)^ord)under the perimeter constraint p̅_2, whereδ^*_i:=λ̃_i,β(A^*,Γ^*).The existence of (A^*,Γ^*) and (B^*,K^*) is guaranteed by the induction step. It turns out that we can put the two configurations at a positive distance in order to create (A^*∪ B^*, Γ^*∪ K^*)∈(^N) with generalized perimeter equal to p̅. We get easily that (A^*∪ B^*, Γ^*∪ K^*) is a minimizer of the problem. Indeed using the Lipschitz continuity of flim inf_n→ +∞ F_γ(_n,Γ_n)=lim inf_n→ +∞F_γ(_n,1∪_n,2,Γ_n,1∪Γ_n,2)=lim inf_n→ +∞f(({λ̃_i(Ω_n,1,Γ_n,1)}, {λ̃_j(Ω_n,2,Γ_n,2)},{γ_h})^ord)=lim inf_n→ +∞f(({λ̃_i(Ω_n,1,Γ_n,1)}, {η_j}, {γ_h})^ord)≥ f(({δ^*_i}, {η_j}, {γ_h})^ord)= lim inf_n→+∞f(({δ^*_i}, {λ̃_j,β(_n,2,Γ_n,2)}, {γ_h})^ord)≥ f(({λ̃_i,β(A^*,Γ^*)},{λ̃_j,β(B^*,K^*)}, {γ_h})^ord)≥ F_γ(A^*∪ B^*,Γ^*∪ K^*),the last inequality coming from the monotonicity of f and the fact that, as ordered k_ℓ-tuples,({λ̃_k,β(A^*∪ B^*,Γ^*∪ K^*}_k=1,…, k_ℓ,1, {γ_h}_h=1,…, k_ℓ,2)^ord ≤ ({λ̃_i,β(A^*,Γ^*}_i=1,…, k^1_ℓ,1, {λ̃_j,β(B^*,K^*)}_j=1,…, k^2_ℓ,1, {γ_h}_h=1,…, k_ℓ,2)^ord.We are now ready to prove Theorem <ref>. The existence of bounded minimizers follow from the more general result given by Theorem <ref>. The density issue concerning Lipschitz domains follows from the monotonicity of f together with Theorem <ref> and Remark <ref>.In order to conclude, we need to show that every minimizer (,Γ) is bounded. 
Indeed, following the arguments of Step 1 in the proof of Theorem <ref>, since no γ is involved, the strict monotonicity of f entails that the key inequality (<ref>) is always satisfied, which yields the boundedness of Ω. The result for the perimeter penalized version of the problem given in Remark <ref> follows easily by slightly modifying the arguments of the proof of Theorem <ref> and dealing with (Ω,Γ)↦ F_γ(Ω,Γ)+Λ P(Ω,Γ), where P(Ω,Γ) denotes the generalized perimeter of the pair. Boundedness of every minimizer follows from the fact that minimality entailsΛ[P(Ω,Γ)-P(Ω_t,Γ_t)] ≤ F(Ω_t,Γ_t)-F(Ω,Γ)≤ C ℋ^N-1(Ω∩{x_1=t}),which yieldsℋ^N-1(∂^*(Ω∖Ω_t))≤ C_1 ℋ^N-1(Ω∩{x_1=t})for a.e. t large enough, from which boundedness follows. As far as existence is concerned, only the dichotomy case needs some small modifications: in particular, the configurations (A^*,Γ^*) and (B^*,K^*) constructed by considering the associated problems with fixed perimeters p̅_1 and p̅_2 are still sufficient to get the conclusion. *Acknowledgements The authors S.C. and A.G. have been supported in their work, respectively, by the National Research Projects “Elliptic and parabolic problems, heat kernel estimates and spectral theory” (PRIN 20223L2NWK) and “Variational methods for stationary and evolution problems with singularities and interfaces” (PRIN 2022J4FYNJ), funded by the Italian Ministry of University and Research. Both authors are members of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). S.C. acknowledges the support of the INdAM - GNAMPA 2023 Project “Problemi variazionali per funzionali e operatori non-locali”. This manuscript has no associated data.
http://arxiv.org/abs/2312.16597v1
{ "authors": [ "Simone Cito", "Alessandro Giacomini" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20231227144616", "title": "Minimization of the $k$ -th eigenvalue of the Robin-Laplacian with perimeter constraint" }
Department of Science and Technology, University of Twente, [email protected]
Background: Imagine a paper with n nodes on it where each pair undergoes a coin toss experiment; if heads, we connect the pair with an undirected link, while tails maintain the disconnection. This procedure yields a random graph. Now consider duplicating this network onto another paper with a slight bias: a fraction of its links (approximately 1/10) undergo rearrangement. If we shuffle the two papers, how can we distinguish the pure random graph from the biased one?
Results: In response to this challenge, we propose a novel metric called “Randomness Index" (RI). The closer the metric is to zero, the higher the degree of randomness in the graph. The RI can distinguish between dense small-world networks and dense random graphs; a distinction which is impossible with conventional small-world properties like the clustering coefficient and the average path length. To validate its effectiveness, we apply the RI to temporal correlation networks of stock indices. Our findings reveal a reduction in randomness during global economic recession periods.
Conclusion: The RI emerges as a powerful metric capable of characterizing small-world topology, especially in scenarios where other network measures fail. Beyond its utility in network analysis, the RI is promising for change-point (anomaly) detection in dynamical systems studied by means of multivariate time series.
§ INTRODUCTION
[Figure: The critical point in blue. Whatever lies inside the neighbourhood is random, and whatever lies outside the neighbourhood is small-world.]
Contribution Small-world topology is characterized by a high clustering coefficient and a low average path length, and constitutes an independent feature of the inherently sparse social networks. On the other hand, dense graphs exhibit both high clustering and low average path length (such as the interareal cortical network in the primate brain <cit.>), which hinders the discernment of the small-world phenomenon. We propose a method that addresses this problem by distinguishing dense small-world networks from dense random networks through a classification methodology. The latter is based on a critical point and a neighbourhood around this point defined in a Euclidean space (see Fig. <ref>). By embedding an observed network into a point of the same Euclidean space, we say that an observed network is random if the point lies inside the neighbourhood of the critical point. On the contrary, a network is small-world if it lies outside the neighbourhood. The critical point is a six-dimensional point whose coordinates constitute relative frequencies of motifs.
Network types The research of Gilbert, Erdos, and Renyi <cit.> on random graph theory inspired the study of real-world complex networks. Over the last years, network science has provided metrics and measures that characterize either the network or its nodes <cit.>, and models that describe the different network topologies and generate networks with desirable properties. Without doubt, one of the most popular network topologies is the small-world. Small-world networks constitute an analogy to the small-world phenomenon of Stanley Milgram <cit.>, first described by the Watts-Strogatz model <cit.>. The Watts and Strogatz (WS) small-world network model is a three-parameter model (V,k,p_r) where: V is a set of vertices placed on a circular configuration, k denotes the number of neighbors to which each node is attached, and p_r is the probability of rearrangement of a link in the network.
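Such a network can be sampled in a couple of lines; the sketch below uses the networkx library (our choice for illustration, since the implementations accompanying this paper are written in Matlab):

```python
import networkx as nx

# Watts-Strogatz network: n nodes on a ring, each attached to its k nearest
# neighbours, after which every link is rewired with probability p_r.
n, k, p_r = 25, 4, 0.1
ws = nx.watts_strogatz_graph(n, k, p_r)

print(ws.number_of_edges())       # n*k/2 = 50: rewiring preserves the edge count
print(nx.average_clustering(ws))  # high clustering survives mild rewiring
```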
On the other hand, the Erdos-Renyi (ER) random graph model is considered here as a two-parameter model (V,p): V is the number of nodes and p denotes the probability of a link's occurrence between any pair of nodes. This article compares the two network topologies in terms of motifs <cit.>.
[Figure: (Left) Sequential pattern. (Right) Star pattern.]
Some preliminary thoughts Consider the two patterns of Fig. <ref>, namely the sequential and the star pattern on 4 nodes. Which one of these patterns (considered as induced subgraphs) has a higher frequency in the pure random graph introduced in the abstract? To answer this, we need to reconfigure the two motifs into square configurations, like in Fig. <ref>, wherein the sequential pattern is Motif 1 and the star pattern is Motif 2. We prove in the form of a theorem that, because of symmetries, the sequential pattern has a larger frequency than the star pattern.
Motifs and Graphlets The patterns shown in Fig. <ref> are known in statistical mechanics as motifs. They were initially explored as partial subgraphs which recur in real-world complex networks with higher frequency than in randomized networks <cit.>. However, in computer science and bioinformatics these patterns are also known as graphlets <cit.> and are considered as small connected non-isomorphic induced subgraphs. In this study, we call them motifs, but we strictly refer to induced subgraphs with 4 nodes. We display the motifs (as in <cit.>) m_i with i=1,...,6 in Fig. <ref>. In the case of bioinformatics <cit.>, the authors define the relative graphlet frequency distance and compare protein-protein interaction networks with various random graph models. In the case of statistical mechanics, a transformation from univariate time series to complex networks has been introduced, and the frequency of different connected tetrads of nodes in these networks has been studied <cit.>. The distribution of motifs could distinguish different types of continuous dynamics: periodic, chaotic, and periodic with noise, as well as chaotic maps, hyperchaotic maps, and noise data when applied to discrete data. Particularly, the authors proved that noisy periodic signals correspond to random visibility networks whereas chaotic time series generate visibility networks that exhibit small-world and scale-free features <cit.>. Inspired by statistical mechanics, our crucial hypothesis is that we can use such patterns to identify the topology of real-world observed networks. Thus, we employ the connected tetrads of nodes of Fig. <ref> (considered as induced subgraphs and called motifs) in order to introduce a novel classification method. The article is summarized as follows: in Section <ref> we introduce the novel Network Randomness Index (NRI), in Section <ref> we validate the efficiency of the NRI by its ability to discern random and small-world networks, in Section <ref> we display how the NRI can be used for anomaly detection in multivariate time series, and in Section <ref> we conclude, discuss our findings, and motivate the readers towards further investigations. All relevant algorithms are implemented in Matlab (see <https://github.com/GeorgiosArg/Diagnosis-of-Small-World-Bias-in-Random-Graphs>) and are described in the Appendix.
§ THE RANDOMNESS INDEX
In this section we introduce the Network Randomness Index (NRI) and elaborate on the preliminary thoughts.
Particularly, Section <ref> provides the background with the preliminary definitions, in Section <ref> we define the critical point and introduce the NRI as the Euclidean distance between the critical point and another point that corresponds to an observed network, and in Section <ref> we elaborate on the preliminary thoughts by proving that the sequential pattern has a larger frequency than the star pattern.
§.§ Preliminaries
We start by defining the notion of a graph. A graph G is a pair (V,E) where V is a set of vertices (or nodes), and E⊆ V × V is a set of edges (or links), with |V| ∈ℕ being the number of nodes and |E| ∈ℕ being the number of edges. We denote with f_m_i the number of occurrences of the motif m_i considered as an induced subgraph in a graph G. Next, we define the relative frequency, also introduced in <cit.> as a measure of randomness, as follows: Let f_m_i be the frequency of motif m_i. We define the relative frequency of motif m_i as:
F_m_i = f_m_i/∑_i=1^6 f_m_i.
The critical point is a 6-dimensional point where each coordinate corresponds to the relative frequency of a motif and, for a given graph G, is formally defined as follows: Let G be a graph and F_m_i be the relative frequency of motif m_i. We define the relative frequency point of a graph G as:
_G = (F_m_1, F_m_2, F_m_3, F_m_4, F_m_5, F_m_6).
We finish this section by reminding the reader that f_m_i is the absolute frequency of the motif m_i, F_m_i its relative frequency (i ∈{1, …, 6}), and _G the RFP of a graph G.
§.§ The critical point
We start by defining the notion of an Erdos-Renyi random graph. An Erdos-Renyi (ER) random graph (denoted by 𝒢) is a pair (V,p) where V is a set of vertices while p ∈ [0,1] is the probability of occurrence of an edge between any pair of vertices. As we have mentioned before, the critical point is a six-dimensional point whose coordinates are the relative frequencies of each motif. In the case of Erdos-Renyi random graphs, these frequencies can be calculated analytically by closed-form formulas. We derive these formulas in this section according to Table <ref>. The first column presents the 11 sets of 4-node graphs, where each set is a class of isomorphic graphs. Note that there is a bijection that preserves edges, mapping a pattern to each other pattern within the same set. The column order contains the number of links of the patterns in each class, the column patterns refers to the number of patterns in each set, in description we provide a description of the motif, and the last column contains the probability of occurrence of each pattern. Note that graphs which belong to the same set occur with equal probability in ER graphs, i.e., the probability of occurrence of a pattern is a graph invariant under graph isomorphism. For a network of size n, there are \binom{n}{4} tetrads of nodes. If we denote with N_i the number of isomorphs the motif i has, and with l_i its order (the number of links of i), the absolute frequency f_m_i is given by the following lemma. Let 𝒢=(V,p) be an Erdos-Renyi random graph with |V|=n >4 and let m_i=(4,l_i) be a motif (induced subgraph) on 4 nodes with l_i edges. We denote with N_m_i the number of isomorphic patterns of motif m_i. The expected frequency of motif m_i in the random graph 𝒢 is given by the following formula:
f_m_i(n,p) = \binom{n}{4} · N_m_i · p^{l_i} · (1-p)^{6-l_i}.
A formula akin to equation <ref> was previously established for directed networks in <cit.>.
In this study, the authors, employing approximations for the average number of subgraphs in an ensemble of random networks with an arbitrary degree sequence, reached the conclusion that specific subgraphs tend to occur more frequently in real-world and scale-free networks featuring a specified power-law node degree distribution compared to randomized graphs. Moreover, they consider the number of isomorphic patterns N_m_i as a parameter λ of order 1 (O(1)). Nevertheless, these values merely represent the number of isomorphic patterns that a motif (induced subgraph) can exhibit. We denote with _𝒢(n,p) the RFP of the Erdos-Renyi graph and call it the critical point, which is the point of Fig. <ref>. The uniqueness of the critical point is secured by the following theorem of Erdos and Rado. Let 𝒢=(V,p) be the random graph over a countably infinite set of vertices (|V|=n=∞) for some fixed p. The Rado graph 𝒢_∞=(∞,p) is unique under isomorphism. In words, the theorem guarantees the following counter-intuitive phenomenon: if several people have an infinitely big paper with infinitely many nodes on it and, for each pair of nodes, they perform the coin toss experiment mentioned in the abstract, then everybody will end up with the same graph (under isomorphism). The previous theorem can be found in <cit.> (pages 228-229). The following corollary immediately follows: Let 𝒢_∞ be the Rado graph. The RFP of 𝒢_∞ is unique. The following remark is crucial for the development of the classification method of Section <ref>. [Network Randomness Index] Consider an observed real-world network G=(V,E) and _G its RFP. The greater the proximity of the observed network's frequency vector, denoted as _G, to the critical point _𝒢 in terms of Euclidean distance, the higher the randomness exhibited by the observed network. This happens because for a random graph on a finite number n of vertices, the point _𝒢 lies in a concrete neighborhood (see Fig. <ref>).
§.§ On the preliminary thoughts
We next provide a theorem that relates the frequencies of subgraphs in the Rado graph with their isomorphs. The following theorem also gives the solution to the problem stated in the preliminary thoughts of the introduction. Let G_1=(V_1,E_1), G_2=(V_2,E_2) be two graphs with the same number of nodes |V_1|=|V_2|=m and the same number of edges |E_1|=|E_2|=l. The expected frequency of G_1 is higher than the expected frequency of G_2 in a random graph 𝒢=(V,p) with |V|=n ≥ m if and only if G_1 has more isomorphic patterns than G_2, i.e., N_G_1≥ N_G_2. We denote with N_G_1, N_G_2 the numbers of isomorphic patterns, and with f_G_1, f_G_2 the expected frequencies of the graphs G_1, G_2, respectively. We also denote with m the common number of nodes. It holds that:
f_G_1<f_G_2 ⇔ N_G_1·\binom{n}{m}· p^l · (1-p)^{m(m-1)/2-l} < N_G_2·\binom{n}{m}· p^l · (1-p)^{m(m-1)/2-l} ⇔ N_G_1<N_G_2.
The following remark immediately follows. Considering the preliminary thoughts of Section <ref>, it is straightforward that the sequential motif has a higher frequency than the star motif in the Rado graph. We refer the reader to Table <ref>, in the column patterns. Notice that the star motif has 4 patterns while the sequential motif has 12 patterns.
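Both the lemma and the theorem can be checked numerically. The sketch below evaluates the closed-form expected frequencies in an ER graph; the (N_m_i, l_i) pairs are the standard counts for the six connected 4-node graphs, while the ordering of m_3, …, m_6 is our assumption about Fig. <ref> (m_1 is the sequential pattern and m_2 the star, as stated above):

```python
from math import comb

# (N, l) = (number of isomorphic patterns, number of links) for the six
# connected 4-node motifs; the counts are standard, while the ordering of
# m3..m6 is an assumption about Fig. 3.
MOTIFS = {
    "m1 path":    (12, 3),
    "m2 star":    (4, 3),
    "m3 cycle":   (3, 4),
    "m4 paw":     (12, 4),
    "m5 diamond": (6, 5),
    "m6 clique":  (1, 6),
}

def expected_motif_frequencies(n, p):
    """Expected motif counts in an ER graph (n, p), via the Lemma."""
    return {name: comb(n, 4) * N * p**l * (1 - p)**(6 - l)
            for name, (N, l) in MOTIFS.items()}

def critical_point(n, p):
    """The RFP of the ER graph: counts normalised to relative frequencies."""
    f = expected_motif_frequencies(n, p)
    total = sum(f.values())
    return {name: v / total for name, v in f.items()}

f = expected_motif_frequencies(55, 0.5)
assert f["m1 path"] > f["m2 star"]   # 12 isomorphic patterns beat 4
print(critical_point(55, 0.5))
```

At p=0.5 every motif shares the same factor p^{l_i}(1-p)^{6-l_i}, so the comparison reduces to the number of isomorphic patterns, exactly as the theorem predicts.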
Density is a critical parameter that can dramatically change the frequency of motifs. It is particularly easy to calculate. Let G=(V,E) be a subgraph with |V|=m and |E|=l. The density of G is defined as follows:
d_G = l / (m(m-1)/2).
Next, we provide a theorem that guarantees that a subgraph achieves its maximum frequency in a random graph when the density of the subgraph equals the parameter p of the random graph, i.e., the probability of occurrence of an edge. To prove this we use maximum likelihood estimation. Let G=(U,E) be a graph with N_G isomorphic patterns, |U|=m, |E|=l, and let 𝒢=(V,p) be a random graph with |V|=n > m. The finite induced graph G maximizes its frequency in 𝒢 when d_G = p. The frequency of G in the random graph 𝒢 is given by the following formula:
F_G = N_G · \binom{n}{m} · p^l · (1-p)^{m(m-1)/2-l}.
The function log(x) is monotonic in x and, hence, it holds that max(F_G) = max(log(F_G)). We first calculate log(F_G):
log(F_G) = log(N_G · \binom{n}{m} · p^l · (1-p)^{m(m-1)/2-l})
= log(N_G) + log(\binom{n}{m}) + log(p^l) + log((1-p)^{m(m-1)/2-l})
= log(N_G) + log(\binom{n}{m}) + l·log(p) + (m(m-1)/2-l)·log(1-p).
To find the maximum in terms of the probability p, we calculate the derivative of log(F_G) with respect to p and set it equal to zero:
∂ log(F_G)/∂ p = 0
l·(1/p) - (m(m-1)/2-l)·(1/(1-p)) = 0
l·(1-p) - p·(m(m-1)/2-l) = 0
l - l·p - p·(m(m-1)/2) + l·p = 0
p·(m(m-1)/2) = l
p = l / (m(m-1)/2).
Therefore, we conclude that the frequency is maximized when p=d_G (see Definition <ref>).
§ CLASSIFICATION OF SMALL-WORLD AND RANDOM GRAPHS
In this section, we present a novel classification method designed to differentiate between small-world and random networks. Section <ref> introduces the Watts-Strogatz small-world networks along with the classification method. The validation of our method through Monte Carlo simulations is carried out in Section <ref>. Further refinement of the Monte Carlo experimental validation, aimed at assessing the classification power of our framework, is discussed in Section <ref>.
§.§ Small-world Networks and Classification Methodology
Small-world Networks We first give the definition of a small-world network. A small-world network is a triple (V,k,p_r) where V is a set of nodes placed on a circular configuration (see the top part of Fig. <ref>), k is the number of neighbors to which each node is attached, and p_r ∈ [0,1] is the probability of rearrangement of an edge. Consider the regular lattice depicted in the upper left part of Fig. <ref>. This lattice forms a small-world network with k=2: each node is connected to precisely two of its nearest neighbors. Additionally, p_r=0 since no edges undergo rearrangement. Moving to the upper middle part of the figure, we see a small-world network with p_r=0.1. The top right part of the figure illustrates a small-world network with p_r=1. Small-world networks, characterized by different values of p_r, exhibit unique relative frequency points in the six-dimensional space of motif frequencies. Specifically, when p_r=0, the corresponding Relative Frequency Point (RFP) lies outside the neighborhood of the critical point depicted in Fig. <ref> (highlighted by the red point in the bottom left part of Fig. <ref>). However, as p_r increases, the RFP progressively approaches the neighborhood of the critical point. In the limit where p_r → 1, it crosses the boundary and enters the neighborhood, as indicated by the blue point in Fig. <ref>. The Relative Frequency Point (RFP) of a small-world network is a function of the three parameters (n, p_r, k), as defined in Definition <ref>. The empirical computation of this RFP involves generating 100 networks with varying values of p_r and k for a standard n.
The RFP is then calculated for each of these networks, and the point for (n, p_r, k) is determined as the average across this set of 100 networks. The detailed steps for this empirical computation are elucidated in Algorithm <ref> in Appendix <ref>.
Hypothesis The crucial hypothesis of this section is the following: suppose that we have an observed real-world network; could we find which model generates (approximates) it? To answer this question we develop the following classification methodology.
The Classification Methodology Considering an observed network as a graph G=(V,E), as defined in Def. <ref>, our approach for identifying the model that best approximates an observed network is summarized in the following 2 steps.
* To begin, we calculate the RFP of the observed network, denoted as _G, utilizing the ComputeRFP routine (see <ref>). Then, we apply the Embedding Algorithm, detailed in Appendix <ref>, to map the network topologies into their corresponding RFPs for the standard size n=|V| of the observed network. For instance, the random graph is positioned at the critical point, whereas the small-world network finds its place at another point.
* Following the insights of Remark <ref>, the generative model (target function) of the observed network is the one that minimizes the Euclidean distance between the RFP _G of step 1 and the RFP of the corresponding model. In other words, the observed network is classified as random if the distance from _G to the RFP of the ER model is smaller than the distance from _G to the RFP of the WS model, and as small-world if the opposite inequality holds.
This procedure maps an observed network to its approximate topology (small-world or random). The method is summarized in Algorithm <ref>.
§.§ Experimental Validation with Monte Carlo Simulations
In this section, we validate our classification method through Monte Carlo simulations. This involves (i) generating a substantial number of networks using each model, with variation in each model parameter (refer to the explanation in Configuration), (ii) presenting the Results, which detail the models to which the generated networks are mapped using our classification method, and (iii) drawing conclusions from these results in the Interpretation.
Configuration We generate networks for each one of the models ER (n,p), WS (n,k,p_r) and for each one of their parameters n, p, k, p_r. Particularly, for n=25, 50 we generate 100 networks, but for n=75 we generate only 50 networks for better computational efficiency. We validate our method on 56.750 generated networks overall:
* 12.900 networks of size n=25 (900 ER, i.e., 100 for each p=0.1, …, 0.9 + 12.000 WS, 100 for each k=2, …, 24 for each p_r=0, 0.1, …, 0.9),
* 24.900 networks of size n=50 (900 ER, i.e., 100 for each p=0.1, …, 0.9 + 24.000 WS, 100 for each k=2, …, 48 for each p_r=0, 0.1, …, 0.9),
* 18.950 networks of size n=75 (900 ER, i.e., 100 for each p=0.1, …, 0.9 + 18.500 WS, 50 for each k=2, …, 74 for each p_r=0, 0.1, …, 0.9).
We then classify each one of these networks according to the classification methodology <ref> [For the experimental validation, we also considered scale-free networks constructed both by the Barabasi-Albert preferential attachment mechanism and another method that constructs scale-free networks with a power-law degree distribution of user-defined exponent. However, we do not present these results because they are out of the scope of this article. The full results can be found on <https://github.com/GeorgiosArg/Diagnosis-of-Small-World-Bias-in-Random-Graphs>. Scale-free networks can be identified by the naked eye because of their low density.]
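Before presenting the results, here is a compact sketch of the decision rule from step 2 of the methodology, assuming the candidate RFPs have been precomputed as in Appendix <ref> (the dictionary layout is our own choice for illustration):

```python
import numpy as np

def classify(rfp_observed, candidate_rfps):
    """Pick the model whose RFP is closest, in Euclidean distance,
    to the RFP of the observed network.

    rfp_observed   -- length-6 vector of relative motif frequencies
    candidate_rfps -- dict mapping a model label, e.g. ('ER', p) or
                      ('WS', k, p_r), to its length-6 RFP
    """
    dists = {label: np.linalg.norm(np.asarray(rfp) - np.asarray(rfp_observed))
             for label, rfp in candidate_rfps.items()}
    return min(dists, key=dists.get)
```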
Results We present part of the results in the following tables. In the first row, the initial cell contains the size of the generated networks (n), while the remaining cells specify the models employed for generating the 100 networks. The models to which the classification method maps the generated networks are presented in the first column. Specifically, ER corresponds to Erdos-Renyi, with the adjacent number indicating the probability (p) of edge occurrence. WS represents Watts-Strogatz, where the first number adjacent to it is the probability of edge rearrangement (p_r), and the second number denotes the value of the parameter k, the number of neighbors to which each node is attached. It is evident that when p=0.1, the method struggles to classify effectively due to the absence of formed motifs. As p increases, facilitating motif formation, the method achieves more accurate classification for nearly 50% of the networks. Notice, however, that most of the networks that are not classified to the correct model are classified to "related" models, i.e., WS models with a high probability of rearrangement of an edge and similar density. For example, if p=0.5, the generated network has expected density d_G=0.5. A WS network has density d_WS = nk/(n(n-1)/2) and, hence, if d_WS=0.5, we have that k=6. We display the relevant situation in bold in the corresponding entries of Table <ref>. For n=50, we observe the same phenomenon that we present in Table <ref>. For example, considering the case of 100 ER (25,0.5) graphs, we notice that 58 networks are correctly classified to their generative model, but 41 networks are classified to related models of similar density.
Interpretation The classification method fails to classify correctly a large amount of ER graphs because ER graphs are mapped to "related" models. This phenomenon arises due to the combinatorial explosion of generative models. To mitigate the combinatorial explosion, we decrease the number of possible generative models by treating the density of the generated networks as known.
§.§ Refinement of the Experimental Validation
Treating the density of an observed network as known, we reduce the number of possible generative models by calculating the parameter p of the Erdos-Renyi random graph and the parameter k of the Watts-Strogatz model. Let d_G denote the density of an observed network.
Configuration The configuration of the refined experimental validation is the same as in <ref>; we generate the same amount of networks (56.750). However, we first compute their density (d_G) before we classify them with our method. With this, we can calculate the parameters p of the ER graphs and k of the WS graphs. If we want to approximate the density of the observed network with the ER model, we have to set p = d_ER = d_G. If we want to approximate the density of the observed network with the WS model, we have to set:
d_WS = d_G ⇒ d_G = nk/(n(n-1)/2) ⇒ k = d_G · (n-1)/2.
Results The results are presented as in the previous section; in the first row, the initial cell contains the size of the generated networks (n), while the remaining cells specify the models employed for generating the 100 networks. The models to which the classification method maps the generated networks are presented in the first column.
Interpretation The effectiveness of the classification method is more sound for larger networks, as evident in Tables <ref>, <ref>, and <ref>.
This phenomenon can be attributed to Theorem <ref>, which ensures the uniqueness of the Rado graph and the resilience of the neighborhood surrounding the critical point. Conversely, as observed in Tables <ref> and <ref>, the method demonstrates robustness in the opposite direction. The majority of networks are accurately mapped to the corresponding small-world model with the parameters used for their generation. Let us consider the distinction between the pure random and the biased graph that we introduced in the abstract. In Table <ref>, we see that, for n=50 and p=0.5, there still exist 4 networks that look biased (numbers in bold). The latter leads us to the fact that a bias can be introduced arbitrarily. Conversely, in Table <ref>, we see that biased networks can also look random (numbers in bold).
§ APPLICATION TO TEMPORAL NETWORKS
We apply the RI to temporal networks: networks with a fixed number of nodes whose connectivity changes through time, i.e., links between any pair of nodes can appear or disappear between two time steps. In Section <ref> we discuss the process of deriving a temporal correlation network from a dynamical system studied by means of multivariate time series. Here, the connections between the nodes represent correlation. In Section <ref>, we apply the RI to stock networks <cit.>.
§.§ Derivation of Temporal Correlation Network
Each node v_i of the derived temporal networks represents a vector x_i of n observed values which correspond to measurements of time series data:
x_i = [x_i(1), …, x_i(n)].
We consider a dataset that contains the Morgan Stanley Capital International (MSCI) market capitalization weighted index of 55 developed markets. This means that the derived network consists of 55 nodes, i.e., V={v_1, …, v_55}, where each node v_i corresponds to a vector of values x_i of a particular market. Each vector x_i comprises 1305 daily indices for each market in the period from 5 March 2004 until 5 March 2009, excluding weekends and holidays. Thus, x_i = [x_i(1), …, x_i(1305)]. For example, at time t=15 the market i has the value x_i(15). We first apply a procedure of standard preprocessing steps to the original time series data. The overall procedure is summarized in Fig. <ref>.
Dismiss Trends and Periodicity In order to relieve the time series from tendency and periodicity, we compute the first differences of the logarithms of the original time series for the sake of stationarity:
y_i(t-1) = log(x_i(t)) - log(x_i(t-1)), ∀ i, t ∈ [2, …, 1305].
The difference reduces the tendency and the use of the logarithm decreases the variance. Therefore, we obtain the relevant returns y_i for every market i.
Timeseries Prewhitening The prewhitening process removes the autocorrelations of the time series y̅_i which may cause spurious cross-correlations. Firstly, the mean is subtracted from each y̅_i(t) using the formula y̅'_i = y̅_i - μ̅_i, where μ̅_i represents a vector whose entries are the mean value of y̅_i. Subsequently, an autoregressive (AR) model of order p=20 is fitted to each y̅'_i time series using the arx() function in Matlab. The AR model provides coefficients for the estimated model. Predicted values y̅''_i are obtained using the predict() function, and the mean subtracted earlier is added back to yield y̅'''_i. Finally, the residuals z̅_i are calculated as the difference between the predicted and the observed values, i.e., z̅_i = y̅'''_i - y̅_i, ∀ i. To conclude, the final result of the whole transformation is an array of residuals z̅_i.
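For readers without Matlab's System Identification Toolbox, the same preprocessing can be sketched in plain numpy; the AR(p) fit below uses ordinary least squares and is only a stand-in for the arx()/predict() pipeline described above:

```python
import numpy as np

def prewhiten(x, p=20):
    """Log-returns of one index followed by AR(p) prewhitening.

    x -- 1-D array of raw daily index values for one market.
    Returns the residual series z_i (observed minus AR-predicted returns,
    up to the sign convention used in the text).
    """
    y = np.diff(np.log(x))                 # returns: log x(t) - log x(t-1)
    yc = y - y.mean()                      # subtract the mean
    # design matrix whose j-th column holds the values lagged by j+1 steps
    X = np.column_stack([yc[p - j - 1 : len(yc) - j - 1] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, yc[p:], rcond=None)
    return yc[p:] - X @ coef               # whitened residuals
```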
Once the residuals are effectively whitened, they can be analyzed using standard statistical techniques without the complication of autocorrelation. We explain the prewhitening process more analytically in <ref>.
Dataset Partition We now partition the residuals z_i into m windows of chronologically consecutive measurements of equal length:
z_i_1 = [z_i(1), …, z_i(k)], z_i_2 = [z_i(k+1), …, z_i(2k)], …, z_i_m = [z_i((m-1)·k+1), …, z_i(n)].
The z_i_l represent the time windows, and a network is derived for each of them. Consider the MSCI dataset. We first perform prewhitening as described previously to obtain the residuals z_i. We then partition z_i into 87 windows, each one of them containing 15 values z_i(t) in chronological order:
z_i_1 = [z_i(1), …, z_i(15)], z_i_2 = [z_i(16), …, z_i(30)], …, z_i_87 = [z_i(86·15+1), …, z_i(1305)].
Create the Adjacency Matrices For each z_i_l with 1 ≤ l ≤ m, we derive a correlation network, denoted by A(l). In this network, the markets are represented by nodes, while statistically significant cross-correlations between the markets, evaluated at the last measurement of the time window z_i_l, are added as links. All previous measurements are utilized for assessing the statistical significance of the cross-correlation at the last time point. For the evaluation of statistical significance, we conduct a significance test based on the randomization of the original time series. This testing procedure involves randomizing the original time series and requires specifying a significance level α, which determines the density of the correlation network. A larger α implies more statistically significant correlations and, consequently, a higher density of the network. The overall process of deriving the adjacency matrices is summarized in Fig. <ref>.
§.§ Application to Stock Networks
Hypothesis and Dataset We conduct an econometric application by considering stock networks: networks whose nodes correspond to stock indexes while edges correspond to statistically significant correlations between the stock indexes. We propose that our framework can detect anomalies and change-points in systems studied by means of multivariate time series.
Configuration We partition the 55 time-sequence vectors of the MSCI dataset into 29 and 87 subvectors, each one of them containing 45 and 15 consecutive returns, respectively. That is:
z_i_1 = [z_i(1), …, z_i(45)], z_i_2 = [z_i(46), …, z_i(90)], …, z_i_29 = [z_i(28·45+1), …, z_i(1305)],
while in the second case we have:
z_i_1 = [z_i(1), …, z_i(15)], z_i_2 = [z_i(16), …, z_i(30)], …, z_i_87 = [z_i(86·15+1), …, z_i(1305)].
Compute Randomness In order to compute the randomness of each element we follow Fig. <ref>. For each A(l), we compute the relevant RFP, denoted by _A(l) (as described in Routine <ref> and displayed in the upper left arrow of Fig. <ref>). We also compute the density of A(l), denoted by d_A(l), and insert it as the probability parameter for the calculation of the critical point _(55,d_A(l)) (see the bottom left arrow of Fig. <ref>). The randomness for a particular time window l is the Euclidean distance:
‖_A(l) - _(55,d_A(l))‖.
Results We summarize the results in Fig. <ref>. The x-axis shows the dates and the y-axis the value of randomness. The figures display the results for different values of α, as shown in the legends in the upper-right part of each figure. Particularly, the empty circles are the values of randomness in the case α=0.05, the full circles in the case α=0.03, and the triangles in the case α=0.1.
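A sketch of the adjacency-matrix construction for one window is given below; the permutation test is a generic randomization test and only approximates in spirit the exact test used here (whose details are in the cited references):

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_network(Z, alpha=0.05, n_perm=200):
    """Adjacency matrix A(l) for one time window.

    Z -- (markets x window-length) array of residuals.  A link i--j is added
    when |corr(z_i, z_j)| exceeds the (1 - alpha) quantile of the correlations
    obtained after randomly permuting one of the two series.
    """
    n = Z.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            c = abs(np.corrcoef(Z[i], Z[j])[0, 1])
            null = [abs(np.corrcoef(rng.permutation(Z[i]), Z[j])[0, 1])
                    for _ in range(n_perm)]
            if c > np.quantile(null, 1 - alpha):
                A[i, j] = A[j, i] = 1
    return A
```

The density of the resulting A(l), together with the RFP and critical-point sketches given earlier, then yields the randomness value of the window.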
Interpretation As we see in Fig. <ref>, the peak is reached in August 2007 (just before 2008 on the x-axis), marking the onset of the 2007-2008 global financial crisis, a profound economic downturn that unfolded in the early 21st century. It is worth noting that the location of the peak remains consistent across the three selected values of the parameter α. Even when the dataset is divided into 29 subsets instead of 87, the results regarding the timing of the NRI's maximum value are unchanged. The application to stock networks, in this case, shows that the NRI is high during recession periods, a fact that reassures us of the robustness of the NRI. We propose that the NRI can be utilized for anomaly detection in dynamical systems studied by means of multivariate time series.
§ CONCLUSION AND FUTURE WORK
We propose a graph embedding technique where we represent a graph as a point in a 6-dimensional space. Each coordinate in this space is the relative frequency of a particular network motif. By defining the unique point of the random (Rado) graph in the same 6D space, we can quantify the randomness of an observed network by the proximity of its point to the point of the random graph. Motifs drive network randomness beyond probability.
Future Work By extending to directed graphs, we can study the randomness of causality networks. It will also be interesting to apply the RI to other dynamical systems studied by means of multivariate time series (like EEGs). Some rough simulations that we performed show that multivariate time series with trend and autocorrelation construct random networks, in contrast with prewhitened time series (noisy signals), which tend to construct ordinary lattices. Instead of the Pearson correlation coefficient and the prewhitening, one could also use the mutual information. Last but not least, an extension to graphlets of size 5 is also feasible.
Related Work The initial work contained a comparative analysis and separation between three different network types: random, small-world, and scale-free networks. For the case of scale-free networks we considered two different generators: the Barabasi-Albert mechanism and a mechanism that generates scale-free networks with a power-law degree distribution of a user-defined exponent. However, scale-free networks can be recognised by the naked eye, so we omit this analysis. For a motif analysis of different generators of scale-free networks, we refer the reader to <cit.>. Moreover, in <cit.>, the authors experimentally prove that scale-free networks display a wider spectrum of properties that the Barabasi-Albert preferential attachment mechanism fails to reproduce. Recent evidence suggests that the node degree distribution of scale-free networks is heavy-tailed and not always power-law.
§ ACKNOWLEDGEMENTS
Part of this work was done during the master on Web Science at the School of Sciences, Department of Mathematics of Aristotle University of Thessaloniki in Greece (2014-2016, see thesis <cit.>). Thanks to Dimitris Kugiumtzis for his guidance in the very early steps of this work, for providing some pieces of Matlab code, and for providing the Morgan Stanley Capital International dataset. The work was also partially supported by the Ph.D. school of education of the Technical University of Denmark (Department of Applied Mathematics and Computer Science, Section Software Systems Engineering) while the writer of the current manuscript was a Ph.D. student.
§ EMPIRICAL CONSTRUCTION OF THE RFP OF SMALL-WORLD NETWORKS
The algorithm for the empirical construction of this RFP incorporates two routines/functions: the CreateSmallWorldGraph routine and the ComputeRFP routine. All implementations can be found on <https://github.com/GeorgiosArg/Diagnosis-of-Small-World-Bias-in-Random-Graphs>.
§.§ CreateSmallWorldGraph Routine
The CreateSmallWorldGraph routine is a function taken from <cit.>. It inputs the small-world parameters (n,k,p_r) of Def. <ref> and generates a small-world graph according to the Watts-Strogatz model.
§.§ ComputeRFP Routine
For the sake of this work, we implemented the ComputeRFP routine, which inputs a network and outputs its RFP. This routine first enumerates the occurrences of each one of the motifs of Fig. <ref>. The initial implementation inputs the graph as a matrix and iterates over all different tetrads (i.e., 4-tuples) of nodes, i.e., each 4×4 submatrix of the matrix representation of the network, and maps it to the corresponding motif of Fig. <ref>, if any of these is formed. This is computationally expensive. Each motif of Fig. <ref> can be represented by more than one adjacency matrix, and the adjacency matrices are as many as the different patterns of each motif in Fig. <ref>. To overcome this problem, smart implementations like the Brain Connectivity Toolbox <cit.> use dictionaries as the data structure to represent graphs. We incorporated this implementation and, in order to further reduce complexity, we employed one more fact: for graphs of equal size n with n ≤ 4, the graphs have the same degree distribution if and only if they are isomorphic (see Fig. <ref>). A Python sketch of this degree-sequence shortcut is given at the end of this appendix.
§.§ The Construction Algorithm
The m determines how many networks we generate or, in other words, the number of RFPs that we average. When we explore networks of size n=25 and n=50, we generate 100 networks, while for size n=75 we average over 50 networks due to the complexity of the ComputeRFP routine that we explain later. Inside the two for-loops which iterate over p_r and k, we compute the RFP for that specific p_r and k. p_r iterates over 10 possible values, i.e., 0, 0.1, …, 0.9. Thereafter, the algorithm iterates over k from 2 to k_max. The k_max in the second iteration of this algorithm refers to the largest value that the parameter k can obtain, which increases with respect to n; the more nodes on the circle, the more potential neighbors a node can obtain. The step is 2 because k has to be an even number, since every node is connected to the two nearest neighbours (one to the left and one to the right) in the circular configuration of the Watts-Strogatz model. It has to hold that k_max < n, i.e., the degree k_max of a node must not exceed the size of the network. Hence, k_max is given by the following piecewise function:
k_max = n-2 if n is even, and k_max = n-1 if n is odd.
To sum up, the length of the list of RFPs (the amount of generated RFPs) depends on the size (n=|V|) of the observed network that we aim to classify. For example, if the observed network that we want to classify has size 55, the value of k_max is 54, so k ranges over 54/2 = 27 admissible values. Hence, the list contains 10 × 27 = 270 RFPs, because p_r can take 10 different values 0, 0.1, …, 0.9.
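To make the degree-sequence shortcut concrete, here is a Python rendering of the ComputeRFP routine (the motif ordering is the same assumption as in the earlier sketch; the actual Matlab implementation lives in the repository):

```python
from itertools import combinations
import networkx as nx

# On at most 4 nodes, the sorted degree sequence of an induced subgraph
# determines its isomorphism class -- the fact exploited above.
DEGSEQ_TO_MOTIF = {
    (1, 1, 2, 2): 0,  # m1: sequential (path)
    (1, 1, 1, 3): 1,  # m2: star
    (2, 2, 2, 2): 2,  # m3: cycle
    (1, 2, 2, 3): 3,  # m4: triangle with a pendant edge
    (2, 2, 3, 3): 4,  # m5: diamond
    (3, 3, 3, 3): 5,  # m6: complete graph K4
}

def compute_rfp(G):
    """Relative frequencies of the six connected 4-node motifs of G."""
    counts = [0] * 6
    for tetrad in combinations(G.nodes, 4):          # O(n^4) tetrads
        seq = tuple(sorted(d for _, d in G.subgraph(tetrad).degree()))
        idx = DEGSEQ_TO_MOTIF.get(seq)               # disconnected -> None
        if idx is not None:
            counts[idx] += 1
    total = sum(counts)
    return [c / total for c in counts] if total else counts

print(compute_rfp(nx.erdos_renyi_graph(25, 0.5)))
```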
§ CONSTRUCTION OF THE LIST OF THE RFPS THAT CORRESPOND TO THE ER AND SW NETWORK TOPOLOGIES
For example, if n=25 we constructed 9 RFPs for the ER graph for the different values p=0.1, …, 0.9 and 120 RFPs for the WS graph for the different values of p_r and k (12 admissible values of k times 10 values of p_r). The list in this case contains 129 RFPs. Similarly, for n=50 the list contains 249 RFPs and for n=75 the list contains 379 RFPs. The routines CreateSmallWorldGraph and ComputeRFP are explained in the previous Appendix <ref>, in the corresponding Sections <ref>, <ref>. All implementations can be found on <https://github.com/GeorgiosArg/Diagnosis-of-Small-World-Bias-in-Random-Graphs>.
§ THE CLASSIFICATION ALGORITHM
§ FURTHER ANALYSIS OF THE CLASSIFICATION METHOD
* Construction of the RFP for each model and for each one of their parameters, so as to compute the corresponding averaged point _j:
* j = 1, …, 9 for ER(p) when p=0.1, 0.2, …, 0.9;
* j = 10, 11, …, n/2 × 10 for WS when k=2, 4, 6, …, n/2 and p_r = 0, 0.1, 0.2, …, 0.9.
* Computing the corresponding RFP of the observed network G, which is denoted as _G.
* The generative model/target function is the one that minimizes the Euclidean distance between the point _G and the points constructed in step 1: argmin_j ‖_G - _j‖.
In other words, we first embed the topologies into their relative frequency points (RFPs), which are 6-dimensional points with each dimension corresponding to the relative frequency of a motif. The topology of an observed network is determined by the following steps: (i) compute the RFP of the observed network, (ii) compute the Euclidean distance between the RFP of the observed network and the RFPs that characterize the different network topologies, and (iii) the topology is the one whose RFP minimizes the Euclidean distance to the RFP of the observed network.
§ PREWHITENING
The prewhitening removes the autocorrelations of the time series y̅_i which may cause spurious cross-correlations. We first subtract the mean as follows:
y̅'_i = y̅_i - μ̅_i, ∀ i,
where μ̅_i is an array whose entries μ̅_i(t), t ∈ {1, …, 1305}, all equal the mean μ_i of y̅_i averaged over all t. We now fit an autoregressive (AR) model to the time series y̅'_i. We selected an autoregressive model of order p=20 by default:
AR = arx(y̅'_i, p), ∀ i.
The arx() is a built-in function of Matlab incorporated into the System Identification Toolbox which fits an autoregressive model of order p to the time series y̅'_i = y̅_i - μ̅_i. The resulting AR object contains the coefficients of the estimated model. By feeding the AR model with the time series y̅'_i, we obtain the predicted values y̅''_i:
y̅''_i = predict(AR, y̅'_i), ∀ i.
The predict() is another built-in function of Matlab incorporated into the System Identification Toolbox; AR is the autoregressive model obtained from the arx() function of equation (5), and y̅'_i is the input data used for prediction. We then add the mean that we subtracted in equation (4):
y̅'''_i = y̅''_i + μ̅_i, ∀ i.
Finally, we proceed with the calculation of the residuals, the part of the original series that is not explained by the fitted model. We calculate the residuals as the difference between the predicted and the observed values:
z̅_i = y̅'''_i - y̅_i, ∀ i.
The final result of the whole transformation is an array of residuals z̅_i. Once the residuals are effectively whitened, they can be analyzed using standard statistical techniques without the complication of autocorrelation. All implementations can be found on <https://github.com/GeorgiosArg/Diagnosis-of-Small-World-Bias-in-Random-Graphs>.
http://arxiv.org/abs/2312.16525v1
{ "authors": [ "Georgios Argyris" ], "categories": [ "cs.SI" ], "primary_category": "cs.SI", "published": "20231227110932", "title": "Diagnosis of Small-world Bias in Random Graphs" }
St.Petersburg State University, Universitetskaya nab. 7/9, St.Petersburg, 199034, Russia
The paper addresses the construction of an error correction code for quantum computations based on squeezed Fock states. It is shown that the use of squeezed Fock states makes it possible to satisfy the Knill-Laflamme (KL) criteria for bosonic error correction codes. It is shown that the first squeezed Fock state corrects both photon loss and dephasing errors better than higher-order states. A comparison of the proposed protocol with an error correction protocol based on squeezed Schrodinger cat states is carried out on the basis of the KL cost function. It is shown that the squeezed first Fock state better protects a channel with photon loss and dephasing.
Error Correction Using Squeezed Fock States
S. B. Korolev, E. N. Bashmakova, T. Yu. Golubeva
January 14, 2024
====================================================
§ INTRODUCTION
The main goal of quantum informatics and quantum optics is to build a universal quantum computer. At the moment, quantum computing is in the so-called NISQ (Noisy Intermediate-Scale Quantum) era <cit.>. This era is characterized by the creation of computing systems with a limited number of logic elements and no error correction procedure. When developing medium-scale computing, the primary concern is reducing the errors during computations that are associated with imperfections of the physical systems and the impossibility of completely isolating them from the environment. However, error suppression is not enough to build a full-scale computing procedure. Error correction is necessary to prevent errors from accumulating as operations are performed. Quantum error correction (QEC) codes are used to detect and correct errors <cit.>. At present, error correction protocols based on different quantum states have been proposed. For example, encoding information in GKP (Gottesman-Kitaev-Preskill) states <cit.> allows one to construct a protocol for displacement error correction. However, the generation of states with properties approaching GKP states is an extremely difficult experimental task, although there is some progress in this direction <cit.>. Schrodinger cat states <cit.> are another example of quantum states applied to QEC. These states, being a superposition of two coherent states (|α⟩ and |-α⟩), can protect information from photon loss errors. For effective protection, cat states with amplitude α = 2 or higher are required <cit.>. Currently, several protocols have been proposed to generate states similar to Schrodinger cat states <cit.>. However, the achievable values in the optical range (|α| ≤ 1.9) <cit.> are insufficient for error correction protocols to work. In the microwave range, Schrodinger cat states with high fidelity and large amplitude α can be generated <cit.>. However, in such systems there are other difficulties, related to suppressing the always-on Kerr nonlinearity, which prevents individual control of logic states <cit.>. From the point of view of QEC codes, the squeezed Schrodinger cat (SSC) states <cit.> are of interest. Based on SSC states, an error correction code was developed that is capable of simultaneously correcting two types of errors: phase errors and photon loss errors. Unlike traditional Schrödinger cat states, the SSC states should have a large squeezing degree and a small amplitude α for incorporation into QEC protocols. Protocols for generating SSC states are proposed and discussed in <cit.>.
However, the question of efficient generation of these quantum states with high fidelity and probability remains open. Error correction codes based on GKP states and Schrodinger cat states are examples of bosonic QEC codes <cit.>, where information is redundantly encoded in quantum oscillator states. At the heart of each QEC code is the requirement to satisfy the Knill and Laflamme conditions (KL conditions) <cit.>. When these conditions are approximately met, the code is known as an approximate quantum error correction code <cit.>. All bosonic codes known to date are approximate QEC codes. Squeezed Fock (SF) states <cit.> are an example of well-studied non-Gaussian states. However, such states have not, to the authors' knowledge, been considered yet as a resource for QEC. In our work, we demonstrate that it is possible to construct an approximate error correction code based on SF states. Using the KL cost function <cit.> as a measure, we first examine different SF states and ask which SF states are better suited for correcting photon loss and dephasing errors. Next, using this function, we compare our protocol with a protocol using SSC states.
§ ERROR CORRECTION USING SQUEEZED FOCK STATES
Let us say that we have a quantum channel in which there are two types of errors: particle loss and dephasing. The evolution of the state described by the density matrix ρ̂ in this channel is given by the following Lindblad master equation:
∂_t ρ̂(t) = ℒρ̂(t) = κ_1 𝒟[â]ρ̂(t) + κ_2 𝒟[â^†â]ρ̂(t),
where
𝒟[Ĵ]ρ̂(t) = Ĵρ̂(t)Ĵ^† - (Ĵ^†Ĵρ̂(t) + ρ̂(t)Ĵ^†Ĵ)/2,
and the annihilation â and creation â^† operators obey the canonical commutation relation [â,â^†]=1. The superoperators 𝒟[â] and 𝒟[â^†â] describe the particle loss and dephasing at rates κ_1 and κ_2, respectively. The solution to equation (<ref>) can be written in the form of the Kraus decomposition <cit.>:
ρ̂(t) = ∑_j=0^∞ K̂_j ρ̂(0) K̂_j^†.
In the case when the error accumulation rate is small, κ_1,2 t ≪ 1, only three operators can be kept in the decomposition:
K̂_0 = Î - (κ_1 t/2) â^†â - (κ_2 t/2)(â^†â)^2, K̂_1 = √(κ_1 t) â, K̂_2 = √(κ_2 t) â^†â.
It turns out that at low error rates (κ_1,2 t ≪ 1), in a channel with particle loss and dephasing, it is possible to protect information if one is able to correct the following set of elementary errors:
ℰ = {Î, â, â^†â, (â^†â)^2}.
Thus, our goal is to encode information in a way that protects it from these errors. To this end, we consider a bosonic QEC code <cit.>, in which we encode the two logical states of a qubit into a quantum oscillator. In the bosonic QEC we consider, the code space is a two-dimensional subspace of an infinite-dimensional Hilbert space. The code space is characterized by two basis states (code words) that encode the logical states |0⟩_L and |1⟩_L. When individual errors act on the code words, they move to other states from the error subspace. The error subspace vectors associated with different errors must be orthogonal to each other and to the code subspace vectors. All these requirements are conveniently written in the form of the KL conditions <cit.>:
⟨ i_L|Ê_a^†Ê_b|j_L⟩ = δ_ij α_ab,
where i,j ∈ {0,1}, Ê_a, Ê_b ∈ ℰ, and α_ab is a matrix that does not depend on i and j. The presented condition is a necessary and sufficient condition for the recovery of the code words after the action of the errors. To correct both particle loss and dephasing errors, we use a bosonic QEC code based on SF states, which we define next.
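As a quick numerical sanity check on the truncated channel (our own illustration, with an assumed Fock-space cutoff), the three Kraus operators preserve the trace up to second order in κt on low-lying Fock states:

```python
import numpy as np

N = 40                                     # Fock-space truncation (an assumption)
n_op = np.diag(np.arange(N, dtype=float))  # a^dag a
a = np.diag(np.sqrt(np.arange(1.0, N)), 1) # annihilation operator
k1t, k2t = 1e-3, 1e-3                      # kappa_1 t, kappa_2 t << 1

K0 = np.eye(N) - 0.5 * k1t * n_op - 0.5 * k2t * n_op @ n_op
K1 = np.sqrt(k1t) * a
K2 = np.sqrt(k2t) * n_op

S = K0.T @ K0 + K1.T @ K1 + K2.T @ K2      # sum_j K_j^dag K_j (all real here)
dev = np.abs(np.diag(S - np.eye(N)))
print(dev[:6])  # equals (k1t*n/2 + k2t*n^2/2)^2: second order, as expected;
                # the expansion is meaningful only while kappa_2 t n^2 << 1
```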
These states are defined as follows:
|r,n⟩ = Ŝ(r)|n⟩,
where Ŝ(r) is the squeezing operator, r is the squeezing parameter, and |n⟩ is the Fock state. To correct photon loss and dephasing errors, we require states with a specific parity <cit.> and structure in phase space <cit.>. SF states satisfy all these properties. As code words, we will consider two states of the form
|0_L;n⟩ = Ŝ(r)|n⟩,  |1_L;n⟩ = Ŝ(-r)|n⟩.
The represented states are squeezed in orthogonal directions on the phase plane. The code words we introduced depend on two parameters: the squeezing degree and the Fock state number. These two parameters can be used to optimize the error correction protocol. Below we will study the dependence of the code words on the number of the Fock state, but first let us look at the KL conditions for the first SF state.

§ QUANTUM ERROR CORRECTION CODE BASED ON THE FIRST SQUEEZED FOCK STATE

Let us consider correcting both particle loss and dephasing errors using the first SF state. In this case, the code words are the following states:
|0_L;1⟩ = Ŝ(r)|1⟩,  |1_L;1⟩ = Ŝ(-r)|1⟩.
Using these states, we can write the KL conditions for the orthogonality of the states after errors from the set (<ref>) act on them. All these conditions are presented in Table <ref>. The KL condition associated with the error norm is always satisfied for code words (<ref>) and (<ref>). I.e., for any pair of errors from the set (<ref>), the following statement is true:
⟨0_L;1|Ê_a^†Ê_b|0_L;1⟩ = ⟨1_L;1|Ê_a^†Ê_b|1_L;1⟩.
The exact fulfillment of this condition is an undoubted advantage of this code. This distinguishes our proposed approximate code from others in which this condition is only approximately satisfied <cit.>. It is important to note that the states used for encoding can be accurately generated experimentally <cit.>. This means that the exact equality-of-error-norms condition is not violated in the experiment.

For code words based on the first squeezed Fock state to be useful for correcting particle loss and dephasing errors, all elements in Table <ref> must be small. I.e., we need to require the following expressions to tend to zero:
A_1 = ⟨i_L;1|j_L;1⟩ = 1/cosh^{3/2} 2r,
B_1 = ⟨i_L;1|â^†â|j_L;1⟩ = (3 - cosh 2r)/(2 cosh^{5/2} 2r),
C_1 = ⟨i_L;1|(â^†â)^2|j_L;1⟩ = (25 - 12 cosh 2r - 5 cosh 4r)/(8 cosh^{7/2} 2r),
D_1 = ⟨i_L;1|(â^†â)^3|j_L;1⟩ = (282 - 129 cosh 2r - 138 cosh 4r + 17 cosh 6r)/(32 cosh^{9/2} 2r),
E_1 = ⟨i_L;1|(â^†â)^4|j_L;1⟩ = (4203 - 1560 cosh 2r - 3236 cosh 4r + 600 cosh 6r + 121 cosh 8r)/(128 cosh^{11/2} 2r),
where i ≠ j, i,j ∈ {0,1}. For convenience of analysis, we plot the absolute values of these functions in Fig. <ref>. The graph shows that different functions have different zeros. The zeros do not coincide, and the zero of one function corresponds to large values of another. In addition, the function |A_1| has no zeros, indicating the absence of orthogonality of the code words for any squeezing parameter r. Thus, the only way to achieve orthogonality of the states after the action of errors is to use states with a large parameter r. For example, for r=2 we get the following values: A_1 ≈ 7·10^{-3}, B_1 ≈ -3·10^{-3}, C_1 ≈ -9·10^{-3}, D_1 ≈ 10^{-2}, E_1 ≈ 5·10^{-2}. We see that they are small but not equal to zero. This means that the code we have presented is an approximate bosonic QEC <cit.>. All bosonic QEC codes belong to this type of code <cit.>. Thus, the proposed QEC code based on the first SF state can correct low-rate particle loss and dephasing errors under large squeezing parameters.
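These closed-form expressions are easy to verify numerically. The short sketch below is ours (the function name is arbitrary); it evaluates A_1, ..., E_1 and reproduces the values quoted above for r = 2.

```python
import numpy as np

def kl_overlaps_first_sf(r):
    """Closed-form KL overlaps A_1, ..., E_1 of the first squeezed Fock
    code, as given in the display above; all tend to zero as r grows."""
    c2, c4, c6, c8 = (np.cosh(k * r) for k in (2, 4, 6, 8))
    A = 1 / c2 ** 1.5
    B = (3 - c2) / (2 * c2 ** 2.5)
    C = (25 - 12 * c2 - 5 * c4) / (8 * c2 ** 3.5)
    D = (282 - 129 * c2 - 138 * c4 + 17 * c6) / (32 * c2 ** 4.5)
    E = (4203 - 1560 * c2 - 3236 * c4 + 600 * c6 + 121 * c8) / (128 * c2 ** 5.5)
    return A, B, C, D, E

# For r = 2 this prints values matching the text:
# A ~ 7e-3, B ~ -3e-3, C ~ -9e-3, D ~ 1e-2, E ~ 5e-2.
print([f"{x:.0e}" for x in kl_overlaps_first_sf(2.0)])
```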
§ COMPARISON OF QUANTUM ERROR CORRECTION CODES BASED ON SQUEEZED FOCK STATES WITH DIFFERENT NUMBERS

In the previous section we considered the first SF state as code words. Now let us look at other SF states and understand which ones are best suited for correcting particle loss and dephasing errors. Appendix <ref> presents the KL conditions for encodings using different SF states (with different n). From the presented conditions, it is clear that for any n the KL conditions on the error norm are exactly satisfied, while the KL conditions on error orthogonality are approximately satisfied for large r. All this means that codes based on an SF state with arbitrary n can be used to correct photon loss and dephasing errors. Once we have established that SF states of arbitrary number n are suitable for QEC, we can proceed to compare them. As a measure for comparing codes, we will use the KL cost function proposed in <cit.> and used to compare two codes in <cit.>. The KL cost function is given by the following expression:
C_KL({Ê}) = ∑_{a,b} (|f_{00ab} - f_{11ab}|^2 + |f_{01ab}|^2),
where the KL tensor is given by
f_{ijab} = ⟨i_L|Ê^†_a Ê_b|j_L⟩,
and {Ê} ⊂ ℰ is the set of error operators over which the sum is taken. I.e., we can evaluate codes by their ability to correct both individual types of errors and their combination. In the limit of a perfect code, for which the KL conditions (<ref>) are exactly satisfied, the value of the KL cost function is equal to zero. The larger the value of the KL cost function, the worse the approximate code corrects errors <cit.>. Fig. <ref> shows the dependence of the KL cost function on the squeezing parameter for various sets of errors and different code words.

Fig. <ref>a shows that, to correct particle loss errors, we can achieve zeros at specific finite values of the parameter r for n=2,3,4. In other words, when the second, third, fourth, and higher-order SF states are used for encoding, we have an ideal code that protects information from the loss of particles. However, in this case we cannot protect information from dephasing. From Figs. <ref>b and <ref>c, it is clear that in the region r ∈ [0.2,1], the codes with n=2,3,4 have values close to the maximum. Given that our goal is to correct particle loss and dephasing errors, we need to pay attention to Fig. <ref>d, which shows the KL cost functions for both types of errors. It is clear from this figure that the KL cost function takes its minimum values for the first SF state starting from a certain squeezing parameter, r>1.7. This means that for protecting information from the two types of errors, the first SF state with a high squeezing degree is better suited (among all the cases considered).

This statement can be strengthened: the first SF state protects information from loss and dephasing errors best among SF states with different numbers. This follows directly from two facts. First, as shown in Appendix <ref>, the KL conditions for SF states with odd numbers tend to zero faster than the same conditions for even numbers. Second, it follows from Table <ref> that the first SF state has the smallest asymptote of the KL cost function (for large values of r) among all possible odd SF states. Thus, we can conclude that if we have a channel with only particle loss errors, then SF states with numbers greater than one will be the best for information protection.
If, in the experiment, we manage to generate these states with certain values of r (values corresponding to the zeros of the KL cost function), then we will obtain an ideal QEC code for the particle loss error. However, since the dips corresponding to the zeros of the cost function (see Fig. <ref>a) are very narrow, we will need high accuracy in generating the states. If we have a channel with two types of errors, then it is better to use the first SF state with the maximum possible squeezing degree.

§ COMPARISON OF ERROR CORRECTION PROTOCOLS BASED ON SQUEEZED SCHRODINGER'S CAT STATES AND SQUEEZED FOCK STATES

Let us compare our protocol with the protocol based on the SSC states. This comparison is motivated by the fact that the protocol using the SSC states is able to correct particle loss and dephasing errors simultaneously <cit.>. Furthermore, this code is similar to our proposed code in terms of experimental realization. Both SF states <cit.> and states close to the SSC states can be obtained experimentally (in an optical scheme with measurements). Thus, we compare two codes that can be realized experimentally. Two states are used as code words in the protocol based on the SSC states:
|0_L,SCS⟩ = (1/N_+)(|α,r⟩ + |-α,r⟩),  |1_L,SCS⟩ = (1/N_-)(|α,r⟩ - |-α,r⟩),
where |α,r⟩ = Ŝ(r)D̂(α)|0⟩ are squeezed coherent states, and N_± = √(2(1 ± e^{-2α^2 e^{2r}})) is the normalization factor. As we found out in the previous section, the best way to correct the two types of errors is to use the encoding with the first SF state. We will compare this encoding with the encoding based on the SSC states.

The KL cost function graphs for the two different encodings are shown in Fig. <ref>. Fig. <ref>a shows that the first SF state corrects particle loss errors significantly better. Indeed, for almost all squeezing parameters r, the KL cost function is smaller for the encoding with the first SF state than for the encoding with the SSC states. As for the dephasing errors (Figs. <ref>b and <ref>c), everything depends on the relationship between the amplitude and the squeezing parameter of the SSC state. For example, for α=0.5 and r<1.7, the first SF state corrects the phase error better (Fig. <ref>b). However, for large amplitudes α, the situation is the opposite. The ability to correct the (â^†â)^2 error also depends on the value of the parameter α. When the amplitude is large, the code using the SSC states corrects this type of error better than the first SF state (Fig. <ref>c). At the same time, the larger the amplitude of the cat state, the smaller its ability to correct particle loss errors (see Fig. <ref>a). The combination of these two factors leads to the SF state being generally better at correcting errors from the set ℰ (see Fig. <ref>d). It is important to note that it does not follow from Fig. <ref>d that the code based on the SSC state is bad at correcting errors of both types. To assess the quality of the code, one should perform a more detailed analysis, with estimation of the channel fidelity and construction of recovery operations. The results obtained using the KL cost function only allow us to compare encodings. Thus, the obtained estimates allow us to state that, other things being equal, the encoding with the first SF state corrects errors better than the encoding with the SSC states.

§ CONCLUSION

The paper addressed the problem of quantum error correction (QEC) in a quantum channel where particle loss and dephasing errors are present.
We have shown that squeezed Fock (SF) states can be used to encode information in such a channel. These states have a certain parity and a structure in phase space, which is why we considered them as the main resource for QEC. We have shown that the approximate Knill-Laflamme (KL) conditions are satisfied for SF states with arbitrary numbers n. The conditions on the orthogonality of the errors are satisfied when the squeezing of the SF states tends to infinity. The condition on the norm of the errors is always satisfied for SF states. To compare different quantum codes with each other, we exploit the KL cost function. This is a function indicating how much the KL conditions are violated for selected code words. Using this function, one can give a quantitative measure for evaluating different code words.

Applying this measure, we have shown that the first SF state is the best for information protection in a channel with both particle loss and dephasing errors. In this case, the squeezing degree should be large enough. In this paper, we have found that for a squeezing parameter r>1.7, the code based on the first SF state performs better than the code based on SF states with any other number. Considering a channel with only particle loss errors, we obtain a perfect QEC code using SF states with a certain squeezing degree and number n>1. For this code, the KL conditions are perfectly satisfied. However, it is important to note that implementing such a code in an experiment is quite challenging, since we have to tune the parameter r precisely. We compared code words based on the first SF state with code words based on the squeezed Schrödinger cat states. We have shown that the code based on the first SF state is better suited for information protection in a channel where both particle loss and dephasing errors are present. We demonstrated that, for the same squeezing degree of the two states, the KL cost function of the SF state is smaller for a channel with two types of errors. In other words, the first SF state better protects the information in a channel where both particle loss and dephasing errors are present.

§ FUNDING
This work was financially supported by the Russian Science Foundation (Grant No. 24-22-00004).

§ ACKNOWLEDGMENTS
The authors are grateful to Prof. A. K. Tagantsev for fruitful discussion and valuable advice.

§ DISCLOSURES
The authors declare no conflicts of interest.

§ DATA AVAILABILITY
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

§ KL CONDITIONS FOR ENCODING USING SQUEEZED FOCK STATES WITH ARBITRARY N

Let us see what the KL conditions (<ref>) look like for an encoding using SF states with an arbitrary number n:
|0_L;n⟩ = Ŝ(r)|n⟩,  |1_L;n⟩ = Ŝ(-r)|n⟩.
As for the first SF state, the error norm condition is exactly satisfied for any n:
⟨0_L;n|Ê_a^†Ê_b|0_L;n⟩ = ⟨1_L;n|Ê_a^†Ê_b|1_L;n⟩.
It turns out that the KL conditions for orthogonality depend on the parity of the code words used. The KL conditions for error orthogonality for encoding using odd SF states are presented in Table <ref>. We see that all non-zero conditions tend to zero for large squeezing parameter r, as 𝒪(e^{-3r}). The KL conditions for error orthogonality for encoding using even SF states are presented in Table <ref>. From the table we see that all non-zero conditions tend to zero at large squeezing parameter r, as 𝒪(e^{-r}).
Comparing Table <ref> with Table <ref>, we can conclude that the use of SF states with odd numbers is better, because the KL conditions for them tend to zero faster than the same conditions for even-number SF states. Furthermore, it is not difficult to show that every non-zero element of Table <ref> takes its minimum value in the case n=1. This means that the KL conditions are best satisfied for the first SF state. It also follows that for large r the KL cost function for the first SF state has the minimum value among all possible SF states.
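As a numerical cross-check of these appendix statements, the KL tensor and the cost function C_KL can also be evaluated directly on a truncated Fock space. The sketch below is ours, not part of the original protocol; it assumes the QuTiP library is available, and the Fock-space cutoff must be large compared with the photon content of the squeezed states for the numbers to be meaningful.

```python
import numpy as np
from qutip import basis, destroy, qeye, squeeze

def kl_cost(r, n, cutoff=400):
    """KL cost C_KL for the squeezed Fock code |0_L> = S(r)|n>,
    |1_L> = S(-r)|n>, with the error set {I, a, a^dag a, (a^dag a)^2}.
    Truncation artifacts appear when `cutoff` is too small for the
    chosen squeezing r."""
    a = destroy(cutoff)
    num = a.dag() * a
    errors = [qeye(cutoff), a, num, num ** 2]
    code = [(squeeze(cutoff, r) * basis(cutoff, n)).unit(),
            (squeeze(cutoff, -r) * basis(cutoff, n)).unit()]
    def f(i, j, Ea, Eb):                       # KL tensor f_{ij,ab}
        return code[i].overlap(Ea.dag() * Eb * code[j])
    return sum(abs(f(0, 0, Ea, Eb) - f(1, 1, Ea, Eb)) ** 2
               + abs(f(0, 1, Ea, Eb)) ** 2
               for Ea in errors for Eb in errors)

# Odd-number codes should decay faster in r than even-number ones:
for n in (1, 2, 3):
    print(n, kl_cost(r=1.0, n=n))
```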
Clustering Sets of Functional Data by Similarity in Law

Dedicated to the memory of Antonio Galves

Antonio Galves^1,†, Fernando Najman^2, Marcela Svarc^3,4, and Claudia D. Vargas^5
^1 Instituto de Matemática e Estatística, Universidade de São Paulo, São Paulo, Brazil, † Deceased
^2 Instituto de Computação, Universidade Estadual de Campinas, Campinas, Brazil, [email protected]
^3 Departamento de Matemática y Ciencias, Universidad de San Andrés, Argentina, [email protected]
^4 CONICET
^5 Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil, [email protected]

We introduce a new clustering method for the classification of functional data sets by their probabilistic law, that is, a procedure that aims to assign data sets to the same cluster if and only if the data were generated by the same underlying distribution. The method has the virtue of being unsupervised and non-parametric, allowing for exploratory investigation with few assumptions about the data. Rigorous finite bounds on the classification error are given, along with an objective heuristic that consistently selects the best partition in a data-driven manner. Simulated data have been clustered with this procedure to show the performance of the method with different parametric model classes of functional data.

Keywords: Kolmogorov-Smirnov statistics, Concentration inequalities, Projection procedure.

§ INTRODUCTION

An important problem in functional data analysis is to identify, among different data sets, those generated by the same probabilistic law. To solve this problem, we propose a novel method that clusters sets of functional data by their similarity in law. The method is non-parametric and unsupervised, giving it a broad range of applications. The procedure measures the similarity between the data sets' empirical distributions and uses this similarity measure to conduct a hierarchical clustering procedure. The partition of the data sets results in a label for each data set which represents estimated equivalence in law. While there are diverse options proposed for clustering functional data samples, clustering procedures for sets of functional data are a less discussed topic.

Clustering sets of real data by their distribution was discussed in Mora-López and Mora <cit.> and Zhu et al. <cit.>. However, methods for higher dimensional data cannot be trivially extended from the one-dimensional case. To measure the similarity in law between each pair of data sets we use a random projection strategy. We first project each individual functional data sample associated with each data set onto a fixed number of randomly generated directions. Then, for each pair of data sets and each random direction, we measure the distance between the one-dimensional empirical distributions of the real-valued projections. Our method uses these distances to construct an estimate of the distance between the probabilistic laws generating the functional samples. This approach is inspired by the results of Cuesta-Albertos, Fraiman and Ransford <cit.>, where they show conditions under which the distributions of the projections of the samples of two data sets will be equal if and only if the two laws are the same. We also provide finite bounds for the probability of encountering a large error when approximating the distance between laws by the estimate obtained with a finite sample.
A non-parametric bound for multivariate data has been presented in Naaman <cit.>; however, that bound increases linearly with the dimensionality of the data, making it undefined in the functional case. The bound presented here holds for data in Hilbert spaces, i.e. it does not depend on the data dimensionality. The bound also does not assume that the data are generated by any parametric model class. Without loss of generality, we assume in the following that the data are functions living in L^2([0,T]), for some fixed real T > 0. We also present an adapted version of the bound under H_0 designed to be sharper, and show the clustering procedure's good performance with functional data simulated with models from two different parametric model classes.

Informally, the algorithm works as follows. Let 𝒰 be a finite set such that S_U = |𝒰| ≥ 2, and let 𝒴_N^u be a family of sets of N functional data samples indexed by u ∈ 𝒰, that is, 𝒴_N^u = (Y^u_1, …, Y^u_N) with Y^u_n ∈ L^2([0,T]). For simplicity, we assume that all data sets have exactly N samples, but we note that all steps in the procedure can naturally be adapted to sets with different sample sizes. For each u ∈ 𝒰, we denote by Q^u the probabilistic law that generates the samples of the data set 𝒴_N^u. Let also (B_1, …, B_M) be M random directions in L^2([0,T]) generated in a suitable way. Using a random projection strategy, we construct an estimated distance D̂_{N,M}(u,v) between the empirical distributions of the functional data sets 𝒴^u_N and 𝒴^v_N. We use this estimated distance as a dissimilarity measure in a hierarchical clustering procedure to partition the functional data sets indexed by u ∈ 𝒰 by their law; that is, our goal is to assign two data sets 𝒴_N^u and 𝒴_N^v to the same cluster if and only if Q^u = Q^v.

§ CLUSTERING PROCEDURE

We propose to use the following hierarchical clustering procedure to retrieve a partition of the functional data sets. Let 𝒫 be the set of all partitions of the set 𝒰. Given P ∈ 𝒫, let D be some dissimilarity between elements of 𝒰. We also define a dissimilarity for any pair of clusters, which by abuse of notation we also denote by D. This strategy of extending the dissimilarity to cluster pairs is usually called the linkage of the hierarchical clustering procedure. Here we propose to use the complete linkage; that is, for any pair of clusters C and C', we define the linkage as
D(C,C') = sup_{u ∈ C, v ∈ C'} {D(u,v)}.
Denote by C_1(P) and C_2(P) two elements of P satisfying C_1(P) ≠ C_2(P) and
D(C_1(P), C_2(P)) ≤ D(C, C') for all C, C' ∈ P with C ≠ C'.
In the following, we assume that all pairs have different dissimilarity values, so that C_1(P) and C_2(P) are uniquely defined. This will be the case with probability one for our chosen dissimilarity, which will be introduced later and which takes real values. Let r_D(P) be the dissimilarity between C_1(P) and C_2(P), i.e.,
r_D(P) = inf{ D(C, C') : (C, C') ∈ P × P, C ∩ C' = ∅ }.
Let us denote C_{1,2}(P) = C_1(P) ∪ C_2(P). Consider the following family of recursive partitions. Let P_1(D) = {{u} : u ∈ 𝒰} be the partition of singletons, and for k = 2, …, S_U let
P_k(D) = {C ∈ P_{k-1} : C ≠ C_1(P_{k-1}), C ≠ C_2(P_{k-1})} ∪ {C_{1,2}(P_{k-1})}.
We call the pair consisting of the family P_{1:S_U}(D) = (P_1(D), …, P_{S_U}(D)) and the associated function r_D a dendrogram model. Dendrogram models can be represented graphically as rooted and labelled trees. An example of a graphical representation of a dendrogram model is shown in Fig. <ref>.
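For readers who want to experiment, the recursion above is exactly what standard complete-linkage routines implement. The following is a minimal sketch of ours, using scipy as shorthand rather than the authors' released code:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dendrogram_model(dist_matrix):
    """Complete-linkage agglomeration over the S_U data sets.
    `dist_matrix` is the symmetric S_U x S_U matrix of pairwise
    dissimilarities D(u, v); the returned linkage encodes the nested
    partitions P_1, ..., P_{S_U} and the merge heights r_D."""
    return linkage(squareform(dist_matrix, checks=False), method="complete")

def cut_partition(Z, threshold):
    """Select the partition whose merge heights stay below `threshold`
    (the data-driven cut discussed later in the paper)."""
    return fcluster(Z, t=threshold, criterion="distance")
```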
Our goal is to obtain a plug-in dissimilarity measure D for data sets indexed by the elements of 𝒰. In addition, we want to choose a measure that gives strong and consistent estimates of the similarity in law between the sets. In the following, we present a measure with such properties.

§ PRELIMINARY DEFINITIONS

We propose a dissimilarity function for functional data sets based on random projections which estimates the similarity between the laws associated with these data sets. Let us start by giving the definitions needed to state Proposition <ref>. Let 𝒰 be a finite set with cardinality denoted by S_𝒰. For some fixed real value T > 0 and probability space (Ω, 𝒜, ℙ), let F = L^2([0, T]) and let ℱ be the Borel σ-algebra on F. For all u ∈ 𝒰, let Q^u be a probability measure on L^2([0,T]) and let Y^u : Ω → L^2([0,T]) be a random element of L^2([0,T]) generated with distribution Q^u. Moreover, let h be an element of the dual space of F, which is L^2([0,T]). Denote by Q^u_h the univariate distribution of the random variable ⟨Y^u, h⟩ and by f^{u,h} its cumulative distribution function.

We say that a probability measure Q on (F, ℱ) satisfies the Carleman condition if, for all i > 0, its absolute moments m_i = ∫ ‖h‖^i Q(dh) are finite and
∑_{j ≥ 1} m_j^{-1/j} = +∞.

For any h ∈ F, let Q_h((-∞,t]) = ℙ(x ∈ F : ⟨x, h⟩ ≤ t). We say that a probability measure Q on (F, ℱ) is a continuous law if for any h ∈ F and t ∈ ℝ, Q_h((-∞,t]) is continuous. We denote by 𝒬 the set of all continuous probability measures on (F, ℱ) which satisfy the Carleman condition. Given (u,v) ∈ 𝒰^2, we define a distance D between Q^u ∈ 𝒬 and Q^v ∈ 𝒬.

For all (u,v) ∈ 𝒰^2, let Q^u and Q^v be measures over L^2([0,T]) satisfying the regularity conditions given by Definitions <ref> and <ref>. Let also W be an independent Gaussian measure on L^2([0,T]). Then
D(u,v) = ∫ ‖f^{u,h} - f^{v,h}‖_∞ dW(h)
gives a metric over 𝒬.

By construction, D(u,v) is symmetric and inherits the triangle inequality from the supremum norm ‖·‖_∞. Moreover, as a consequence of Theorem 4.1 in Cuesta-Albertos, Fraiman and Ransford <cit.>,
D(u,v) = 0 if and only if Q^u = Q^v.

For each u ∈ 𝒰 and n ∈ {1, …, N}, let Y^u_n ∈ F be a function generated by the law Q^u ∈ 𝒬. For each u ∈ 𝒰 we call 𝒴^u_N = {Y^u_n : n ∈ {1, …, N}} the sample set u. For every n = 1, …, N and every u ∈ 𝒰 we call the inner product
R^{u,h}_n = ∫_0^T h(t) Y^u_n(t) dt
the projection of Y^u_n onto the direction h. For each u ∈ 𝒰, the projection of the data set u in the direction h is naturally defined as 𝒴_N^{u,h} = {R^{u,h}_n : Y^u_n ∈ 𝒴^u_N}. The empirical cumulative distribution function of 𝒴^{u,h}_N is given by
f̂^{u,h}_N(t) = (1/N) ∑_{R^{u,h}_n ∈ 𝒴^{u,h}_N} 1_{R^{u,h}_n ≤ t},  t ∈ ℝ,
where 1 denotes the indicator function. Given u, v ∈ 𝒰, consider the data sets 𝒴^u_N and 𝒴^v_N and denote by D^{u,v}_N(h) the L_∞ distance between the empirical cumulative distribution functions of 𝒴^{u,h}_N and 𝒴^{v,h}_N, i.e.,
D^{u,v}_N(h) = sup_{t ∈ ℝ} {|f̂^{u,h}_N(t) - f̂^{v,h}_N(t)|}.
Cuesta-Albertos, Fraiman and Ransford <cit.> introduce a goodness-of-fit test for functional data based on equation (<ref>), which, although it has discriminating power from an asymptotic theoretical perspective, can in practice be unstable for finite sample sizes. Hence, a natural idea is to propose a statistic that takes many directions into account.

§ BOUNDS ON THE ERROR RATES

Let B = (B_1, …, B_M) be M independent realisations of elements in F generated with a Gaussian measure. Following the reasoning of Duarte et al. <cit.>, we take this measure to be the Brownian bridge.
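Before turning to the bounds, here is a minimal sketch (ours) of the projection machinery: Brownian-bridge directions simulated on a grid, projections computed as numerical inner products, and the directional Kolmogorov-Smirnov distances that are averaged into the empirical distance defined next.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def brownian_bridges(M, grid):
    """M Brownian-bridge directions on `grid`, via B(t) = W(t) - (t/T) W(T)."""
    dt = np.diff(grid, prepend=grid[0])
    W = np.cumsum(rng.normal(size=(M, len(grid))) * np.sqrt(dt), axis=1)
    return W - np.outer(W[:, -1], grid / grid[-1])

def empirical_distance(Yu, Yv, M=500, grid=None):
    """hat{D}_{N,M}(u, v): the average over M random directions of the
    two-sample KS distance between projected samples.  Yu and Yv are
    (N, len(grid)) arrays of functions on a uniform grid over [0, T]."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, Yu.shape[1])
    B = brownian_bridges(M, grid)
    dt = grid[1] - grid[0]                    # uniform grid assumed
    Ru, Rv = Yu @ B.T * dt, Yv @ B.T * dt     # R_n^{u,h} = <Y_n^u, B_m>
    return np.mean([ks_2samp(Ru[:, m], Rv[:, m]).statistic
                    for m in range(M)])
```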
Let us define the empirical distance between the sample sets u and v as
D̂_{N,M}(u,v) = (1/M) ∑_{m=1}^M D^{u,v}_N(B_m).
Let us present some useful notation. For each (u,v) ∈ 𝒰^2, consider the L_∞ distance between the projections in the direction B,
D^{u,v}(B) := ‖f^{u,B} - f^{v,B}‖_∞.
Denote also Δ_N^{u,v}(B) := Δ_N^u(B) + Δ_N^v(B), where Δ_N^u(B) = ‖f̂_N^{u,B} - f^{u,B}‖_∞ and Δ_N^v(B) = ‖f̂_N^{v,B} - f^{v,B}‖_∞. While we do not have direct access to the distance D(u,v), the distance D̂_{N,M}(u,v) gives us a finite-sample approximation. Theorem <ref> shows that the probability of a large error between the estimate and the true distance decays exponentially.

For all (u,v) ∈ 𝒰^2, let 𝒴^u_N and 𝒴^v_N be sample sets which satisfy the regularity conditions <ref> and <ref>. Let D̂_{N,M}(u,v) be defined as in equation (<ref>). Then, for any γ ∈ [0,1], there exists C > 0 such that
ℙ(|D̂_{N,M}(u,v) - D(u,v)| ≥ γ) ≤ 2e^{-Mγ^2/2} + 2e^{-Mγ^2/32} + 2Ce^{-Nγ^2/16}.

Observe that
ℙ(|D̂_{N,M}(u,v) - D(u,v)| ≥ γ) ≤ ℙ(D̂_{N,M}(u,v) - D(u,v) ≥ γ) + ℙ(D(u,v) - D̂_{N,M}(u,v) ≥ γ).
Let us start by bounding the first term, i.e.,
ℙ((1/M) ∑_{m=1}^M D^{u,v}_N(B_m) - D(u,v) ≥ γ).
By the triangle inequality, we obtain that D^{u,v}_N(B) ≤ D^{u,v}(B) + Δ^{u,v}_N(B). Then,
ℙ((1/M) ∑_{m=1}^M D^{u,v}_N(B_m) - D(u,v) ≥ γ)
≤ ℙ((1/M) ∑_{m=1}^M D^{u,v}(B_m) - D(u,v) + (1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) ≥ γ)
≤ ℙ((1/M) ∑_{m=1}^M D^{u,v}(B_m) - D(u,v) ≥ γ/2) + ℙ((1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) ≥ γ/2).
We have D(u,v) = ∫ D^{u,v}(h) dW'(h) = 𝔼_B[D^{u,v}(B)], where W' is the Brownian bridge measure. Therefore, since the D^{u,v}(B_m) are i.i.d. and take values in [0,1], the first term in (<ref>) is bounded by Hoeffding's inequality <cit.>:
ℙ((1/M) ∑_{m=1}^M D^{u,v}(B_m) - D(u,v) ≥ γ/2) ≤ e^{-Mγ^2/2}.
The second term of (<ref>) can be bounded as follows:
ℙ((1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) ≥ γ/2) = ℙ((1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) - 𝔼_B[Δ^{u,v}_N(B)] + 𝔼_B[Δ^{u,v}_N(B)] ≥ γ/2)
≤ ℙ((1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) - 𝔼_B[Δ^{u,v}_N(B)] ≥ γ/4) + ℙ(𝔼_B[Δ^{u,v}_N(B)] ≥ γ/4).
Inequality (<ref>) holds by the union bound. As in equation (<ref>), the first term is again bounded by Hoeffding's inequality:
ℙ((1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) - 𝔼_B[Δ^{u,v}_N(B)] ≥ γ/4) ≤ e^{-Mγ^2/32}.
Theorem 3.1 of Cuesta-Albertos, Fraiman and Ransford <cit.> gives us that the distribution of Δ^{u,v}_N(h) is independent of h for any h ∈ F∖{0}. Therefore, there exists C > 0 such that
ℙ(𝔼_B[Δ^{u,v}_N(B)] ≥ γ/4) = ℙ(Δ^{u,v}_N(B) ≥ γ/4) ≤ Ce^{-Nγ^2/16},
with probability 1, by the two-sample DKW inequality given in Theorem 1 of Wei and Dudley <cit.>. To finish, note that, by the triangle inequality D^{u,v}(B) ≤ D^{u,v}_N(B) + Δ^{u,v}_N(B), we also have
ℙ(D(u,v) - (1/M) ∑_{m=1}^M D^{u,v}_N(B_m) ≥ γ)
≤ ℙ(D(u,v) - (1/M) ∑_{m=1}^M D^{u,v}(B_m) + (1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) ≥ γ)
≤ ℙ(D(u,v) - (1/M) ∑_{m=1}^M D^{u,v}(B_m) ≥ γ/2) + ℙ((1/M) ∑_{m=1}^M Δ^{u,v}_N(B_m) ≥ γ/2).
As in the previous case, the first term in equation (<ref>) is bounded by Hoeffding's inequality, and the second is bounded following exactly the same reasoning as for the second term of (<ref>).

The constant C is discussed in Wei and Dudley <cit.>. They show that the bound holds for any C ≥ e, and also for some C_N which approaches 2 as N increases. We refer to that article for the choice of C_N, so as to obtain a more powerful statistic.

§ CONSISTENCY OF THE PARTITION SELECTION FROM A DENDROGRAM MODEL

The results of Section <ref> show that D̂_{M,N} is a consistent estimator of the distance in law between the sets given by equation (<ref>), since the error probability is a sum of exponentially decreasing terms.
Therefore, we propose to use D̂_{M,N} as the plug-in distance for the procedure described in Section <ref>, which returns a random dendrogram model (P_{1:S_U}(D̂_{M,N}), r_{D̂_{M,N}}) containing a family of nested partitions. Given a real-valued threshold, a partition can be selected from the dendrogram. This can be shown graphically, as in Fig. <ref>. For our classification goal, we need a heuristic to choose the cut threshold. We start by noting that the distance measure D̂_{M,N} is closely related to the statistic proposed in Cuesta-Albertos, Fraiman and Ransford <cit.> to perform goodness-of-fit tests. Cuesta-Albertos, Fraiman and Ransford <cit.> introduced a test based on one-dimensional random projections of 𝒴_N^u and 𝒴_N^v to determine whether both sets were generated by the same law, i.e., testing
H_0: Q^u = Q^v versus H_A: Q^u ≠ Q^v.
The test statistic is
KS(f̂_N^{u,B}, f̂_N^{v,B}) = √(N/2) D_N^{u,v}(B).
The null hypothesis is rejected at level α when KS(f̂_N^{u,B}, f̂_N^{v,B}) > η_α; the critical value is obtained from the asymptotic Kolmogorov distribution, Kolmogorov <cit.>. The results presented in Section <ref> allow us to perform a test with the finite-sample exponential bound on the statistic D̂_{M,N}. In summary, we can select the partition from the dendrogram model using a criterion based on the behavior of the goodness-of-fit test statistic under the null hypothesis. The null hypothesis is rejected at level α whenever D̂_{M,N}(u,v) > γ_α, where γ_α is a value which ensures the level of the test.

Informally, this choice of threshold gives us a consistent clustering procedure for the following reasons. Let 𝒴_N^{u_1}, …, 𝒴_N^{u_{S_U}} be random data sets generated according to u_1, …, u_{S_U} ∈ 𝒰. Let [u^*] be the equivalence class given by
[u^*] = {u ∈ 𝒰 | Q^u = Q^{u^*}}.
Let P^* be the partition given by the quotient set, in which all the indexes u of data sets 𝒴_N^u belonging to the same group of the partition have been generated by the same distribution. Let 𝒟 ∈ ℝ^{S_U × S_U} be the distance matrix whose entry (u,v) is the distance between Q^u and Q^v given by (<ref>). Then D(u,v) > 0 if and only if Q^u ≠ Q^v, with D(u,v) = 0 otherwise. Without loss of generality, consider a permutation of the rows and columns of 𝒟 such that we obtain a block matrix in which the diagonal blocks are square zero matrices, each defining a cluster, while outside the diagonal zero blocks the entries are positive distances. So from 𝒟 we get the partition P^*. Let 𝒟_N ∈ ℝ^{S_U × S_U} be the empirical counterpart of 𝒟, where each entry D̂_{M_N,N}(u,v) is the empirical distance between 𝒴_N^u and 𝒴_N^v as defined in (<ref>). This matrix is an empirical approximation of 𝒟 and will therefore give us the correct partition for sufficiently large N.

To show a consistency result for the clustering procedure, let us give some definitions. Let α = α_N be such that α_N → 0 and log(2/α_N)/N → 0 as N → +∞. Let P̂_k^* be the partition obtained from 𝒟_N using the threshold γ_{α_N},
P̂_k^* = {P_K(D̂_{M,N}) : K = max_k {r_{D̂}(P_k(D̂_{M,N})) ≤ γ_{α_N}}}.
We also take M as a function of N and denote it by M_N. Let M_N be such that log(2/α_N)/M_N → 0 as N → ∞.

Under the same setting as Theorem <ref>, for every u ∈ 𝒰, let 𝒴^u be a set of functional data taking values in F, with associated law Q^u ∈ 𝒬 satisfying the regularity conditions given by Definitions <ref> and <ref>. Then
lim_{N → ∞} ℙ(P̂_k^* ≠ P^*) = 0.

Let U = {(u, v) ∈ 𝒰^2 : Q^u ≠ Q^v} and U' = {(u, v) ∈ 𝒰^2 : Q^u = Q^v}. Let also
d = inf{D(u,v) : (u,v) ∈ U}.
Then, for any β ∈ (0, d) we define
𝒰_N = ⋃_{(u,v) ∈ U} {D̂_{M_N,N}(u, v) ≤ β}, and 𝒰'_N = ⋃_{(u,v) ∈ U'} {D̂_{M_N,N}(u, v) > β}.
Then
ℙ(P̂_k^* ≠ P^*) ≤ ℙ(𝒰_N ∪ 𝒰'_N) ≤ ℙ(𝒰_N) + ℙ(𝒰'_N).
Hence, it is enough to show that there exist N_0(ζ,β) such that ℙ(𝒰_N) < ζ/2 and N_1(ζ,β) such that ℙ(𝒰'_N) < ζ/2, for every N > N_0(ζ,β) and N > N_1(ζ,β). Theorem <ref> gives an exponential bound on the gap between the distances (<ref>) and (<ref>); then
ℙ(D̂_{M_N,N}(u,v) > γ_{α_N}) ≤ α_N, if Q^u = Q^v.
Moreover, we get that γ_{α_N} → 0 whenever
max{12√(2log(2/α_N)/M_N), 12√(log(2C/α_N)/N)} → 0, as α_N → 0.
Note this convergence is guaranteed by the definitions of α_N and M_N. Then, whenever Q^u ≠ Q^v, for N large enough there exists d̃ > 0 such that
ℙ(D̂_{M_N,N}(u,v) > d̃ + γ_{α_N}) ≥ 1 - α_N.
Then, for β < d̃ + γ_{α_N},
ℙ(𝒰_N) ≤ ∑_{(u,v) ∈ U} ℙ(D̂_{M_N,N}(u,v) ≤ β) ≤ (S_U(S_U-1)/2) α_N.
Therefore there exists N_0(ζ, β) such that ℙ(𝒰_N) < ζ/2 for any ζ. Also, for N large enough we have β > γ_{α_N}, and therefore
ℙ(𝒰'_N) ≤ ∑_{(u,v) ∈ U'} ℙ(D̂_{M_N,N}(u,v) > β) ≤ (S_U(S_U-1)/2) α_N.
Then there exists N_1(ζ, β) such that ℙ(𝒰'_N) < ζ/2 for any ζ. Then, for N large enough, the clustering procedure with k clusters will retrieve a partition coincident with P^* with arbitrarily high probability.

§ AN EMPIRICAL BERNSTEIN BOUND FOR THRESHOLD SELECTION

The proof of Theorem <ref> uses Hoeffding's inequality and relies on the fact that the random variables D_N^{u,v}(B) are bounded in [0,1]. However, if the variance of D_N^{u,v}(B) is small compared to the range of the bound, a sharper bound can be obtained using a Bernstein-type inequality. This strategy presents a challenge, since the variance of D_N^{u,v}(B) is unknown. To overcome this problem, we propose to use an empirical Bernstein inequality, introduced in Maurer and Pontil <cit.>, which uses the empirical variance of D_N^{u,v}(B) instead of the theoretical one. Using this strategy, let us show that under H_0 we have an alternative bound for the estimation error. First, let us define the following ancillary variables. Let V̂_B[D̄^{u,v}_N] be the empirical variance in B of D^{u,v}_N(B), that is,
V̂_B[D̄^{u,v}_N] = (1/(M-1)) ∑_{m=1}^M [D^{u,v}_N(B_m) - D̄^{u,v}_N]^2,
where D̄^{u,v}_N = (1/M) ∑_{m=1}^M D^{u,v}_N(B_m). Let also, for any δ ∈ (0,1),
Γ^{u,v}(δ) = √(2 V̂_B[D̄^{u,v}_N] log(2/δ)/M), and ϵ(δ) = 7log(2/δ)/(3(M-1)).

Let 𝒴^u_N and 𝒴^v_N be two sample sets which satisfy the regularity conditions <ref> and <ref>, and let Q^u = Q^v. Then for any γ ∈ (0,1) and δ such that ϵ(δ) < γ we have
ℙ(|D̂_{M,N}(u,v) - D(u,v)| ≥ Γ^{u,v}(δ) + γ) ≤ C exp(-N(γ-ϵ(δ))^2) + δ,
where C is the same constant discussed in Remark <ref>.

Under H_0 we have D(u,v) = 0. Therefore the probability simplifies to
ℙ((1/M) ∑_{m=1}^M D^{u,v}_N(B_m) ≥ Γ^{u,v}(δ) + γ).
Then we have
ℙ((1/M) ∑_{m=1}^M D^{u,v}_N(B_m) + 𝔼_B[D^{u,v}_N(B)] - 𝔼_B[D^{u,v}_N(B)] ≥ Γ^{u,v}(δ) + γ - ϵ(δ) + ϵ(δ))
≤ ℙ((1/M) ∑_{m=1}^M D^{u,v}_N(B_m) - 𝔼_B[D^{u,v}_N(B)] ≥ Γ^{u,v}(δ) + ϵ(δ)) + ℙ(𝔼_B[D^{u,v}_N(B)] ≥ γ - ϵ(δ)).
Now note that
ℙ(𝔼_B[D^{u,v}_N(B)] ≥ γ - ϵ(δ)) ≤ ℙ(𝔼_B[Δ^{u,v}_N(B)] ≥ γ - ϵ(δ)) ≤ Ce^{-N(γ-ϵ(δ))^2}
by the same arguments used in the proof of Theorem <ref>. The first term of (<ref>) is bounded by δ following the empirical Bernstein inequality of Maurer and Pontil <cit.>, which achieves the upper bound and finishes the proof.

Since we have access to Γ^{u,v}(δ) from the data for all (𝒴^u_N, 𝒴^v_N), this bound gives us a heuristic for choosing a partition from the dendrogram model. We control the number of random directions and suggest choosing them such that M > N. This results in the bound obtained in Theorem <ref> being dominated by its first term, giving us a bound close to a DKW-type bound, Massart <cit.>.
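The quantities entering this empirical Bernstein bound are directly computable from the directional KS distances. A small sketch of ours:

```python
import numpy as np

def bernstein_terms(ks_dists, delta):
    """Gamma^{u,v}(delta) and eps(delta) from the displays above, given the
    M directional distances D_N^{u,v}(B_m) for one pair (u, v)."""
    M = len(ks_dists)
    var_hat = np.var(ks_dists, ddof=1)        # hat{V}_B[bar{D}^{u,v}_N]
    gamma = np.sqrt(2.0 * var_hat * np.log(2.0 / delta) / M)
    eps = 7.0 * np.log(2.0 / delta) / (3.0 * (M - 1))
    return gamma, eps
```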
The random variables V̂_B[D̄^{u,v}_N] do not necessarily take the same value for all pairs (u,v). To deal with this, we select our threshold using the worst case among all comparisons; that is, we take the maximum variance among all pairs (u,v). Let
V^* = max_{(u,v) ∈ 𝒰^2} {V̂_B[D̄^{u,v}_N]},
and replace Γ^{u,v}(δ) by Γ^*(δ) = √(2V^* log(2/δ)/M). The following threshold controls D̂_{M,N}(u,v) for all pairs (u,v) ∈ 𝒰^2:
γ^*_{α_N} = inf_{δ ∈ (0,α_N)} {Γ^*(δ) + √(log(C/(α_N-δ))/N) + ϵ(δ)}.
This value can be approximated numerically.

§ PSEUDO-CODE AND SIMULATIONS

In this section, we show pseudo-code for the clustering algorithm and also give some considerations on the numerical aspects of the method. All code used can be found at <https://github.com/fanajman/Clustering-Sets-of-Functional-Data-by-Law>. In the following we present a simulation study to show the performance of the clustering procedure. For this purpose, we propose two models to generate functional data sets. We call the first model a θ-scaled Brownian Bridge (SBB). Each sample was generated independently as
Y^u_n(t) = (W(t) - (t/T)W(T)) θ_u,
where W is the Wiener process and θ_u ∈ ℕ is the fixed parameter defining the θ-scale of the Brownian bridge in this simulation. We call the second model Autoregressive (AR). Each sample was generated independently as
Y^u_n(t) = Y^u_n(t-1) θ'_u + ξ_t,
where Y^u_n(0) = ξ_0, the ξ_t are independent random variables following a standard normal distribution, and θ'_u ∈ (0,1) is the model parameter. In both models, the functions were generated on an equispaced grid of 80 points.

Seven functional data sets were generated for each model; these data sets followed three different laws, which means that there are three clusters. For SBB, the parameters θ chosen for the seven data sets are (1,1,2,2,2,4,4), while for AR the parameters θ' chosen for the seven models are (0.99, 0.99, 0.66, 0.66, 0.66, 0.33, 0.33). Then, in both cases, there are two clusters consisting of two sets of functional data each, while the remaining one has three sets of functions. To initialise the clustering procedure we need to set two parameters, namely α_N = √(1/N) and M_N = σN. For each combination of sample size N ∈ (40, 60, …, 160), σ ∈ (10,30,50) and model, we ran the procedure and retrieved a partition P̂_k^* for 100 replicates. Figures <ref> and <ref> show that the performance of the clustering procedure is very satisfactory. The partition retrieved from the SBB model data was correct in more than 90% of the replicates for N ≥ 60 and every value of σ. The case of the AR model is more challenging, achieving the correct partition in more than 85% of the replicates for N ≥ 80. Interestingly, the best result for the AR model is obtained for σ = 10. However, most of the results obtained for the different values of σ are similar for both models, indicating that increasing the value of M_N above 10N does not bring clear benefits.

Finally, to better understand the procedure, we characterize the errors made in the cases where the partition obtained does not coincide with P^*. This procedure is based on a goodness-of-fit test, and therefore there are two possible errors: on the one hand, the algorithm could assign to the same cluster a pair of data sets generated with different probability distributions; on the other hand, the algorithm could assign to different clusters two sets generated under the same probability law. We call the first a type 1 error and the second a type 2 error. Fig. <ref> (resp.
<ref>) shows the results for the type 1 (resp. type 2) errors, for both models and σ = 10. As expected, the incidence of type 2 errors decreases with increasing sample size. For sample sizes greater than 100, there are no type 2 errors for either model. For type 1 errors, a small decay pattern is observed, especially with the SBB model. We can also observe that, while we obtain a small percentage of incorrect partitions for some combinations of parameters, the final partitions correctly assign most data pairs even for small values of N.

To conclude, the clustering procedure was able to retrieve the correct partition from the set of functional data sets with realistic sample sizes. The high dimensionality of the data did not pose a significant challenge to the procedure, in line with what was expected from the theoretical bounds presented in Sections <ref> and <ref>. The lack of assumptions on the law generating the data also makes this method useful for exploratory analysis, being able to correctly identify the clustering structure induced by the distributions that generated the data. The simplicity of the procedure and the heuristics of the partition selection method also make this procedure easy to use, minimizing the need for parameter tuning or fitting. This helps to avoid user-induced errors in the analysis.

§ ACKNOWLEDGMENTS
This work is part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant #2013/07699-0, São Paulo Research Foundation (FAPESP)). This work is supported by CAPES (88882.377124/2019-01) and FAPESP (2022/00784-0) grants. A.G. and C.D.V. were partially supported by CNPq fellowships (grants 314836/2021-7 and 310397/2021-9). This article is also supported by FAPERJ (#CNE 202.785/2018 and #E-26/010.002418/2019) and FINEP (#18.569-8) grants. The authors acknowledge the hospitality of the Institut Henri Poincaré (LabEx CARMIN ANR-10-LABX-59-01), where part of this work was written.

References

Cuesta-Albertos, J. A., Fraiman, R. and Ransford, T. (2007a). Random projections and goodness-of-fit tests in infinite-dimensional spaces. Bulletin of the Brazilian Mathematical Society, New Series 37, 477-501.
Cuesta-Albertos, J. A., Fraiman, R. and Ransford, T. (2007b). A sharp form of the Cramér-Wold theorem. Journal of Theoretical Probability 20, 201-209.
Duarte, A., Fraiman, R., Galves, A., Ost, G. and Vargas, C. D. (2019). Retrieving a context tree from EEG data. Mathematics 7, 427.
Hoeffding, W. (1994). Probability inequalities for sums of bounded random variables. The Collected Works of Wassily Hoeffding, 409-426.
Kolmogorov, A. (1933). Sulla determinazione empirica di una legge di distribuzione. Giorn. Dell'Inst. Ital. Degli Att. 4, 89-91.
Massart, P. (1990). The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The Annals of Probability, 1269-1283.
Maurer, A. and Pontil, M. (2009). Empirical Bernstein bounds and sample variance penalization. arXiv preprint arXiv:0907.3740.
Mora-López, L. and Mora, J. (2015). An adaptive algorithm for clustering cumulative probability distribution functions using the Kolmogorov-Smirnov two-sample test. Expert Systems with Applications 42, 4016-4021.
Naaman, M. (2021). On the tight constant in the multivariate Dvoretzky-Kiefer-Wolfowitz inequality. Statistics & Probability Letters 173, 109088.
Wei, F. and Dudley, R. M. (2012). Two-sample Dvoretzky-Kiefer-Wolfowitz inequalities. Statistics & Probability Letters 82, 636-644.
Zhu, Y., Deng, Q., Huang, D., Jing, B., Zhang, B. et al. (2021). Clustering based on Kolmogorov-Smirnov statistic with application to bank card transaction data. Journal of the Royal Statistical Society Series C 70, 558-578.
Bilinear forms for the resolvent of sample covariance matrices

Yanqing Yin (School of Mathematics and Statistics, Chongqing University) and Wang Zhou (Department of Statistics and Data Science, National University of Singapore)

In this paper, we introduce a joint central limit theorem (CLT) for specific bilinear forms, encompassing the resolvent of the sample covariance matrix under an elliptical distribution. Through an exhaustive exploration of our theoretical findings, we unveil a phase transition in the limiting parameters of the derived CLT that depends on the moments of the random radius. Subsequently, we employ the established CLT to address two statistical challenges under elliptical distribution. The first task involves deriving the CLT for eigenvector statistics of the sample covariance matrix. The second task aims to ascertain the limiting properties of the spiked sample eigenvalues under a general spiked model. As a byproduct, we discover that the eigenmatrix of the sample covariance matrix under a light-tailed elliptical distribution satisfies the necessary conditions for being asymptotically Haar distributed, thereby extending the Haar conjecture to broader distributions.

MSC: Primary 62H15, 62B20; secondary 62D10. Keywords: high-dimensional covariance matrix; central limit theorem.

§ INTRODUCTION AND MOTIVATION

The covariance matrix assumes a central role in multivariate statistical analysis, and numerous statistical inferences hinge on the spectral properties of the population covariance matrix (PCM). In high-dimensional scenarios, the sample covariance matrix (SCM) ceases to be a reliable estimator for the PCM in a spectral sense. Nevertheless, considerable efforts have been devoted to exploring the relationship between them. This investigation is crucial as it aids in making statistical inferences based on the observed data.

Starting from the pioneering work of <cit.>, which examined the properties of the eigenspace of Wishart matrices, researchers have devoted significant efforts to enhancing the generality of data models to better align with real-world applications. In the past decades, the most extensively investigated data model is undoubtedly the independent component structure (ICS). We refer the readers to <cit.> and references therein. This model, serving as a natural extension of the multivariate Gaussian, assumes that a high-dimensional random population is a linear transformation of a random vector with independent and identically distributed (i.i.d.) entries. However, it has been recognized that this model excludes some significant distribution families, such as the elliptical family. We define a random vector 𝐲 to follow an elliptically correlated structure (ECS) if and only if it has a stochastic representation given by:
𝐲 = ρ Γ 𝐮 + μ.
Here, the matrix Γ ∈ ℝ^{p × p} and the vector μ ∈ ℝ^p are non-random, with rank(Γ) = p. The scalar variable ρ ≥ 0 represents the radius of 𝐲, and 𝐮 ∈ ℝ^p is the random direction. The random direction 𝐮 is independent of ρ and uniformly distributed on the unit sphere S^{p-1} in ℝ^p, denoted by 𝐮 ∼ U(S^{p-1}) in the subsequent discussion.
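A sampler for this representation is straightforward. The following sketch is ours (the function and argument names are not from the paper); it draws uniform directions as normalized Gaussians and lets the user plug in the law of the radius.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ecs(n, Gamma, mu, radius_sampler):
    """Draw n vectors y = rho * Gamma @ u + mu from the stochastic
    representation above; `radius_sampler(n)` returns n i.i.d. radii, and
    the directions u are uniform on the unit sphere S^{p-1}."""
    p = Gamma.shape[0]
    g = rng.normal(size=(n, p))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)   # u ~ U(S^{p-1})
    rho = radius_sampler(n)
    return rho[:, None] * (u @ Gamma.T) + mu

# With rho^2 ~ chi^2(p) this recovers a multivariate Gaussian sample:
p = 5
Y = sample_ecs(1000, np.eye(p), np.zeros(p),
               lambda n: np.sqrt(rng.chisquare(df=p, size=n)))
```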
This data model naturally extends the concept of multivariate normal distributions, offering a distinct orientation compared to the independent component structure (ICS) model. It adeptly captures the dependence structure among multiple variables, providing a versatile framework for analyzing complex multivariate data. Specifically, when ρ^2 ∼ χ^2(p), the resulting distribution aligns with the multivariate Gaussian distribution. Elliptical distributions are well-suited for modeling the inherent structure of real-world datasets across diverse fields such as finance, biology, and engineering. Among the commonly employed distributions within this category are the multivariate Student's t-distribution, the multivariate Cauchy distribution, and the elliptical Gamma distribution.

In recent years, there has been a concerted effort to explore the spectral properties of the sample covariance matrix under elliptical distributions using random matrix theory (RMT). We refer the readers to <cit.> and references therein. Nevertheless, numerous spectral properties crucial for statistical inferences remain unexplored. In light of this, our efforts are directed towards addressing this gap. More specifically, we aim to establish a joint CLT for several bilinear forms involving the resolvent of the sample covariance matrix under ECS. Following the establishment of the newly developed CLT, we employ it to systematically investigate two aspects. Firstly, we delve into the asymptotic properties of the eigenvectors of the sample covariance matrix. Secondly, we turn our attention to the spiked sample eigenvalues within the framework of a comprehensive spiked model. The primary contributions of our work are outlined as follows.

1. We derived a novel joint CLT for bilinear forms that incorporate the resolvent of the sample covariance matrix under the ECS scenario. This accomplishment allows for the exploration of the interdependence structure among the entries of the resolvent matrix and sheds light on the interdependence structure among their linear combinations.

2. Through the application of the newly established CLT, we have derived a corresponding CLT for eigenvector statistics of the sample covariance matrix under the ECS model. Diverging from linear eigenvalue statistics, we have uncovered a noteworthy phase transition regarding the dependence of the CLT on the fourth moment of the radius variable ρ. Specifically, when the asymptotic variance of the squared radius ρ^2 vanishes, the CLT becomes independent of the specific underlying distribution. Conversely, the presence of non-vanishing asymptotic variance leads to a discernible dependence of the CLT on the specific underlying distribution. This phenomenon illuminates the extension of the Haar conjecture regarding sample covariance matrices to encompass all light-tailed elliptical distributions.

3. We establish a connection between bilinear forms and the random matrices that govern the asymptotic behaviors of spiked sample eigenvalues in a general spiked model. Consequently, we achieve the CLT for spiked sample eigenvalues under the ECS model. Again, in the case of a light-tailed elliptical distribution, the behavior of the spiked sample eigenvalues mirrors that observed in the Gaussian case. This observation underscores the universality phenomenon inherent in applying Principal Component Analysis (PCA) across all light-tailed elliptical distributions.

The subsequent sections of this paper are structured as follows.
In the upcoming section, we provide essential background results to facilitate a comprehensive understanding of our findings. Following that, we present our main results. In the third section, we delve into the application of our CLT within the context of the two specific statistical problems mentioned before. The detailed proofs are deferred to the appendix, following a concluding discussion.

Throughout this paper, we represent the spectral norm of a matrix by ‖·‖. The symbol C denotes a constant that may assume different values depending on the context. For any real sequences a_n and b_n, we use a_n = o(b_n) to express the relationship a_n/b_n → 0 as b_n → ∞ and a_n = O(b_n) to indicate that a_n/b_n ≤ C as b_n → ∞. Throughout this paper, e_j represents the j-th column of the identity matrix.

§ PRIOR DEFINITIONS AND MAIN RESULTS

This section aims to present our primary theoretical results. We initiate this discussion by introducing relevant definitions and outlining our model assumptions.

§.§ Definitions and model assumptions

Let A_n represent a p × p symmetric matrix with eigenvalues λ_1 ≤ … ≤ λ_p. The Empirical Spectral Distribution (ESD) of A_n is defined as
F^{A_n}(x) = (1/p) ∑_{j=1}^p I(λ_j ≤ x),
where I(·) is the indicator function. If, in the limit as p and n approach infinity, the limit of F^{A_n}(x) exists, it is called the Limit Spectral Distribution (LSD). A crucial tool in RMT for exploring the spectral properties of A_n is the Stieltjes transform, defined by
m_{F^{A_n}}(z) = ∫ 1/(y-z) dF^{A_n}(y),  z ∈ ℂ^+ ≡ {z = u+iv ∈ ℂ : v > 0}.
It is evident that the Stieltjes transform is linked to the resolvent
Υ_{n,z} ≐ (A_n - zI_p)^{-1},
as expressed by
m_{F^{A_n}}(z) = (1/p) tr Υ_{n,z}.

The primary model of interest in this paper is the sample covariance matrix under elliptical distributions, expressed as
S_{0,n} = (1/n) Γ_n X_n X_n^T Γ_n^T.
We proceed to enumerate the assumptions as follows.
* Assumption (a) [ECS]: The columns of X_n follow an elliptical distribution, represented as x_j = ρ_j u_j, 1 ≤ j ≤ n. Here, the random radii ρ_j are i.i.d. random variables with E(ρ_1^2) = p and Eρ_1^4 = m_p = ν_p + p^2 ≥ p^2, and the directions u_j are i.i.d. ∼ U(S^{p-1});
* Assumption (b) [Bounded norms]: Γ_n is a p × p non-random matrix with a uniformly bounded spectral norm. As n → ∞, the ESD of Σ_n = Γ_n Γ_n^T, denoted by H_{1n}, converges weakly to a proper distribution H_1. Furthermore, the distribution function H_{2n} of m_p^{-1/2}ρ_1^2 converges to H_2, whose support is bounded.
* Assumption (c) [High dimensional framework]: The dimension-to-sample-size ratio c_n = p/n → c ∈ (0,∞) as n → ∞.

The ECS model under Assumption (a) encompasses a broad range of elliptical distributions. It is notable that the variance of ρ^2, denoted as ν_p, can assume any order of p. However, it is conceivable that when ν_p/p^2 → ∞, the spectral properties of S_{0,n} will be primarily determined by the distribution of ρ. Thus, it is reasonable to consider a normalization and shift focus to the normalized sample covariance matrix
S_n = (√(p^2/m_p)/n) Γ_n X_n X_n^T Γ_n^T = √(p^2/m_p) S_{0,n}.
Leveraging this normalization, we can explore a more general model than in <cit.>, where the LSD and the CLT for linear spectral statistics (LSS) are considered, and than in <cit.>, where the distribution of the largest eigenvalue is considered. Evidently, our model encompasses their models as special cases when ν_p = O(p). Assumptions (b) and (c) are commonly employed in RMT.
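In simulations, the normalized matrix S_n can be formed directly from the model above. A small sketch of ours, which replaces the unknown fourth moment m_p by its empirical counterpart, an assumption valid only when the radii are observable, as they are for synthetic data:

```python
import numpy as np

def normalized_scm(X, Gamma, rho):
    """S_n = sqrt(p^2 / m_p) * (1/n) * Gamma X X^T Gamma^T, with
    m_p = E rho^4 estimated from the simulated radii.  X is p x n with
    columns x_j = rho_j * u_j, as in Assumption (a)."""
    p, n = X.shape
    m_p_hat = np.mean(rho ** 4)               # stands in for E rho_1^4
    Y = Gamma @ X
    return np.sqrt(p ** 2 / m_p_hat) * (Y @ Y.T) / n
```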
§.§ First order limit: the LSD

As a foundational step, we first introduce the results related to the LSD of S_n.

Suppose Assumptions (a)-(c) hold. With probability 1, as n → ∞, the ESD of S_n converges weakly to a non-random probability distribution function F^{c,H_1,H_2}. To be specific, we have
I. Trivial scenarios: If H_1 = 1_{[0,∞)} or H_2 = 1_{[0,∞)}, then F^{c,H_1,H_2} = 1_{[0,∞)};
II. Non-trivial scenarios: If H_1 ≠ 1_{[0,∞)} and H_2 ≠ 1_{[0,∞)}, then, for each z ∈ ℂ^+, if
m(z) = -z^{-1}(1-c^{-1}) - z^{-1}c^{-1} ∫ 1/(1+q_1(z)y) dH_2(y),
m(z) = -z^{-1} ∫ 1/(1+q_2(z)x) dH_1(x),
m(z) = -z^{-1} - c^{-1}q_1(z)q_2(z)
is viewed as a system of equations for the complex vector (m(z), q_1(z), q_2(z)), then (<ref>) has a unique solution in the set
U = {(m(z), q_1(z), q_2(z)) : ℑ m(z) > 0, ℑ(zq_1(z)) > 0, ℑ q_2(z) > 0}.
Also, the Stieltjes transform of F^{c,H_1,H_2}, denoted by m_F(z), together with two other functions g_1(z) and g_2(z), all analytic on ℂ^+, is given by this solution.

This theorem represents a minor extension of Theorem 1 in <cit.>, and as such, we omit its proof. In general, the covariance matrix under an elliptical distribution belongs to the class of so-called separable sample covariance matrix models. For light-tailed elliptical distributions where ν_p = O(p), the distribution of ρ^2/p becomes degenerate, leading to H_2 = 1_{[1,∞)}. This degeneracy results in the system of equations reducing to the single Marchenko-Pastur (M-P) equation.

For future reference, we introduce here the companion of S_n, defined as
S̲_n = (√(p^2/m_p)/n) X_n^T Γ_n^T Γ_n X_n.
Note that the spectra of S_n and S̲_n differ only by |p-n| zero eigenvalues. It follows that
F^{S̲_n}(x) = (1-c_n) I_{[0,∞)} + c_n F^{S_n}(x),
from which we get
F̲(x) = (1-c) I_{[0,∞)} + c F(x),
m_{F^{S̲_n}}(z) = -(1-c_n)/z + c_n m_{F^{S_n}}(z),  z ∈ ℂ^+,
and, as n → ∞,
m̲(z) := m_{F̲^{c,H_1,H_2}}(z) = -(1-c)/z + c m(z),  z ∈ ℂ^+.
Therefore, by comparing (<ref>) and (<ref>), we can establish the following relationship:
z g_1(z) = -c ∫ x/(1+g_2(z)x) dH_1(x),  z g_2(z) = -∫ y/(1+g_1(z)y) dH_2(y).
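For discrete H_1 and H_2, the system above can be solved numerically. The following sketch is ours; it iterates the two relations displayed at the end of this subsection, with damping and ℑz > 0 as working assumptions, and the convergence of the iteration is heuristic rather than guaranteed.

```python
import numpy as np

def solve_lsd(z, x1, w1, y2, w2, c, n_iter=2000, damp=0.5):
    """Fixed-point iteration for (m(z), g_1(z), g_2(z)) using
    z g_1 = -c * int x/(1+g_2 x) dH_1 and z g_2 = -int y/(1+g_1 y) dH_2,
    with H_1, H_2 discrete (atoms x1, y2 and weights w1, w2)."""
    g1 = g2 = -1.0 / z
    for _ in range(n_iter):
        g1_new = -(c / z) * np.sum(w1 * x1 / (1 + g2 * x1))
        g2_new = -(1.0 / z) * np.sum(w2 * y2 / (1 + g1 * y2))
        g1 = damp * g1 + (1 - damp) * g1_new
        g2 = damp * g2 + (1 - damp) * g2_new
    m = -(1.0 / z) * np.sum(w1 / (1 + g2 * x1))     # from the second equation
    return m, g1, g2

# Sanity check in the Marchenko-Pastur case H_1 = H_2 = delta_1, c = 1/2:
m, g1, g2 = solve_lsd(1.0 + 0.1j, np.array([1.0]), np.array([1.0]),
                      np.array([1.0]), np.array([1.0]), c=0.5)
print(m.imag / np.pi)    # smoothed LSD density near x = 1
```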
We note that in this paper, our consideration is limited to z values in a proper region denoted as 𝒵, ensuring that both the resolvent and _p+g_2n^0(z)_n^-1 exist and are bounded in spectral norm with high probability for the given scenario.The following theorem establishes the convergence of a single bilinear form ℬ_𝔯Υ_n,z. Under Assumptions (a-c), the following conclusion holds for any z∈𝒵,ℬ_𝔯Υ_n,z+z^-1π_n,2𝔯-1^T _p+g_2n^0(z)_n^-1π_n,2𝔯→0,a.s., 𝔯=1,⋯,r. Herem_F^c_n, H_n1,H_n2(z),g_1n^0(z),g_2n^0(z) is defined in (<ref>) by replacing c,H_1,H_2 with c_n,H_1n,H_2n.We are now ready to establish the convergence of the multivariate processℬ_1Υ_n,z,⋯,ℬ_rΥ_n,z^T.We will present the case r=2 and the general cases are similar and therefore omitted. WriteM_n(z)=[ M_n1(z); M_n2(z) ]=[ √(p)(ℬ_1Υ_n,z+z^-1π_n1^T _p+g_2n^0(z)_n^-1π_n2); √(p)(ℬ_2Υ_n,z+z^-1π_n3^T _p+g_2n^0(z)_n^-1π_n4) ].Under Assumptions (a-c), definer_jk(z_1,z_2)=lim_n→+∞π_nj^T_p+g_2n^0(z_1)_n^-1_n_p+g_2n^0(z_2)_n^-1π_nk, r_jk(z)=lim_n→+∞π_nj^T_p+g_2n^0(z)_n^-2_nπ_nk, j,k ∈{1,2,3,4}.We have the two dimensional process M_n(z) for z∈𝒵 converges weakly to a two dimensional zero-mean Gaussian process M(z) with a covariance functionM_1(z_1),M_2(z_2)= h_1(z_1,z_2)r_14(z_1,z_2)r_23(z_1,z_2)+h_1(z_1,z_2)r_13(z_1,z_2)r_24(z_1,z_2)+h_2(z_1,z_2)r_12(z_1)r_34(z_2). Here h_1(z_1,z_2)= cz_1g_2(z_1)-z_2g_2(z_2)/z_1^2z_2^2g_1(z_1)-g_1(z_2)1-d(z_1,z_2), h_2(z_1,z_2)= cg_2'(z_1)g_2'(z_2)m(z_1)g_2(z_2)-m(z_2)g_2(z_1)/g_2(z_1)g_2(z_2)g_1(z_1)-g_1(z_2), d(z_1,z_2)= 1/z_1z_2z_1g_1(z_1)-z_2g_1(z_2)/g_1(z_1)-g_1(z_2)z_1g_2(z_1)-z_2g_2(z_2)/g_2(z_1)-g_2(z_2). With the aid of the above theorem, one can justify the limiting joint distribution of certain variables related to the resolvent. For instance, by choosingπ_n1=_j,π_n2=_k,π_n3=_l,π_n4=_tand letting z_1→ z_2, we can derive the limiting variances and covariance of the entry lying in the j-th row, k-th column, and the entry lying in the l-th row, t-th column of √(p)Υ_n,z.We would like to delve deeper into the theorem above. It is crucial to note that both the limiting covariance function and the centralizing term rely on the distribution of the radius ρ through g_2n^0(z). This dependence is solely influenced by the properties of m_p^-1/2ρ_1^2, whose distributions remain consistent when ν_p=o(p^2). Consequently, we can deduce that a phase transition will occur as ν_p transitions from o(p^2) to the order of p^2. In other words, for a light-tailed elliptical distribution, the asymptotic properties of the bilinear forms ℬΥ(_n,z) are independent of specific distributions. However, when the fluctuation of ρ^2, denoted by ν_p/p^2, deviates from 0, a critical point is reached, marking a shift in the scenario. At this juncture, the impact of nonlinear dependence, induced by the random radius, becomes pronounced enough to influence the asymptotic properties of ℬΥ(_n,z). § STATISTICAL APPLICATIONS In this dedicated section, we leverage the robust CLT established for bilinear forms and channel its applicability into two distinct directions within the statistical domain. These directions not only broaden the scope of our theoretical framework but also enhance its practical relevance in addressing nuanced challenges encountered in statistical analyses. The first avenue of exploration involves the functional CLT, a pivotal concept in the realm of eigenvector statistics pertaining to sample covariance matrices. 
Our established CLT for bilinear forms provides a solid foundation for delving into the intricacies of eigenvector statistics.Simultaneously, our focus extends to the second direction, which centers around the analysis of spiked eigenvalues and eigenvectors within a spiked model.§.§ Functional CLT for eigenvector statisticsIn this subsection, we embark on the application of the theoretical insights acquired in the preceding section to scrutinize the asymptotic properties of the eigenmatrix of _n. Before delving into the details, we find it necessary to introduce some fundamental definitions and background information that will lay the groundwork for our subsequent analysis. Given π_n, thevector empirical spectral distribution (VESD) function based on eigenvalues and eigenvectors of matrix A_n is defined as F_v,π_n^ A_n(x)=∑_j=1^p|q_j|^2I(λ_j≤ x).Understanding this function is pivotal for unraveling the intricacies of the eigenmatrix and its statistical behavior. Now, when the underlying data originates from a multivariate Gaussian distribution, the sample covariance matrix _n is a Wishart matrix. According to established results like those in <cit.> or Corollary 2.2 of <cit.>, the eigenmatrix of _n follows the Haar distribution. Formally, if we express _n as U_n Λ_nU_n^T, where U_n is the orthogonal matrix containing the eigenvectors and Λ_n is a diagonal matrix of eigenvalues, then U_n conforms to the uniform distribution over the group formed by all orthogonal matrices.Moreover, for any unit vector π_n∈ℝ^p, the random vector q_n= U_nπ_n≐q_1,⋯,q_p^T follows a uniform distribution over the unit sphere. This underscores the uniformity and isotropy of the eigenvectors associated with the Wishart matrix. Furthermore, consider the stochastic process defined asℚ_p(t)=√(p/2)∑_j=1^[pt]|q_j|^2-1/nd=√(p/2)1/| z|^2∑_j=1^[pt]|z_j|^2-| z|^2/p.This process converges in distribution to a Brownian Bridge 𝔹(t) as p→∞, see Page 334 in <cit.>. This convergence provides a bridge to understanding the limiting behavior of the eigenmatrix.For any matrix A_n, we introduce a time transformation denoted as 𝒬_p^ A_n(x)=ℚ_p(F^ A_n(x)). This transformation is applied to the stochastic process ℚ_p(t) using the ESD F^ A_n(x). Subsequently, the transformed process 𝒬_p^_n(x) serves as an approximation to 𝔹(F^c,H_1,H_2(x)). Recalling the definitions of the ESD and the VESD, we can express the transformed process as follows: 𝒬_p^_n(x)=√(p/2)F_v,π_n^_n(x)-F^_n(x).This transformation allows us to reframe the study of ℚ_p(t) into the investigation of the discrepancy between the ESD and VESD. This shift in perspective not only simplifies the analysis but also provides a meaningful connection between the properties of the eigenmatrix and the convergence behavior encapsulated in ℚ_p(t). Through this investigatory approach, notable efforts have been dedicated to exploring the universality of the Haar conjecture in high dimension. A somewhat unexpected revelation, as articulated in Theorem 10.2 within <cit.>, asserts that, under ICS conditions, the eigenmatrix's requisite condition for adhering to the Haar conjecture demands that the underlying distribution exhibits a fourth moment akin to a Gaussian distribution. This phenomenon implies a substantial impact of the fourth moment on the asymptotic structure of the eigenmatrix under ICS. The impact of nonlinear elliptical correlation on the asymptotic structure of the eigenmatrix can be elucidated through the application of the newly established CLT for bilinear forms. 
To illustrate this, consider any function g that is analytic on an open set containing the supports of F_v,π_n^_n(x) and F^_n(x). By the Cauchy integral formula, the following relation holds for large n:∫ g(x) dF_v,π_n^_n(x)-F^_n(x)=-1/2 πi∫_𝒞 g(z)s_F_v,π_n^_n(z)-s_F^_n(z)d z,where 𝒞 is a contour that encompasses the real interval defined by:[alim inf_nλ_min^_nI_(0,1)(c)1-√(c)^2,b lim sup_nλ_max^_n1+√(c)^2].Here, a and b represent the lower and upper bounds of the support of H_2, respectively. Let's define s_c_n,π_n^_n,ν_p(z)=z^-1π_n^T _p+g_2n^0(z)_n^-1π_n,representing the Stieltjes transform of the anisotropic M-P law F_c_n,π_n^_n,ν_p(x). We shall introduce 𝔾_n(x)=√(p)F_v,π_n^_n(x)-F_c_n,π_n^_n,ν_p(x).Consider test functions ζ_1,⋯,ζ_k analytic on an open set containing (<ref>). The functional CLT for eigenvector statistics can then be expressed as follows: Under the assumptions of Theorem <ref>, we have the following results.I: The k dimensional random vectorsΨ_n=ψ_1,n,⋯,ψ_k,n'=∫ζ_1(x)d 𝔾_n(x), ⋯, ∫ζ_k(x)d 𝔾_n(x)'form a tight sequence.II: The random vectors Ψ_n converge weakly to a mean zero Gaussian vector Ψ=ψ_1,⋯,ψ_k'.III: For 1≤ t,s≤ k, ψ_t,ψ_s=-1/2π^2∫_𝒞_1∫_𝒞_2ζ_t(z_1)ζ_s(z_2)ϖ(z_1,z_2)dz_1dz_2,where C_1,C_2 are two non-overlapping contours enclosing the support of F^c,H_1,H_2 andϖ(z_1,z_2)= 2h_1(z_1,z_2)r_11(z_1,z_2)r_11(z_1,z_2)+h_2(z_1,z_2)r_11(z_1)r_11(z_2).This theorem unveils the universality of the Haar conjecture within the realm of elliptical distributions, even in the presence of nonlinear dependencies between variables, as long as ν_p=o(p^2). §.§ Asymptotic distribution of spiked eigenvalue and eigenvectorGaining insights into the characteristics of spiked eigenvalues and their associated eigenvectors within a spiked sample covariance matrix holds paramount significance in a multitude of statistical applications, with a prominent example being Principal Component Analysis (PCA). In PCA, the identification of principal components associated with spiked eigenvalues serves as a pivotal mechanism for dimensionality reduction and feature extraction. This analytical approach proves particularly valuable in scenarios where datasets showcase a dominant signal. By pinpointing the spiked eigenvalues, one can extract crucial information about the intrinsic structure underlying the data. This understanding not only aids in optimizing data representation but also enhances the interpretability and effectiveness of statistical analyses. Since the seminal work by <cit.>, the exploration of this topic in the realm of high-dimensional statistics has garnered significant attention. Numerous authors have delved into the subject, progressively refining and expanding the models to accommodate a broader range of scenarios. The evolution of these models underscores the dynamic nature of statistical research in adapting to the demands of contemporary datasets. For the most recent advancements under the ICS model, we recommend consulting up-to-date references such as <cit.>.To better align the results of the spiked model with real high-dimensional datasets, we aim to explore the asymptotic distribution of sample spiked eigenvalues and eigenvectors under the ECS model by leveraging our result in Theorem <ref>. Consider the general spiked model, as introduced in <cit.>. 
Let _n be decomposed using singular value decomposition as follows:_n=[ Λ_S^1/2 0; 0 Λ_P^1/2 ]^T whereandare orthogonal matrices, Λ_S is a diagonal matrix consisting of the spiked eigenvalues in descending order, and Λ_P is the diagonal matrix of the bounded non-spiked eigenvalues. Let's partitionas =_1,_2, where _1 is a p× K submatrix of . Define _n=√(p^2/m_p)/√(n)_n and_1p=_2Λ_P_2^T=[ 0_S 0; 0 Λ_P ]^T=[ 0_S^1/2 0; 0 Λ_P^1/2 ]^T^2≜^2.Order the eigenvalues of _n as λ_1≥λ_2≥⋯≥λ_p. The sample spiked eigenvalues λ_j (j=1,⋯, K) of _n are determined by the equation involving the determinant:det{Λ_S^-1-_1^T_nλ_j-_n^T_1p_n^-1_n^T_1}=0.Clearly, the columns of _1 are orthogonal to _1p. It is noteworthy to highlight that, under ECS model, the target matrix mentioned above can be simplified by exploiting the property of elliptical distribution. Specifically, we have the option to diagonalize _1p due to the characteristics of elliptical distributions. However, for the sake of maintaining generality and relevance to a broader spectrum of data models, we intentionally refrain from this simplification.In the literature under the ICS model, researchers have investigated the properties of the random matrix_1^T_nλ_j-_n^T_1p_n^-1_n^T_1 directly. For example, in <cit.>, the authors established a general fourth-moment theorem to show that the distribution of this matrix remains the same when the underlying distribution is replaced by another one, provided they share the same fourth moment. In contrast, <cit.> studied the asymptotic distribution of the entries in this matrix directly by applying a martingale decomposition method. However, in this work, we will demonstrate through perturbation arguments that the study of this random matrix can be accomplished through properties of bilinear forms. To see this, let =+ε_p, sois invertible.Define Φ(z,ε)= _1^T^-1_n_n^T-z_p^-1^-1_1+z^-1_1^T^-1_p+g_2n^0(z)^-1^-1_1, Ψ(z,ε)= _1^T^-1^-1_1-_1^T^-1_p+g_2n^0(z)^-1^-1_1.In the context of Theorem <ref>, considering the implications for H_1n in connection to the ESD of _1pand adjusting parameter definitions accordingly, we deduce that √(p)Φ(z,ε)converges weakly to a Gaussian distributionN0,σ^2_1(z,ε), whereσ_1^2(z,ε)= lim_z_1→ z_22h_1(z_1,z_2)r_11^2(z_1,z_2)+h_2(z_1,z_2)r_11(z_1)r_11(z_2) = [2czg_2(z)'g_2'(z)/z^2zm(z)'+cg_2'(z)^2m(z)/g_2(z)'/g_1'(z)]1/1+ε^2g_2(z)^2.Usingthe formulaλ+^-1= λ+^-1,and letting ε→0, we obtain z_1^T_nz_p-_n^T_1p_n^-1_n^T_1= lim_ε→0-z_1^T^-1^-1_1-z^2_1^T^-1_n_n^T-z_p^-1^-1_1= lim_ε→0-zΨ(z,ε)-z^2Φ(z,ε).Hence, define𝒪_K× K(z)=√(p)z_1^T_nz-_n^T_1p_n^-1_n^T_1+g_2n^0(z)_K,and it follows that𝒪_11(z)=√(p)z_1^T_nz-_n^T_1p_n^-1_n^T_1+g_2n^0(z)converges weakly to Gaussian distribution N0,σ^2_11(z) with variance σ_11^2(z)= [2cz^2zg_2(z)'g_2'(z)/zm(z)'+cz^4g_2'(z)^2m(z)/g_2(z)'/g_1'(z)].Applying a similar argument, we have 𝒪_12(z)=√(p)z_1^T_nz-_n^T_1p_n^-1_n^T_2converges weakly to Gaussian distribution N0,σ^2_12(z), whereσ_12(z)= cz^2zg_2(z)'g_2'(z)/zm(z)'.Furthermore, we observe that Cov(𝒪11(z), 𝒪12(z)) converges to 0. By combining the aforementioned arguments, we essentially establish the following lemma. 
Assuming the conditions outlined in Theorem <ref> are satisfied, we can establish the following conclusion: the random matrix 𝒪(z) converges weakly to a zero-mean Gaussian Orthogonal Ensemble (GOE) matrix 𝒪^L(z)=(𝒪^L_𝔦,𝔧(z))_K× K with the covariance profileCov(𝒪^L_𝔦,𝔧(z),𝒪^L_𝔨,𝔩(z))= σ_11^2(z) if 𝔦=𝔧=𝔨=𝔩;  σ_12^2(z) if (𝔦=𝔨 and 𝔧=𝔩) or (𝔦=𝔩 and 𝔧=𝔨); and 0 otherwise, where σ_11^2(z)= 2cz^2(zg_2(z))'g_2'(z)/(zm(z))'+cz^4(g_2'(z))^2(m(z)/g_2(z))'/g_1'(z),  σ_12^2(z)=cz^2(zg_2(z))'g_2'(z)/(zm(z))'.This result provides a clear understanding of the asymptotic behavior of the random matrix 𝒪(z) under the specified conditions, connecting it to a GOE matrix with a well-defined covariance structure. Leveraging the above lemma and employing arguments similar to those in <cit.>, we can derive the almost sure limit and the limiting distribution of the spiked eigenvalues under ECS. More specifically, assuming that the population spiked eigenvalues of Σ_n are α_1>⋯>α_K, we obtain the following Theorem <ref>. The proofs are very similar to theirs; therefore, we omit them to avoid repetition. We remind the reader that, in what follows, g_2n^0(z) is associated with Σ_1p. We also note that the case of multiple spiked eigenvalues can be investigated similarly using Lemma <ref>. Under the assumptions in Theorem <ref>, and further assuming the separation condition min_j≠ k|α_k/α_j-1|>d, we have, for k=1,⋯,K, Δ_k≐(λ_k-𝒢_2n(α_k))/𝒢_2n(α_k)→ 0, a.s., provided 𝒢_2n'(α_k)>0, where 𝒢_2n is the transition function that satisfies g_2n^0(𝒢_2n(z))=-z^-1. Also, denoting θ_k=𝒢_2n(α_k), we have √(n)Δ_k/σ_Δ_k→ N(0,1), where σ_Δ_k^2=2(θ_kg_2(θ_k))'/((θ_km(θ_k))'g_2'(θ_k)θ_k^2)+(m(θ_k)/g_2(θ_k))'/g_1'(θ_k). In the special case where ν_p=o(p^2), a particularly interesting insight emerges from our analysis. Leveraging the relationship m(z)=g_2(z), the transition function 𝒢_2n(·) of the spiked eigenvalues simplifies to a well-established form, precisely given by ψ(z)=z+c z ∫ t/(z-t) dH_1(t), a result that aligns with existing knowledge in the field. Furthermore, under this specific scenario, the variance of the standardized spiked eigenvalue, σ_Δ_k^2, simplifies to 2/(m'(θ_k)θ_k^2), which is consistent with the known result in the Gaussian case. This remarkable finding underscores the robustness of the asymptotic properties of sample spiked eigenvalues across a diverse range of elliptical distributions, provided that ν_p=o(p^2). It highlights a certain universality in the behavior of these eigenvalues, irrespective of the specific characteristics of the elliptical distribution, offering valuable insights into their statistical properties in high-dimensional settings. Moving forward, we examine the spiked sample eigenvector, focusing initially on a simplified case. In this simplified setting, we operate under the assumption of a single population spiked eigenvalue, allowing us to concentrate on the projection of the sample eigenvectors onto the corresponding population eigenvector. It is worth noting that our decision to concentrate on the simplified scenario is motivated by the desire to offer a clear and focused presentation of our main contributions. More precisely, assume that the population spiked eigenvalues of Σ_n are denoted by α_1>⋯>α_K. For the population eigenvector v_k of the k-th spiked eigenvalue α_k, we denote its associated sample version by 𝒱_k. Our interest lies in the inner product ℐ_k= v_k^T𝒱_k of these two vectors.
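To make the remark above concrete: when ν_p=o(p^2), the almost-sure location of the k-th sample spike is ψ(α_k). A small numerical sketch (our own, with H_1 represented by a finite sample of hypothetical atoms) is:

```python
def spike_location(alpha, c, H1_atoms):
    """psi(alpha) = alpha + c * alpha * int t/(alpha - t) dH_1(t),
    valid in the regime nu_p = o(p^2); H1_atoms is an equal-weight
    sample from (or discretization of) H_1."""
    t = np.asarray(H1_atoms, dtype=float)
    return alpha + c * alpha * np.mean(t / (alpha - t))
```

For example, with a bulk spectrum sampled from U(0,1) and c=0.5, `spike_location(8.0, 0.5, rng.uniform(0, 1, 10**5))` approximates the limit of the largest sample eigenvalue for a spike of size 8.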
Assuming the separation condition that min_j≠ k|α_k/α_j-1|>d, according to the Cauchy integral formula, we have the following equality:ℐ_k^2=-1/2πi∮_ζ_k v_k^T Υ_n,z v_k d z,where ζ_k encloses λ_k but excludes the other eigenvalues. Thus, we turn our attention to the study of v_k^T Υ_n,z v_k. Noting thatv_k^T Υ_n,z v_k= 𝐞_k^T(Λ^1/2V^T Y_n Y_n^T V Λ^1/2-z 𝐈)^-1𝐞_k,by the identityB(λ𝐈- A B)^-1=(λ𝐈- B A)^-1 B,we find the right-hand side to be-1/α_k (z/α_k+ z𝐞_k^T Y_n(Y_n^TΣ_1pY_n-z𝐈)^-1Y_n^T𝐞_k)^-1.Now, by applying the residue theorem and combining it with Lemma <ref>, we establish the following result. Under the assumptions in Lemma <ref>, we have ℐ_k^2-𝒢_2n'(α_k)/(𝒢_2n(α_k)/α_k)→ 0, a.s. We observe a notable phenomenon: the asymptotic properties of spiked sample eigenvectors remain consistent with the Gaussian case as long as ν_p=o(p^2). Under this condition, the almost sure limit of ℐ_k^2 is given by(1-c∫ t^2/(α_k-t)^2 dH_1(t))(1+c∫ t/(α_k-t) dH_1(t))^-1,which tends to zero as α_k→ 0 and converges to a positive constant otherwise. This observation aligns with the understanding that, asymptotically, a bias angle will emerge between the true principal component and the estimated one in high dimensions unless the true principal component is divergent. The above focused approach enables us to examine the angles between sample and population eigenvectors, shedding light on the asymptotic properties within various spiked model frameworks. By considering the alignment between these vectors, we gain insights into how the sample and population eigenvectors behave in the presence of a dominant signal, offering a nuanced understanding of their statistical properties. Certainly, exploring the more general case with multiple population spiked eigenvalues and the fluctuation of ℐ_k is an intriguing avenue for extension. This extension may involve more complex but traceable calculations, and we leave it as a potential direction for future research. In light of Lemma <ref> and the discussion above, we posit that the theoretical findings in <cit.>, which investigate the fluctuations of ℐ_k under ICS, apply to a wider spectrum of elliptical distributions by setting the fourth moment to 3, provided that ν_p=o(p^2). We conclude this section by presenting numerical simulations to validate the correctness of our theoretical results.We start with the simulation results for the spiked eigenvalues. Consider two cases: p=50, n=100 and p=200, n=400. Set the corresponding population covariance matrix as Σ=U_0 D U_0^T=8𝐮_0,1𝐮_0,1^T+∑_j=2^p d_j𝐮_0,j𝐮_0,j^T, where D= Diag(8,d_2,⋯,d_p) with the d_j's i.i.d. chosen from U(0,1), and U_0=(𝐮_0,1,⋯,𝐮_0,p) is the eigenmatrix of the Toeplitz matrix T=(a_i,j) where a_i,j=0.9^|i-j| for i,j=1,⋯,p. It can be observed that the population matrix has a spiked population eigenvalue of 8. For each pair of p and n, we draw n i.i.d. samples from an elliptical distribution with the given p-dimensional population covariance matrix Σ. We consider four different types of elliptical distributions, where (a): ν_p=p^2, (b): ν_p=p, (c): ν_p=p^1/2, and (d): ν_p=0. For each sample, we compute the largest sample eigenvalue and repeat this procedure 10,000 times.The following figures depict the agreement between the empirical distribution of the sample spiked eigenvalues and Gaussian distributions. We also label the sample mean and sample variance under each situation.
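The experiment just described can be reproduced with a short script. The following sketch is our own minimal implementation; `rho_sampler` is a hypothetical helper encoding the choice of ν_p (it must return radii with 𝔼(ρ^2)=p), and the same eigendecomposition also yields the eigenvector inner products used for the scatter plots discussed next.

```python
import numpy as np

def toeplitz_eigvectors(p):
    """Eigenmatrix U_0 of the Toeplitz matrix (0.9^{|i-j|})_{i,j}."""
    idx = np.arange(p)
    T = 0.9 ** np.abs(idx[:, None] - idx[None, :])
    return np.linalg.eigh(T)[1]

def spiked_experiment(p, n, rho_sampler, reps, seed=0):
    rng = np.random.default_rng(seed)
    U0 = toeplitz_eigvectors(p)
    d = np.concatenate(([8.0], rng.uniform(0.0, 1.0, p - 1)))
    Gamma = U0 * np.sqrt(d)                    # Sigma = U0 diag(8, d_2..d_p) U0^T
    top_eig, align = np.empty(reps), np.empty(reps)
    for r in range(reps):
        U = rng.standard_normal((p, n))
        U /= np.linalg.norm(U, axis=0)         # directions uniform on S^{p-1}
        rho = rho_sampler(n, rng)              # radii with E[rho^2] = p
        m_p = np.mean(rho**4)                  # empirical proxy for E[rho^4]
        Y = Gamma @ (U * rho) * (p**2 / m_p) ** 0.25 / np.sqrt(n)
        vals, vecs = np.linalg.eigh(Y @ Y.T)   # B_n = Y Y^T
        top_eig[r] = vals[-1]
        align[r] = abs(U0[:, 0] @ vecs[:, -1]) # |<u_{0,1}, top sample eigenvector>|
    return top_eig, align
```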
Two key observations can be drawn from Fig. 1 and Fig. 2. Firstly, both figures demonstrate the good normality of the empirical spiked sample eigenvalues in all cases. This suggests that the sample spiked eigenvalues exhibit properties akin to a normal distribution in various scenarios. Secondly, a comparison between the two figures indicates that as p→∞, the asymptotic properties of the sample spiked eigenvalues remain consistent as long as ν_p=o(p^2), aligning with our theoretical results. This transition in behavior is a noteworthy phenomenon in high-dimensional statistics.In the subsequent analysis, we delve into simulations centered on spiked eigenvectors, maintaining the same population covariance matrix settings as in the previous investigations. We specifically explore two scenarios with distinct dimension-to-sample-size ratios: c_n,1 = p/n = 0.5 and c_n,2 = p/n = 2. Our exploration spans the range of p from 64 to 512 in increments of 32. For each combination of p and n, we draw n samples from the four distinct elliptical distributions considered previously. Subsequently, we compute the inner product of the population spiked eigenvector 𝐮_0,1 with its sample counterpart. This process is iterated 5000 times, and we compute the average under each distribution.The resulting averages are then examined in terms of scatter plots against the varying values of p for the different dimension-to-sample-size ratios. Specifically, the scatter plots are presented in Fig. 3 and Fig. 4, providing a visual representation of how the average inner product behaves as the dimension p varies.The figures reveal that, as the dimensionality p increases, the asymptotic properties of the sample spiked eigenvector remain consistent, provided that ν_p = o(p^2), aligning with our theoretical results. However, when ν_p = p^2, the inner product between the population's spiked eigenvector and its corresponding sample counterpart converges to a different value compared to the cases where ν_p = o(p^2). An interesting observation emerges: in our setting, a higher divergence rate of ν_p consistently leads to a larger angle between the population's spiked eigenvector and its corresponding sample counterpart. § CONCLUDING DISCUSSIONIn this paper, we rigorously establish a joint CLT for bilinear forms, specifically those related to the resolvent of a random covariance matrix under ECS. Our analysis reveals a phase transition phenomenon, adding a nuanced dimension to our understanding of elliptical distributions.Furthermore, we emphasize the practical importance of this CLT by showcasing its efficacy in exploring eigenvector statistics and the spiked model.Through our investigation, we unveil consistent limiting properties for spiked eigenvalues and eigenvectors across a diverse set of elliptical distributions. This discovery underscores the robustness and adaptability of statistical tools originally designed for the spiked model under a Gaussian distribution. For a detailed exposition of our results and proofs, we refer the reader to the appendix following this concluding discussion. Our work contributes to advancing the understanding of statistical properties in the context of random covariance matrices, particularly under ECS. § PROOF OF THE MAIN THEOREMS§.§ Some definitions and preliminary results We initiate our proof by introducing crucial notation and some preliminary results. Let 𝐫_k=√(c_n) Γ_n 𝐮_k, and ξ_k=m_p^-1/4ρ_k.
The matrices (z), _k(z), and _k j(z) are defined as follows:(z)=∑_k=1^nξ_k^2_k_k^T-z_p,_k(z)=(z)-ξ_k^2_k_k^T,_k j(z)=_k(z)-ξ_j^2_j_j^T.Additionally, we introduce the matrices _k(z) and _kj(z), where_k(z)= ∑_j<kξ_j^2_j_j^T+∑_j>kξ̆_j^2_j_j^T-z_p,_kj(z)=_k(z)-ξ_j^2_j_j^T, j<k, _k(z)-ξ̆_j^2_j_j^T, j>k.Here,ξ̆_k_k+1,…,ξ̆_n_n are independent copies of ξ_k_k+1,…,ξ_n_n. The conditional expectation given the samples ξ_1_1,ξ_2_2,…,ξ_k_k is denoted as _k. Moreover, we introduceβ_k(z)= 1/1+ξ_k^2_k^T_k^-1(z)_k, b_k(z)=1/1+ξ_k^2/n_k^-1(z)_n,ψ_k(z)=1/1+ξ_k^2 g_1n^0(z), β_kj(z)= 1/1+ξ_j^2_j^T_kj^-1(z)_j,ϕ_k(z)=1/1+ξ_k^2/n^-1(z)_n, ψ̆_k(z)=1/1+ξ̆_k^2g_1n^0(z),γ_k(z)=_k^T_k^-1(z)_k-1/n_k^-1(z)_n, η_k(z)=_k^T_k^-1(z)_k-1/n^-1(z)_n,ε_k1(z)=_k^T A_k^-1(z) π_n2π_n1^T A_k^-1(z) _k-1/nπ_n1^T A_k^-1(z) _nA_k^-1(z) π_n2,ε_k2(z)=_k^T A_k^-1(z) π_n4π_n3^T A_k^-1(z) _k-1/nπ_n3^T A_k^-1(z) _nA_k^-1(z) π_n4, The validity of the following inequality in the appropriate domainz∈𝒵 can be easily established as in <cit.>:max(|b_k(z)|,|ϕ_k(z)|, |ψ_k(z)|, |ψ̆_k(z)|, |β_kj(z)|, |β_k(z)|)≤ C.Given that the support of H_2 is bounded, it follows that ξ_1^2q≤ C_q. Employing the martingale difference decomposition method, Lemma <ref>, and Lemma <ref>, the following inequality is obtained:|^-1(z)_n-^-1(z)_n|^q= |∑_k=1^n_k-_k-1β_k(z)ξ_k^2_k^T_k^-1(z)_n_k^-1(z)_k|^q ≤ C_qn^q/2.This implies|η_k(z)-γ_k(z)|^q≤ C_q/n^q|_k^-1(z)_n-^-1(z)_n|^q+C_q/n^q|^-1(z)_n-^-1(z)_n|^q = C_q/n^q|β_k(z)ξ_k^2_k^T_k^-1(z)_n_k^-1(z)_k|^q+C_q/n^q/2≤C_qn^-q/2.Therefore,|η_k(z)|^q ≤ C_qn^-q/2.§.§ Proof of Theorem <ref>We proceed with the proof by establishing the almost sure convergence of the random part:π_n1^T^-1(z)π_n2 m_12(z).This can be divided into two parts for comprehensive demonstration:(a): For the random part π_n1^T^-1(z)π_n2-π_n1^T^-1(z)π_n20, (b): For the nonrandom part π_n1^T^-1(z)π_n2→ m_12(z). §.§.§ Almost sure convergence of the random partIn this section, we aim to demonstrate the almost sure convergence of the random part:π_n1^T^-1(z)π_n2-π_n1^T^-1(z)π_n20. Let _0 represent the unconditional expectation. Utilizing the inversion formula+αβ^T^-1=^-1-1/1+β^T^-1α^-1αβ^T^-1,we obtainπ_n1^T^-1(z)π_n2- π_n1^T^-1(z)π_n2 =∑_k=1^n_k-_k-1π_n1^T^-1(z)-_k^-1(z)π_n2 = -∑_k=1^n_k-_k-1β_k(z)ξ_k^2π_n1^T_k^-1(z)_k_k^T_k^-1(z)π_n2.Notice that, from Lemma <ref>, we have|∑_k=1^n_k-_k-1β_k(z)ξ_k^2π_n1^T_k^-1(z)_k_k^T_k^-1(z)π_n2|^4 ≤ C∑_k=1^n_k-1|π_n1^T_k^-1(z)_k_k^T_k^-1(z)π_n2|^2^2+Cδ_n^2p∑_k=1^n|π_n1^T_k^-1(z)_k_k^T_k^-1(z)π_n2|^4≤C/n^2.This implies π_n1^T^-1(z)π_n2-π_n1^T^-1(z)π_n20.§.§.§ Convergence of π_n1^T^-1(z)π_n2 Denote ℍ_n(z)=ξ_1^2ψ_1^2_n-z_p=-zg_2n^0(z)_n+_p. Then (z)-ℍ_n(z)=∑_k=1^nξ_k^2_k_k^T-ξ_1^2ψ_1^2_n. Using (<ref>) and β_k(z)=ϕ_k(z)-β_k(z)ϕ_k(z)ξ_k^2η_k(z),we haveπ_n1^T^-1(z)π_n2-π_n1^Tℍ_n(z)π_n2= -π_n1^Tℍ_n^-1(z)∑_k=1^nξ_k^2_k_k^T-ξ_1^2ψ_1^2_n^-1(z)π_n2= -1/n∑_k=1^nξ_k^2ϕ_k(z)-ψ_k(z)π_n1^Tℍ_n^-1(z)_n_k^-1(z)π_n2+∑_k=1^nβ_k(z)ϕ_k(z)η_k(z)ξ_k^4π_n1^Tℍ_n^-1(z)_k_k^T_k^-1(z)π_n2-1/n∑_k=1^nξ_1^2ψ_1^2π_n1^Tℍ_n^-1(z)_n^-1_k(z)-^-1(z)π_n2. Applying Lemma <ref> and (<ref>), we have|π_n1^T^-1(z)π_n2-π_n1^Tξ_1^2ψ_1^2_n-z_p^-1π_n2| ≤C|1/n^-1(z)_n-g_1n^0(z)|^2+C∑_k=1^n^1/2|η_k(z)|^2+C/n =o(1).Using ξ_1^2ψ_1^2=∫x/1+xg_1n^0(z)dH_2n(x)=-zg_2n^0(z), we obtainπ_n1^T^-1(z)π_n2+z^-1π_n1^T_p+g_2n^0(z)_n^-1π_n2→ 0.This completes the proof of Theorem <ref>. §.§ The proof of Theorem <ref>In establishing the theorem, we adopt a methodology akin to the classical procedure developed in <cit.> under ICS. 
This involves undertaking a martingale difference decomposition followed by the application of the CLT for martingales. Notably, our approach draws inspiration from the work in <cit.>, but with a significant departure as we streamline the intricate proofs substantially. This is achieved through the judicious use of the replacement of samples strategy. We posit that our simplified approach holds intrinsic interest in its own right. By virtue of the property of _j, we rewrite _j=_j/_j, _n=(_1,⋯,_n),_n=_ndiag(ρ_1/_1,⋯,ρ_n/_n),_n=1/n_n_ndiag(pξ_1^2/_1^2,⋯,pξ_n^2/_n^2)_n^T_n^Twhere _j∼ N( 0_p,_p).Let M_n1(z)=M_n1^1(z)+M_n1^2(z) and M_n2(z)=M_n2^1(z)+M_n2^2(z),whereM_n1^1(z)= √(p)(π_n1^T^-1(z)π_n2-π_n1^T^-1(z)π_n2), M_n2^1(z)= √(p)(π_n3^T^-1(z)π_n4-π_n3^T^-1(z)π_n4), M_n1^2(z)= √(p)(π_n1^T^-1(z)π_n2+z^-1π_n1^T (_p+g_2n^0(z)_n)^-1π_n2), M_n2^2(z)= √(p)(π_n3^T^-1(z)π_n4+z^-1π_n3^T (_p+g_2n^0(z)_n)^-1π_n4).Then the outline of the proof is as follows:(a): [ M_n1^1(z), M_n2^1(z) ]^T converges weakly to a Gaussian process M(z);(b): {M_n1(z)} and {M_n2(z)} both form a tight sequence on 𝒵;(c): M_n1^2(z) and M_n2^2(z) tend to zero for z∈𝒵.In the subsequent sections, we will systematically follow the outlined plan, proceeding step by step.§.§.§ Convergence in finite dimensionsIn this section, we aim to establish the convergence in distribution of the sum∑_ℓ=1^2∑_j=1^rα_jℓM_nℓ^1(z_j)for any positive integer r and complex numbers a_jℓ, where j=1,2, and ℓ=1,⋯,r. This sum converges to a Gaussian random variable.From (<ref>) and β_k(z)=b_k(z)-ξ_k^2β_k(z)b_k(z)γ_k(z), it follows that√(p)[π_n1^T^-1(z)π_n2-π_n1^T^-1(z)π_n2] = -√(p)∑_k=1^n_k-_k-1ξ_k^2b_k(z)ε_k(z)+√(p)∑_k=1^n_k-_k-1ξ_k^4β_k(z)b_k(z)γ_k(z)ε_k(z)-√(p)/n∑_k=1^n_k-_k-1ξ_k^2b_k(z)π_n1^T_k^-1(z)_n_k^-1(z)π_n2+√(p)/n∑_k=1^n_k-_k-1ξ_k^4β_k(z)b_k(z)γ_k(z)π_n1^T_k^-1(z)_n_k^-1(z)π_n2.By computation and utilizing Lemma <ref>, we obtain|√(p)/n∑_k=1^n_k-_k-1ξ_k^4β_k(z)b_k(z)γ_k(z)π_n1^T_k^-1(z)_n_k^-1(z)π_n2|^2 ≤ C/n∑_k=1^n(ξ_k^8)|γ_k(z)|^2≤C/n.and |√(p)∑_k=1^n_k-_k-1ξ_k^4β_k(z)b_k(z)γ_k(z)ε_k(z)|^2≤ Cp∑_k=1^n^1/2|γ_k(z)|^4^1/2|ε_k(z)|^4 ≤C/n.Hence, we see√(p)[π_n1^T^-1(z)π_n2-π_n1^T^-1(z)π_n2] =-√(p)∑_k=1^n_k-_k-1ξ_k^2b_k(z)ε_k1(z)-√(p)/n∑_k=1^n_k-_k-1ξ_k^2b_k(z)π_n1^T_k^-1(z)_n_k^-1(z)π_n2+o_p(1).By (<ref>) and (<ref>), one finds|b_k(z)-ψ_k(z)|^2 ≤ C/n^2|ξ_k^2_k^T_k^-1(z)_n_k^-1(z)_k|^2+o(1)=o(1),where used the fact that n^-1^-1(z)_n→ g_1(z).Therefore, we deduce from (<ref>) √(p)[π_n1^T^-1(z)π_n2-π_n1^T^-1(z)π_n2] =-√(p)∑_k=1^nξ_k^2ψ_k(z)_kε_k1(z)-√(p)/n∑_k=1^nξ_k^2ψ_k(z)-ξ_k^2ψ_k(z)_kπ_n1^T_k^-1(z)_n_k^-1(z)π_n2+o_p(1)≜∑_k=1^n Y_k1(z)+o_p(1).Applying the same procedure, it becomes evident√(p)[π_n3^T^-1(z)π_n4-π_n3^T^-1(z)π_n4] =-√(p)∑_k=1^nξ_k^2ψ_k(z)_kε_k2(z)-√(p)/n∑_k=1^nξ_k^2ψ_k(z)-ξ_k^2ψ_k(z)_kπ_n3^T_k^-1(z)_n_k^-1(z)π_n4+o_p(1)≜∑_k=1^n Y_k2(z)+o_p(1).Our next objective is to demonstrate that, for any positive integer r > 0, the sum∑_ℓ=1^2∑_j=1^rα_jℓ∑_k=1^nY_kℓ(z_j)will converge in distribution to a Gaussian random variable. For anyz_1,…,z_r∈ℂ_+α_11,α_12,…,α_r1,α_r2∈ℝand any ε>0, we have∑_k=1^n(|∑_ℓ=1^2∑_j=1^rα_jℓY_kℓ(z_j)|^2I(|∑_ℓ=1^2∑_j=1^rα_jℓY_kℓ(z_j)|≥ε)) ≤C/ε^2∑_k=1^n∑_ℓ=1^2∑_j=1^rα_jℓ^4|Y_kℓ(z_j)|^4→0where |Y_kℓ(z)|^4≤ Cpξ_k^8|ε_kℓ(z)|^4+C/n^2ξ_k^8≤C/n^2. This implies the fulfillment of the Lindeberg condition for Lemma <ref>. 
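For orientation, we pause to record the two ingredients required by the martingale CLT (Lemma <ref>) in the present notation: the Lindeberg-type condition∑_k=1^n 𝔼(|Y_kℓ(z)|^2 I(|Y_kℓ(z)|≥ε))→ 0,which has just been verified, and the convergence in probability of the conditional covariance∑_k=1^n 𝔼_k-1(Y_k1(z_1)Y_k2(z_2))to a deterministic limit, which is established next.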
Then, we shall prove for z_1,z_2∈𝒵,∑_k=1^n_k-1[(α_11Y_k1(z_1)+α_12Y_k2(z_1))(α_21Y_k1(z_2)+α_22Y_k2(z_2))]= α_11α_21∑_k=1^n_k-1(Y_k1(z_1)Y_k1(z_2))+α_12α_22∑_k=1^n_k-1(Y_k2(z_1)Y_k2(z_2))+α_11α_22∑_k=1^n_k-1(Y_k1(z_1)Y_k2(z_2))+α_12α_21∑_k=1^n_k-1(Y_k2(z_1)Y_k1(z_2))tends to a constant in probability. We will now demonstrate the derivation of the limit for∑_k=1^n_k-1(Y_k1(z_1)Y_k2(z_2)),and the others follow a similar procedure.To begin with, it is easy to get from Lemma <ref> that∑_k=1^n_k-1(Y_k1(z_1)Y_k2(z_2)) = c_nh_n1(z_1,z_2)/n∑_k=1^n_kπ_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n4π_n3^T_k^-1(z_2)_n_k^-1(z_1)π_n2+c_nh_n1(z_1,z_2)/n∑_k=1^n_kπ_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n3π_n4^T_k^-1(z_2)_n_k^-1(z_1)π_n2+ph_n2(z_1,z_2)/n^2∑_k=1^n_kπ_n1^T_k^-1(z_1)_n_k^-1(z_1)π_n2_kπ_n3^T_k^-1(z_2)_n_k^-1(z_2)π_n4+o_p(1) = c_nh_n1(z_1,z_2)/n∑_k=1^n_kπ_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n4_kπ_n3^T_k^-1(z_2)_n_k^-1(z_1)π_n2+c_nh_n1(z_1,z_2)/n∑_k=1^n_kπ_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n3_kπ_n4^T_k^-1(z_2)_n_k^-1(z_1)π_n2+c_nh_n2(z_1,z_2)/n∑_k=1^n_kπ_n1^T_k^-1(z_1)_n_k^-1(z_1)π_n2_kπ_n3^T_k^-1(z_2)_n_k^-1(z_2)π_n4+o_p(1) ≜ ℐ_1+ℐ_2+ℐ_3+o_p(1)where the last equality is due to|π_n1^T_k^-1(z_1)-_k_k^-1(z_1)_n_k^-1(z_2)π_n4|^2= |∑_j=k+1^nπ_n1^T_j-_j-1_k^-1(z_1)-_kj^-1(z_1)_n_k^-1(z_2)π_n4|^2 =On^-1 ,andh_n1(z_1,z_2)= ∫x^2dH_2n(x)/(1+x g_1n^0(z_1))(1+x g_1n^0(z_2))→z_1g_2(z_1)-z_2g_2(z_2)/g_1(z_1)-g_1(z_2),h_n2(z_1,z_2)= h_n1(z_1,z_2)-∫xdH_2n(x)/1+x g_1n^0(z_1)∫xdH_2n(x)/1+x g_1n^0(z_2)→ z_1z_2m(z_1)g_2(z_2)-m(z_2)g_2(z_1)/g_1(z_1)-g_1(z_2).Next, the matrix _k^-1(z) can be further decomposed as_k^-1(z)=𝕋_n(z)+_k(z)+_k(z)+_k(z)+_k(z),where𝕋_n(z) =-(z-n-1/nξ_1^2ψ_1(z)·_n)^-1, _k(z) =∑_j≠kξ_j^2ψ_j(z) 𝕋_n(z) (_j_j^T-1/n_n)_kj^-1(z), _k(z) =∑_j≠k(β_kj(z)-ψ_j(z))ξ_j^2 𝕋_n(z) _j_j^T_kj^-1(z), _k(z) =1/n∑_j≠kξ_j^2ψ_j(z)-ξ_j^2ψ_j(z)𝕋_n(z) _n_kj^-1(z), _k(z)=-1/nξ_1^2ψ_1(z)𝕋_n(z)∑_j≠kβ_jk(z)ξ_j^2_kj^-1(z)_j_j^T_kj^-1(z).It is easy to obtain_kz_1π_n1^T_k^-1(z_1)π_n4-z_2π_n1^T_k^-1(z_2)π_n4= - π_n1^T_p+g_2n^0(z_1)_n^-1π_n4+π_n1^T_p+g_2n^0(z_2)_n^-1π_n4+o_p(1)= g_2n^0(z_1)-g_2n^0(z_2)π_n1^T_p+g_2n^0(z_1)_n^-1_n_p+g_2n^0(z_2)_n^-1π_n4+o_p(1)g_2(z_1)-g_2(z_2)lim_n→∞π_n1^T_p+g_2n^0(z_1)_n^-1_n_p+g_2n^0(z_2)_n^-1π_n4.On the other hand, rewrite _kz_1π_n1^T_k^-1(z_1)π_n4-z_2π_n1^T_k^-1(z_2)π_n4 as_k[π_n1^T_k^-1(z_1)z_1_k^-1(z_2)-z_2_k^-1(z_1)_k^-1(z_2)π_n4]= _k[π_n1^T_k^-1(z_1)z_1-z_2∑_j=1^k-1ξ_j^2_j_j^T+∑_j=k+1^nz_1ξ̆_j^2_j_j^T-z_2ξ_j^2_j_j^T_k^-1(z_2)π_n4].We find the right hand side of the above equality equals toz_1-z_2∑_j=1^k-1_k[ξ_j^2ψ_j(z_1)ψ_j(z_2)π_n1^T_kj^-1(z_1)_j_j^T_kj^-1(z_2)π_n4]+z_1∑_j=k+1^n_k[ξ̆_j^2ψ̆_j(z_2)π_n1^T_k^-1(z_1)_j_j^T_kj^-1(z_2)π_n4]-z_2∑_j=k+1^n_k[ξ_j^2ψ_j(z_1)π_n1^T_kj^-1(z_1)_j_j^T_k^-1(z_2)π_n4]+o_p(1)= z_1-z_2∑_j=1^k-1ξ_j^2ψ_j(z_1)ψ_j(z_2)_k[π_n1^T_kj^-1(z_1)_j_j^T-1/n_n_kj^-1(z_2)π_n4]+z_1-z_2/n∑_j=1^k-1ξ_j^2ψ_j(z_1)ψ_j(z_2)_k[π_n1^T_kj^-1(z_1)_n_kj^-1(z_2)π_n4]+n-k/nw_n(z_1,z_2)_k[π_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n4]+o_p(1)=z_1-z_2(k-1)/nξ_1^2ψ_1(z_1)ψ_1(z_2)_k[π_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n4]+n-k/nw_n(z_1,z_2)_k[π_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n4]+o_p(1)=w_n(z_1,z_2)1-k-1/nd_n(z_1,z_2)_k[π_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n4]+o_p(1),where the last second equality is from|∑_j=1^k-1ξ_j^2ψ_j(z_1)ψ_j(z_2)_k[π_n1^T_kj^-1(z_1)_j_j^T-1/n_n_kj^-1(z_2)π_n4]|^2 = ξ_1^4ψ_1^2(z_1)ψ_1^2(z_2)∑_j=1^k-1|_k[π_n1^T_kj^-1(z_1)_j_j^T-1/n_n_kj^-1(z_2)π_n4]|^2+∑_j≠ t{ξ_j^2ξ_t^2ψ_j(z_1)ψ_j(z_2)ψ_t(z_1)ψ_t(z_2)_k[π_n1^T_kj^-1(z_1)_j_j^T-1/n_n_kj^-1(z_2)π_n4]·_k[π_n1^T_kt^-1(z_1)_t_t^T-1/n_n_kt^-1(z_2)π_n4]} 
=o(1)andw_n(z_1,z_2)=z_1ξ_1^2ψ_1(z_2)-z_2ξ_1^2ψ_1(z_1)→ z_1z_2g_2(z_1)-g_2(z_2),d_n(z_1,z_2)=w_n(z_1,z_2)-z_1-z_2ξ_1^2ψ_1(z_1)ψ_1(z_2)/w_n(z_1,z_2)→ d(z_1,z_2).From these, one obtain_k[π_n1^T_k^-1(z_1)_n_k^-1(z_2)π_n4]_k[π_n2^T_k^-1(z_1)_n_k^-1(z_2)π_n3]= 1/w_n^2(z_1,z_2)1-k-1/nd_n(z_1,z_2)^2_kz_1π_n1^T_k^-1(z_1)π_n4-z_2π_n1^T_k^-1(z_2)π_n4·_kz_1π_n2^T_k^-1(z_1)π_n3-z_2π_n2^T_k^-1(z_2)π_n3 +o_p(1),which yieldsℐ_1 h_1(z_1,z_2)lim_n→∞π_n1^T_p+g_2n^0(z_1)_n^-1_n_p+g_2n^0(z_2)_n^-1π_n4·lim_n→∞π_n2^T_p+g_2n^0(z_1)_n^-1_n_p+g_2n^0(z_2)_n^-1π_n3 = h_1(z_1,z_2)r_14(z_1,z_2)r_23(z_1,z_2).Continuing with the same procedure, we deduceℐ_2 h_1(z_1,z_2)r_13(z_1,z_2)r_24(z_1,z_2). We are now prepared to address ℐ_3. It is known that _kπ_n1^T_k^-1(z_1)π_n2+z_1^-1π_n1^T_p+g_2n^0(z_1)_n^-1π_n20 z_1_kπ_n1^T_k^-2(z_1)π_n2+z_1z_1^-1π_n1^T_p+g_2n^0(z_1)_n^-1π_n2' 0 .Furthermore, it follows that_kπ_n1^T_k^-1(z_1)π_n2= ∑_j≠ k_kξ_j^2π_n1^T_k^-1(z_1)_j_j^T_k^-1(z_1)π_n2-z_1_kπ_n1^T_k^-2(z_1)π_n2= ∑_j< kξ_j^2ψ_j^2(z_1)_kπ_n1^T_kj^-1(z_1)_j_j^T-1/n_n_kj^-1(z_1)π_n2+1/n∑_j≠ k_kξ_j^2ψ_j^2(z_1)π_n1^T_kj^-1(z_1)_n_kj^-1(z_1)π_n2-z_1_kπ_n1^T_k^-2(z_1)π_n2+o_p(1)= ξ_1^2ψ_1^2(z_1)_kπ_n1^T_k^-1(z_1)_n_k^-1(z_1)π_n2-z_1_kπ_n1^T_k^-2(z_1)π_n2+o_p(1).Consequently, we see_k(π_n1^T_k^-1(z_1) _n_k^-1(z_1)π_n2)=_kπ_n1^T_k^-1(z_1)π_n2+z_1_kπ_n1^T_k^-2(z_1)π_n2/ξ_1^2ψ_1^2(z_1)+o_p(1) = -[g_2n^0(z_1)]'/z_1g_2n^0(z_1)π_n1^T_p+g_2n^0(z_1)_n^-2_nπ_n2+o_p(1) → -g_2'(z_1)/z_1g_2(z_1)lim_n→∞π_n1^T_p+g_2n^0(z_1)_n^-2_nπ_n2.This impliesℐ_3 h_2(z_1,z_2)r_12(z_1)r_34(z_2).Combining (<ref>), (<ref>), and (<ref>), we conclude∑_k=1^n_k-1(Y_k1(z_1)Y_k2(z_2))h_1(z_1,z_2)r_14(z_1,z_2)r_23(z_1,z_2)+h_1(z_1,z_2)r_13(z_1,z_2)r_24(z_1,z_2)+h_2(z_1,z_2)r_12(z_1)r_34(z_2). §.§.§ Tightness of M_nj(z),j=1,2We now proceed with the proof of tightness. Initially, owing to the similarity between M_n1(z) and M_n2(z), it suffices to demonstrate the tightness of the sequence of random functions M_n1(z) for z∈𝒵. Utilizing Theorem 12.3 of Billingsley and Section <ref>, it is only necessary to showsup_n;z_1,z_2∈𝒵|M_n1^1(z_1)-M_n1^1(z_2)|^2/|z_1-z_2|^2≤ C.Note that from (<ref>) M_n1^1(z_1)-M_n1^1(z_2)/z_1-z_2=√(p)∑_k=1^n(_k-_k-1)π_n1^T^-1(z_1)^-1(z_2)π_n2=-√(p)∑_k=1^n(_k-_k-1)β_k(z_2)ξ_k^2π_n1^T_k^-1(z_1)_k^-1(z_2)_k_k^T_k^-1(z_2)π_n2-√(p)∑_k=1^n(_k-_k-1)β_k(z_1)ξ_k^2π_n1^T_k^-1(z_1)_k_k^T_k^-1(z_1)_k^-1(z_2)π_n2+√(p)∑_k=1^n(_k-_k-1)β_k(z_1)β_k(z_2)ξ_k^4π_n1^T_k^-1(z_1)_k_k^T_k^-1(z_1)_k^-1(z_2)_k_k^T_k^-1(z_2)π_n2 ≜ 𝒥_1(z_1,z_2)+𝒥_2(z_1,z_2)+𝒥_3(z_1,z_2).Therefore, the ensuing steps aim to demonstratesup_n;z_1,z_2∈𝒵|𝒥_t(z_1,z_2)|^2≤ C, t=1,2,3.Before proving (<ref>), we provide moment bounds for specific random functions for z∈𝒵 without delving into the details.The first set of bounds pertains to any positive qmax{^-1(z)^q,_k^-1(z)^q,_kj^-1(z)^q}≤ C_q.To save space and avoid redundancy, we omit this part and direct the reader to <cit.> and <cit.> for more comprehensive details. 
The second set of bounds is given by: |β_k(z)|^q≤ C_q, and |ϕ_k(z)|^q≤ C_q.Applying Lemma <ref>, (<ref>), and (<ref>), it yields|𝒥_1(z_1,z_2)|^2 ≤ p∑_k=1^n|β_k(z_2)ξ_k^2π_n1^T_k^-1(z_1)_k^-1(z_2)_k_k^T_k^-1(z_2)π_n2|^2 ≤ p∑_k=1^n^1/2|β_k(z_2)|^4^1/2ξ_k^8^1/2|π_n1^T_k^-1(z_1)_k^-1(z_2)_k_k^T_k^-1(z_2)π_n2|^4 ≤ C/n∑_k=1^n^1/2_k^-1(z_1)^4_k^-1(z_2)^8≤ C.Using the same argument, we can derive|𝒥_2(z_1,z_2)|^2≤ C and|𝒥_3(z_1,z_2)|^2≤ C.Therefore, we have completed the proof of tightness.§.§.§ Convergence of M_nj^2(z),j=1,2.A slight modification of the argument in <cit.> allows us to extend their considered domain to 𝒵. Consequently, we establish that sup_n,z∈𝒵ℍ_n^-1(z)<∞.Utilizing (<ref>) and Lemma <ref>, we obtainM_n1^2(z) = -√(p)ξ_1^2ϕ_1(z)-ψ_1(z)π_n1^Tℍ_n^-1(z)_n^-1(z)π_n2+√(p)∑_k=1^nβ_k(z)ϕ_k(z)η_k(z)ξ_k^4π_n1^Tℍ_n^-1(z)_k_k^T_k^-1(z)π_n2+o(1) ≜ 𝒦_1(z)+𝒦_2(z)+o(1).By utilizing (<ref>) and Lemma <ref>, it follows that𝒦_2(z)= √(p)∑_k=1^nϕ_k^2(z)η_k(z)ξ_k^4π_n1^Tℍ_n^-1(z)_k_k^T_k^-1(z)π_n2-√(p)∑_k=1^nβ_k(z)ϕ_k^2(z)η_k^2(z)ξ_k^6π_n1^Tℍ_n^-1(z)_k_k^T_k^-1(z)π_n2 = -√(p)∑_k=1^nβ_k(z)ϕ_k^2(z)η_k^2(z)ξ_k^6π_n1^Tℍ_n^-1(z)_k_k^T_k^-1(z)π_n2+o(1).Combining the above equality and (<ref>), one finds|𝒦_2(z)|≤ C/√(n)∑_k=1^n^1/2|η_k(z)|^4≤C/√(n).Moreover, arguments from <cit.> show thatsup_n,z∈𝒵√(p)1/n^-1(z)_n-g_1n^0(z)→0.Thus, we have|𝒦_3(z)|≤C|√(p)ξ_1^4ϕ_1(z)ψ_1(z)1/n^-1(z)_n-g_1n^0(z)|→0.Together with (<ref>)-(<ref>), we conclude that for z∈𝒵,M_n1^2(z)→0. Additionally, employing the same procedure yieldsM_n2^2(z)→0 z∈𝒵.§.§ Proof of Theorem <ref>Note that, based on the discussion in Section <ref>, following the approach of <cit.>, the proof of this lemma requires identifying a suitable domain 𝒵 and truncating the corresponding stochastic process. Then the desired result is indeed consequences of CLT for bilinear forms. To achieve this, let𝒞_n=𝒞∩{z:(z)≥ n^-1ε_n}. The truncated process M̂_n(z) is defined asM̂_n(z)= M_n(z),for z∈𝒞_n M_n(x_r+ sign( z)· in^-1ε_n),for x=x_r,v∈[0,n^-1ε_n], M_n(x_ℓ+ sign( z)· in^-1ε_n),for x=x_ℓ,v∈[0,n^-1ε_n]. It can be verified with probability 1M_n(z)-M̂_n(z)→ 0,for z∈𝒞.Let x_r be a number which is greater than the right endpoint of interval (<ref>) and let x_ℓ be a negative number if the left endpoint of interval (<ref>) is zero, otherwise letx_ℓ∈0,alim inf_nλ_min^_nI_(0,1)(c)1-√(c)^2.Let v_0>0 be arbitrary and 𝒞_u={u± iv_0:u∈[x_ℓ,x_r]}. Then𝒞=𝒞_u∪{x_ℓ+iv:v∈[-v_0,v_0]}∪{x_r+iv:v∈[-v_0,v_0]}.Hence, we conclude that 𝒞 is an appropriate domain, and the results follow directly from Theorem <ref>. § AUXILIARY LEMMASWe present the following lemmas, which are used in the proofs above. Let 𝐀=(a_jk) be a p× p nonrandom matrix and 𝐫=√(c)_n_n𝐮 where 𝐮∼ U(S^p-1). Then for q≥2,|𝐫^T𝐀𝐫-1/ntr(𝐀_n)|^q ≤ C_qp^-qr^q/2𝐀_n^q,where r=rank(𝐀) and C_q is a constant depending on q only.From Lemma 5 in <cit.>, we have|𝐮^T_n^T𝐀_n 𝐮-1/ptr (𝐀_n)|^q ≤C_q/p^q[tr(𝐀_n𝐀^T_n)^q/2+tr(𝐀_n𝐀^T_n)^q/2],where C_q is a positive constant depending only on q. Hence, we obtain|𝐫^T𝐀𝐫-1/ntr(𝐀_n)|^q≤ C_qp^-qr^q/2𝐀_n^q.This completes the proof of the lemma.Let 𝐀 and 𝐁 be two p× p nonrandom matrices, and 𝐮 is uniformly distributed on the unit sphere S^p-1 in ℝ^p. Then we have(𝐮^T𝐀𝐮-1/ptr𝐀)(𝐮^T𝐁𝐮-1/ptr𝐁)= tr (𝐀𝐁^T)+tr (𝐀𝐁)/p(p+2)-2tr(𝐀)tr(𝐁)/p^2(p+2).Suppose that for each n, Y_n1, Y_n2, …, Y_nr_n is a real martingale difference sequence with respect to the increasing σ-field {ℱ_nj} with second moments. 
If, as n →∞, ∑_j=1^r_nE(Y_nj^2| ℱ_n, j-1) i.p.⟶σ^2, where σ^2 is a positive constant, and, for each ε > 0,∑_j=1^r_nE(Y_nj^2 I_(|Y_nj| ≥ε)) → 0, then ∑_j=1^r_n Y_nr_n𝒟→ N(0, σ^2). Let {X_k} be a real martingale difference sequence with respect to the increasing σ-field ℱ_k, and let _k denote conditional expectation with respect to ℱ_k. Then for q ≥ 2, |∑ X_k|^q ≤ C_q [(∑_k-1|X_k|^2)^q/2 + ∑|X_k|^q]. Let {X_k} be a real martingale difference sequence with respect to the increasing σ-field ℱ_k, then for q ≥ 2, |∑ X_k|^q ≤ C_q(∑ |X_k|^2)^q/2.23[Anderson2003]Anderson03I[author] Anderson, T. W.T. W. (2003). An introduction to multivariate statistical analysis. Third Edition. Wiley New York. [Bai, Li and Pan2019]Bai2019[author] Bai, ZhidongZ., Li, HuiqinH.Pan, GuangmingG. (2019). Central limit theorem for linear spectral statistics of large dimensional separable sample covariance matrices. Bernoulli251838–1869. 10.3150/18-BEJ1038[Bai, Miao and Pan2007]BaiM07A[author] Bai, Zhi DongZ. D., Miao, Bai QiB. Q.Pan, Guang MingG. M. (2007). On asymptotics of eigenvectors of large sample covariance matrix. The Annals of Probability351532–1572. 10.1214/009117906000001079[Bai and Silverstein2004]BaiS04C[author] Bai, Zhi DongZ. D.Silverstein, Jack W.J. W. (2004). CLT for linear spectral statistics of large-dimensional sample covariance matrices. The Annals of Probability32553–605. [Bai and Silverstein2010]BaiS10S[author] Bai, Zhi DongZ. D.Silverstein, Jack W.J. W. (2010). Spectral analysis of large dimensional random matrices. Second Edition. Springer Verlag. [Bao et al.2022]Bao2022[author] Bao, ZhigangZ., Ding, XiucaiX., Wang, JingmingJ.Wang, KeK. (2022). Statistical Inference for Principal Components of Spiked Covariance Matrices. Annals of Statistics501144–1169. 10.1214/21-AOS2143[Billingsley1995]Billingsley95P[author] Billingsley, P.P. (1995). Probability and measure. John Wiley&Sons, New York. [Bloemendal et al.2014]BloemendalE14I[author] Bloemendal, AlexA., Erdos, LászlóL., Knowles, AnttiA., Yau, Horng TzerH. T.Yin, JunJ. (2014). Isotropic local laws for sample covariance and generalized Wigner matrices. Electronic Journal of Probability19. 10.1214/EJP.v19-3054[Dumitriu and Edelman2002]Dumitriu2002[author] Dumitriu, IoanaI.Edelman, AlanA. (2002). Matrix models for beta ensembles. Journal of Mathematical Physics. 10.1063/1.1507823[Gao et al.2017]GaoH17H[author] Gao, JitiJ., Han, XiaoX., Pan, GuangmingG.Yang, YanrongY. (2017). High dimensional correlation matrices: the central limit theorem and its applications. Journal of the Royal Statistical Society: Series B (Statistical Methodology)79677–693. 10.1111/rssb.12189[Hu, Li and Zhou2019]Hu2019a[author] Hu, JiangJ., Li, WeimingW.Zhou, WangW. (2019). Central Limit Theorem for Mutual Information of Large MIMO Systems with Elliptically Correlated Channels. IEEE Transactions on Information Theory657168–7180. 10.1109/TIT.2019.2913760[Hu et al.2019]Hu2019[author] Hu, JiangJ., Li, WeimingW., Liu, ZhiZ.Zhou, WangW. (2019). High-dimensional covariance matrices in elliptical distributions with application to spherical test. Annals of Statistics47527–555. 10.1214/18-AOS1699[Jiang and Bai2021]Jiang2021b[author] Jiang, DandanD.Bai, ZhidongZ. (2021). Generalized four moment theorem and an application to CLT for spiked eigenvalues of high-dimensional covariance matrices. Bernoulli27274–294. 10.3150/20-BEJ1237[Johnstone2001]Johnstone01D[author] Johnstone, Iain MI. M. (2001). On the Distribution of the Largest Eigenvalue in Principal Components Analysis. 
The Annals of Statistics29295–327. 10.1214/aos/1009210544[Karoui2009]Karoui09C[author] Karoui, Noureddine E.N. E. (2009). Concentration of measure and spectra of random matrices: Applications to correlation matrices, elliptical distributions and beyond. Annals of Applied Probability192362–2405. 10.1214/08-AAP548[Knowles and Yin2017]KnowlesY17A[author] Knowles, AnttiA.Yin, JunJ. (2017). Anisotropic local laws for random matrices. Probability Theory and Related Fields169257–352. 10.1007/s00440-016-0730-4[Li et al.2023]LiYin2023TAMS[author] Li, HuiqinH., Pan, GuangmingG., Yin, YanqingY.Zhou, WangW. (2023). Separable sample covariance matrices under elliptical populations with applications. Transactions of the American Mathematical Society,To appear. [Onatski2009]onatski2009testing[author] Onatski, AlexeiA. (2009). Testing hypotheses about the number of factors in large factor models. Econometrica771447–1479. [Pan and Zhou2008]PanZ08C[author] Pan, Guang MingG. M.Zhou, WangW. (2008). Central limit theorem for signal-to-interference ratio of reduced rank linear receiver. The Annals of Applied Probability181232–1270. 10.1214/07-AAP477[Tony Cai, Han and Pan2020]TonyCai2020a[author] Tony Cai, T.T., Han, XiaoX.Pan, GuangmingG. (2020). supplement: Limiting laws for divergent spiked eigenvalues and largest nonspiked eigenvalue of sample covariance matrices. Annals of Statistics481255–1280. 10.1214/18-AOS1798[Wen et al.2022]Wentwell2022[author] Wen, JunJ., Xie, JiahuiJ., Yu, LongL.Zhou, WangW. (2022). Tracy-Widom limit for the largest eigenvalue of high-dimensional covariance matrices in elliptical distributions. Bernoulli282941–2967. 10.3150/21-BEJ1443[Wishart1928]Wishart28G[author] Wishart, JohnJ. (1928). The Generalised Product Moment Distribution in Samples from a Normal Multivariate Population. Biometrika20A32–52. [Zhang et al.2022]zhangzheng2022[author] Zhang, ZhixiangZ., Zheng, ShurongS., Pan, GuangmingG.Zhong, Ping-ShouP.-S. (2022). Asymptotic independence of spiked eigenvalues and linear spectral statistics for large sample covariance matrices. The Annals of Statistics502205–2230. 10.1214/22-AOS2183
http://arxiv.org/abs/2312.16373v1
{ "authors": [ "Yanqing Yin", "Wang Zhou" ], "categories": [ "math.ST", "stat.TH", "62H15, 62B20" ], "primary_category": "math.ST", "published": "20231227013738", "title": "Limiting behavior of bilinear forms for the resolvent of sample covariance matrices under elliptical distribution with applications" }
Computing Gerber-Shiu function in the classical risk model with interest using collocation method Zan Yu,   Lianzeng ZhangCorresponding author. School of Finance, Nankai University, Tianjin 300350,China =================================================================================================================== It is by now well-established that modern over-parameterized models seem to elude the bias-variance tradeoff and generalize well despite overfitting noise. Many recent works attempt to analyze this phenomenon in the relatively tractable setting of kernel regression. However, as we argue in detail, most past works on this topic either make unrealistic assumptions, or focus on a narrow problem setup. This work aims to provide a unified theory to upper bound the excess risk of kernel regression for nearly all common and realistic settings. Specifically, we provide rigorous bounds that hold for common kernels and for any amount of regularization, noise, any input dimension, and any number of samples. Furthermore, we provide relative perturbation bounds for the eigenvalues of kernel matrices, which may be of independent interest. These reveal a self-regularization phenomenon, whereby a heavy tail in the eigendecomposition of the kernel provides it with an implicit form of regularization, enabling good generalization. When applied to common kernels, our results imply benign overfitting in high input dimensions, nearly tempered overfitting in fixed dimensions, and explicit convergence rates for regularized regression. As a by-product, we obtain time-dependent bounds for neural networks trained in the kernel regime.§ INTRODUCTION It is by now well-established that various families of highly over-parameterized models tend to generalize well, even when perfectly fitting noisy data <cit.>. This phenomenon seemingly contradicts the classical intuition of the bias-variance trade-off, and motivated a large literature attempting to explain it <cit.>. In particular, a long series of works attempted to understand this phenomenon in the context of kernel methods <cit.>. This is due both to their classical importance and their relation to over-parameterized neural networks via the Neural Tangent Kernel (NTK) and Gaussian Process Kernel (GPK, also known as NNGP) <cit.>. However, there is still a large gap between empirical observations and current theoretical analysis. As we argue in detail in sec:preliminaries, past works tend to either make unrealistic assumptions (often inspired by the analysis of linear regression) that do not hold for common kernels of interest, or are limited to a very narrow problem setup. This is not just a technical limitation, but rather, as we will show, may result in an inaccurate analysis for common kernels in practice. In this paper, we provide simple, sharp, and rigorous upper bounds for the generalization error of kernel regression, which hold under realistic assumptions and can be applied to a wide range of kernels and settings.Specifically, we demonstrate that many kernels have a built-in self-regularization property, meaning that the structure of the kernel provides an implicit form of regularization. This property is characterized through novel relative deviation bounds on the eigenvalues of kernel matrices, which may be of independent interest and may be useful in many other settings.We then apply these tools to analyze the generalization performance of regularized and un-regularized kernel regression. 
Self-regularization causes the kernel to learn a function that generalizes well, even if it can interpolate the data. As such, we provide upper bounds for the excess risk (and its bias and variance components) regardless of the amount of explicit regularization. Importantly, our mild assumptions allow us to apply these bounds to common kernels, including NTKs (and hence provide insights on generalization in neural networks). Specifically, our main results and insights include the following: * Relative concentration bounds for the eigenvalues of kernel matrices (thm:ker_eigenvalues). We derive both upper and lower bounds for the eigenvalues of kernel matrices under very mild assumptions which hold for common kernels. In particular, this highlights a self-regularization phenomenon whereby the eigenvalue of the kernel matrix behave as if one added an explicit regularization term to the training objective.* A general-purpose upper bound for the excess risk in kernel regression (thm:bound_gen). The assumptions of this bound are very mild, and the bound can thus be applied to common kernels in a variety of settings. The bound is sharp without further assumptions, and characterizes both the bias and variance up to universal constants. In particular, no assumption is made on the regularization strength, amount of noise, input dimension, or number of samples.* Benign overfitting in high input dimensions (thm:highdim), meaning that the excess risk goes to zero despite the presence of noise and lack of explicit regularization. In such a high dimensional setting, the frequencies that can be learned are limited, thus preventing any harmful overfitting. In particular, our results apply to the NTK, showing benign overfitting (and the corresponding convergence rates) for neural networks in the kernel regime when the input dimension is large.* Nearly Tempered overfitting in fixed dimensions (thm:min_norm_poly), meaning that the bias goes to zero, and the variance cannot diverge too quickly. As such, when the amount of noise is relatively small, this implies a good excess risk despite a possibly harmful overfitting of noise. As far as we know, this is the first rigorous upper bound for unregularized kernel regression (i.e., min-norm interpolator) in the fixed dimensional setting for generic kernels.* Learning rates for regularized kernel regression(thm:fixed_dimensional), where we bound the bias and variance as a function of the regularization strength. In particular, through a connection with gradient flow, this gives convergence rates for neural networks trained in the kernel regime.Overall, we hope that our paper will contribute to the development of a rigorous general theory analyzing overfitting in kernel regression and, more generally, in over-parameterized models, under minimal and realistic assumptions. The paper is structured as follows: In sec:preliminaries, we formally present our settings and explain the issues with past works. In sec:eigenvalues we present our eigenvalues bounds, and in sec:krr our bias and variance bound for kernel regression. sec:applications specializes to specific cases showcasing the utility of our previous results. In sec:neural_nets we discuss the implications of our results for neural networks. All of our proofs are given in the appendix. Namely, the proofs for sec:eigenvalues, sec:krr and sec:applications are given in app:eigenvalues, app:krr and app:applications respectively. Further appendices are referred to from the text as needed. 
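To preview the objects analyzed below with a runnable toy example (our own illustration; the RBF kernel and all parameter choices here are arbitrary and not tied to the paper's theorems), one can fit kernel ridge regression on noisy data and inspect the test error as the explicit regularization is sent to zero:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit_predict(Xtr, ytr, Xte, ridge):
    """f(x) = k(x, Xtr) @ alpha with alpha = (K + n*ridge*I)^{-1} y;
    ridge -> 0 recovers the min-norm interpolant (via pseudo-inverse)."""
    n = len(Xtr)
    K = rbf_kernel(Xtr, Xtr)
    alpha = (np.linalg.solve(K + n * ridge * np.eye(n), ytr)
             if ridge > 0 else np.linalg.pinv(K) @ ytr)
    return rbf_kernel(Xte, Xtr) @ alpha

rng = np.random.default_rng(0)
Xtr = rng.standard_normal((200, 5)); Xte = rng.standard_normal((1000, 5))
f = lambda X: np.sin(X.sum(1))
ytr = f(Xtr) + 0.3 * rng.standard_normal(200)
for ridge in (1e-1, 1e-3, 0.0):           # 0.0 = interpolating solution
    err = np.mean((krr_fit_predict(Xtr, ytr, Xte, ridge) - f(Xte)) ** 2)
    print(f"ridge={ridge:g}  test MSE={err:.3f}")
```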
§ PRELIMINARIES AND DISCUSSION OF PAST WORKS §.§ Problem Set-UpLet 𝒳 be some input space, μ some measure over 𝒳, and K:𝒳×𝒳→ℝ be a positive definite kernel over 𝒳. We assume that K is a Mercer kernel, meaning that it admits a spectral decomposition K(𝐱,𝐱') = ∑_i=1^∞λ_i ψ_i(𝐱)ψ_i(𝐱'),where λ_i ≥ 0 are the non-negative eigenvalues (not necessarily ordered), and the eigenfunctions ψ_i form an orthonormal basis in L^2_μ(𝒳) (the space of square-integrable functions w.r.t. μ). This is a very mild assumption, as it holds for (but is not limited to) the cases where μ is a probability measure, and either 𝒳 is compact or K is bounded and continuous <cit.>. Let p∈ℕ∪{∞} denote the number of non-zero eigenvalues, and w.l.o.g. let ϕ(𝐱):=(√(λ_i)ψ_i(𝐱))_i=1^p be the non-zero features (with λ_i>0) and ψ(𝐱):=(ψ_i(𝐱))_i=1^p. Since 𝔼_𝐱[ψ(𝐱)ψ(𝐱)^⊤]=I, it is straightforward to verify that the features admit a diagonal and invertible (uncentered) covariance operator given by Σ:=𝔼_𝐱[ϕ(𝐱)ϕ(𝐱)^⊤] = diag(λ_1,λ_2,…).The features are related to the eigenfunctions by ϕ(𝐱)=Σ^1/2ψ(𝐱), and to the kernel by K(𝐱,𝐱')=⟨ϕ(𝐱), ϕ(𝐱') ⟩, where the dot product is the standard one. We will always work in the over-parameterized setting, meaning that throughout the paper we assume that p≥ n. Since oftentimes p=∞, our bounds will not explicitly depend on p (only implicitly, through the eigenvalues of Σ).Let X={𝐱_1,...,𝐱_n}∈𝒳^n be a set of n training points drawn i.i.d. from μ. Let f^*∈ L^2_μ(𝒳) be some target function, and y_i = f^*(𝐱_i) + ϵ_i be the response variable, where the ϵ_i are i.i.d. noise with mean 0 and variance σ_ϵ^2.Given some regularization parameter ρ > 0, Kernel Ridge Regression (KRR) corresponds to minimizing the objectivemin_f∈ℋ 1/n∑_i=1^n (f(𝐱_i) - y_i)^2 + ρ‖f‖_ℋ^2,where ℋ is the RKHS of K, consisting of functions of the form f(𝐱)=⟨θ, ϕ(𝐱)⟩ with ‖θ‖_2<∞. Letting 𝐲=(y_1,…,y_n)^⊤, the minimizer of the KRR problem in (<ref>) is given by f̂(𝐱) = ⟨θ̂(𝐲), ϕ(𝐱) ⟩ with: θ̂(𝐲) = ϕ(X)^⊤ (𝐊+nρ I)^-1𝐲,where 𝐊=K(X,X)=(K(𝐱_i,𝐱_j))_i,j=1^n is the kernel matrix, and using infinite matrix notation, ϕ(X):=[ϕ(𝐱_1),…,ϕ(𝐱_n)]^⊤∈ℝ^n× p are the training features. As ρ→ 0, θ̂ tends to the min-norm interpolator:θ̂(𝐲) = argmin_θ ‖θ‖  s.t.  𝐲=ϕ(X)θ. We can decompose the target function as f^*(𝐱) = ⟨θ^*, ϕ(𝐱) ⟩ + P^⊥ f^*(𝐱), where θ^*∈ℝ^p and P^⊥ is the orthogonal projection onto the space spanned by the eigenfunctions with 0 eigenvalues (from (<ref>)). In particular, if the kernel function K from (<ref>) has no zero eigenvalues, then P^⊥f^*=0. By the orthonormality of the ψ_i, it holds that ‖f^*‖_L^2_μ(𝒳)^2=‖Σ^1/2θ^*‖_2^2 + ‖P^⊥f^*‖_L^2_μ(𝒳)^2. We do not require f^* to be in the RKHS.We will define the excess risk of KRR as:R(θ̂(𝐲)):=𝔼_𝐱,ϵ[( ⟨θ̂(𝐲), ϕ(𝐱) ⟩ - f^*(𝐱))^2] = 𝔼_𝐱,ϵ[⟨θ̂(𝐲) - θ^*, ϕ(𝐱) ⟩^2] + ‖P^⊥f^*‖_L^2_μ(𝒳)^2.Some authors equivalently analyze the risk, namely the expected error w.r.t. the noisy labels, which is equal to σ_ϵ^2 + R(θ̂(𝐲)). By linearity, the predictor can be decomposed as θ̂(𝐲)=θ̂(ϕ(X)θ^*) + θ̂(ϵ), where ϵ∈ℝ^n is the noise on the training set. Using this, the fact that the noise is independent of 𝐱, and the definition of Σ from (<ref>), the excess risk from (<ref>) can be decomposed in terms of a bias, a variance, and an approximation error as:R(θ̂(𝐲)) = B + V + ‖P^⊥f^*‖_L^2_μ(𝒳)^2,  B:=‖θ̂(ϕ(X)θ^*)-θ^*‖_Σ^2,  V:=𝔼_ϵ[‖θ̂(ϵ)‖_Σ^2],where ‖𝐯‖_Σ=√(𝐯^⊤Σ𝐯).§.§ Issues With Past WorksThere is a vast literature on KRR and linear regression, with many interesting results under various assumptions and settings.
However, perhaps surprisingly, there does not appear to be a unified theory that can provide upper bounds for the excess risk of kernel regression for common kernels and for any amount of regularization, noise, any input dimension, and any number of samples. We now detail a few aspects of how current bounds are insufficient.* Assumptions That Do Not Hold:Many works rely on assumptions that are common or reasonable for analyzing linear regression. However, as we argue below, they are generally inapplicable for kernel regression. These assumptions include that the features ϕ_i() are Gaussians <cit.>, the eigenfunctions ψ_i() (sometimes called covariates) are sub-Gaussian, i.i.d finite dimensional and/or have mean 0 <cit.> or various nonrigorous assumptions common in the statistical physics literature <cit.>. Unfortunately, none of these assumptions hold for common kernels, making such works incapable of providing rigorous results in common settings. As a simple example, suppose our inputs are one-dimensional standard Gaussians x∼(0,1) and let K(x,y)=exp(-γ(x-y)^2) be a Gaussian (RBF) kernel. Such kernels have known Mercer decompositions <cit.> with eigenfunctions ψ_i given by Hermite polynomials. We show in appendix:rbf that if we pick for simplicity γ=3/8, then for any p ≥ 3, the moments of ψ_i diverge as∀ p ≥ 3,    ([ψ_i(x)^p])^1/p≥Ω_i(exp(p-2/4· i)) i→∞⟶∞. Thus, for the classical RBF kernel with Gaussian inputs, not only is ψ() not sub-Gaussian, but all moments ≥ 3 diverge. Another simple example is given with inputs distributed uniformly on the unit sphere ^d-1, and dot product kernels such as RBF, Laplace and NTK. Under this setting, ψ_i() are given by spherical harmonics, for which even in the case of d=3 the third moments diverge as i→∞ <cit.>. Additionally, for dot product kernels, ψ_i are definitely not i.i.d across i, ψ_1 is generally constant and not mean 0, and p may be ∞ (see appendix:dot-product for more details.) Furthermore, these assumptions are not only unrealistic but also lead to inaccurate predictions. Specifically, they induce concentration inequalities (e.g bounding the eigenvalues of the empirical covariance matrix) which are tighter than one can typically expect, resulting in risk bounds that may be over-optimistic (see fig:low_dimensional). By contrast, we work under very mild and realistic assumptions, and we do not know of any interesting kernel for which our analysis is not applicable.* Limitation to a Specific Setting:The literature seems to be split into several categories, with different works focusing on incompatible settings. These include:* "High-Dimensional" vs. "Fixed-Dimensional": Many works assume that the input dimension d and the number of samples n both tend towards infinity at a fixed ratio n=d^τ for some τ > 0 <cit.>. By contrast, other lines of work assume a fixed d and n→∞ <cit.>. The techniques and assumptions used by these two lines of work are inherently different, and make the results from the high-dimensional works inapplicable for fixed d and vice versa. For example, high-dimensional works typically rely on tools from random matrix theory, which require d and n to be tied and are inapplicable for a fixed d. By contrast, low-dimensional works have bounds that depend on the properties of the fixed RKHS, and often assume a fixed polynomial decay for the eigenvalues λ_i. 
This not only excludes kernels with an exponential decay such as RBF <cit.>, but is also problematic, for example, for analyzing the NTK with high-dimensional inputs, since the polynomial decay only begins when the eigenvalue index is i≫poly(d) <cit.>. By contrast, we obtain bounds that are relevant for any d and n, regardless of the ratio between them, and that in particular capture interesting phenomena in both of these regimes.

* Regularized vs. Unregularized: Several works are limited to either the regularized case <cit.> or the unregularized case (a.k.a. min-norm interpolation) <cit.>. This distinction is of course unwanted, and our results provide bounds that can handle both and make the role of the regularization explicit.

* Noisy vs. Noiseless: <cit.> noted a discrepancy between rates obtained in a noisy setting (when σ_ϵ>0) <cit.> vs. a noiseless setting (when σ_ϵ=0) <cit.>. Furthermore, quantifying the effect of the noise is important, since even when σ_ϵ>0 one may still obtain a small excess risk if the noise is small. Recent works in the fixed-dimensional setting still only manage to provide upper bounds in the noiseless case <cit.>. Our analysis handles both cases, separating the bias and the variance and upper bounding each of them.

There are also prior works that bound the eigenvalues of kernel matrices similarly to what we do here. <cit.> provide generic bounds; however, they are not sufficiently strong for many applications and, in particular, often do not yield nontrivial bounds for the smallest eigenvalue of the kernel matrix. As we shall see, this will be crucial for our analysis. <cit.> provide lower bounds for the smallest eigenvalue when the input dimension is linear in the number of samples and tends towards infinity. For fully-connected NTKs, <cit.> provide bounds for two-layer networks, and <cit.> provide bounds for deep networks with large input dimensions. <cit.> gives bounds for radial kernels such as RBF.

§.§ Additional Notations and Definitions

We use the subscripts ≤k and >k to denote the first 1,…,k and the remaining k+1,k+2,… coordinates of a vector, respectively. So, for example, ϕ_≤k(X) is an n×k matrix. Analogously, we let 𝐊_≤k := ϕ_≤k(X)ϕ_≤k(X)^⊤ and 𝐊_>k := ϕ_>k(X)ϕ_>k(X)^⊤. For an operator T, we use μ_i(T) to denote its i'th largest eigenvalue (where we allow repeated eigenvalues, i.e. μ_i(T)=μ_j(T) is allowed). We use this notation to avoid confusion with the eigenvalues λ_i of Σ. Unless stated otherwise, ‖·‖ is the standard ℓ² norm for vectors and the operator norm for operators. We use the standard big-O notation, with O(·), Θ(·) and Ω(·) hiding absolute constants that do not depend on problem parameters, and Õ(·) and Ω̃(·) hiding absolute constants and additional logarithmic factors. We may make the problem parameters explicit, e.g. O_{n,d} to mean up to constants that do not depend on n or d.

As in <cit.>, for any k∈ℕ, we define two highly related notions of the effective rank of Σ_>k as: r_k := r_k(Σ) := tr(Σ_>k)/‖Σ_>k‖,  R_k := R_k(Σ) := tr(Σ_>k)²/tr(Σ_>k²). Here r_k is the common definition of effective rank, and R_k is related to r_k via r_k ≤ R_k ≤ r_k² <cit.>[Lemma 5].

Typically, one must assume something about ψ(𝐱) to obtain the various concentration inequalities guaranteeing that the kernel matrix and the empirical covariance matrix behave as they are "supposed to". Perhaps the most common assumption in previous works is that ψ(𝐱) is sub-Gaussian, requiring the moments of ψ_i(𝐱) to be sufficiently well-behaved for every i.
Unfortunately, as discussed earlier, this does not hold for many common kernels, even when the input distribution is "nice". In order to overcome this issue, we present a framework for analyzing kernels under only a mild heavy-tailed condition, which can be shown to hold for many common kernels. In particular, we want the quantities concerning the features to be related to their expected values by a multiplicative constant. By the orthonormality of ψ, for any k∈ℕ one has 𝔼[‖ψ_≤k(𝐱)‖²]=k, 𝔼[‖ϕ_>k(𝐱)‖²]=tr(Σ_>k) and 𝔼[‖Σ_>k^{1/2}ϕ_>k(𝐱)‖²]=tr(Σ_>k²). We quantify the distance of these quantities from their expected values by the following definitions. Given k∈ℕ, let β_k ≥ α_k ≥ 0 be defined as follows:

α_k := inf_𝐱 ‖ϕ_>k(𝐱)‖²/tr(Σ_>k),

β_k := sup_𝐱 max{ ‖ψ_≤k(𝐱)‖²/k, ‖ϕ_>k(𝐱)‖²/tr(Σ_>k), ‖Σ_>k^{1/2}ϕ_>k(𝐱)‖²/tr(Σ_>k²) },

where the sup and inf are over a.s. any 𝐱. For each term in these definitions, the denominator is the expected value of the numerator, so α_k and β_k quantify how much the features behave as they are "supposed to". Since inf ≤ 𝔼 ≤ sup, one always has 0 ≤ α_k ≤ 1 ≤ β_k. Upper bounding β_k is often easy, and common examples of kernels with β_k=O_k(1) include dot-product kernels such as NTK and polynomial kernels, shift-invariant kernels, random features, and kernels with uniformly bounded eigenfunctions ψ_i. α_k can also be lower bounded as Ω_k(1) for many kernels (e.g. dot-product kernels); however, a lower bound on α_k may sometimes be more difficult to obtain, and as such, many of our bounds will not require any control of α_k. Nevertheless, when α_k>0, stronger bounds will be available in some cases. We defer a more complete discussion of these definitions, their relation to common kernels, and the claims made in this paragraph to appendix:example_kernels. Overall, for sufficiently "nice" kernels, one should think of α_k and β_k as generally being Θ_k(1). For the bounds in this paper, we will not need to control α_k and β_k for every value of k; rather, k can be chosen arbitrarily. def:eigen_lower and def:eigen_upper are stated for a.s. any 𝐱. However, one can weaken the definition of α_k to hold only over the training set, so that w.p at least 1-δ_k, min_{𝐱∈{𝐱_1,…,𝐱_n}} ‖ϕ_>k(𝐱)‖²/tr(Σ_>k) ≥ α_k. In such a case, all bounds that depend on α_k still hold with probability 1-δ_k.

In some cases, we will need to make the control of β_k explicit via the following regularity assumption: either the feature dimension p is finite, or there exists some sequence of natural numbers (k_i)_{i=1}^∞ ⊆ ℕ with k_i ⟶_{i→∞} ∞ such that β_{k_i} tr(Σ_{>k_i}) ⟶_{i→∞} 0. Because Mercer kernels are trace class, one always has tr(Σ_{>k_i}) ⟶_{i→∞} 0. As such, assumption:good_beta simply states that for infinitely many choices of k∈ℕ, β_k does not increase too quickly. This is of course satisfied by the previous examples of kernels with β_k=O_k(1).
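When the eigenfunctions are known explicitly, β_k can be checked directly. As a hedged toy illustration (ours, not from the paper): take ψ_i(x)=√2·cos(ix) under the uniform distribution on [0,2π), which are orthonormal and uniformly bounded by √2, so the ‖ψ_≤k(x)‖²/k term in β_k is at most 2:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 50, 100_000
x = rng.uniform(0.0, 2 * np.pi, size=m)

# psi_i(x) = sqrt(2) cos(i x): orthonormal in L^2 of Uniform[0, 2pi), |psi_i| <= sqrt(2)
Psi = np.sqrt(2.0) * np.cos(np.outer(np.arange(1, k + 1), x))   # shape (k, m)

ratio = (Psi ** 2).sum(axis=0) / k    # ||psi_<=k(x)||^2 / k, which has expectation 1
print(ratio.mean(), ratio.max())      # mean ~= 1, max <= 2: consistent with beta_k = O_k(1)
```

§ EIGENVALUES OF KERNEL MATRICES

Since the KRR solution can be written as in (<ref>), understanding it requires understanding the structure of the empirical kernel matrix 𝐊. In particular, we will need to provide tight bounds on its eigenvalues. For a fixed k∈ℕ, it is known that μ_k(𝐊/n) should tend to λ_k as n→∞. In fact, there are bounds of the form |μ_k(𝐊/n) - λ_k| = O(tr(Σ)/√n) <cit.>. Unfortunately, these bounds are the same for all 1≤k≤n, and since usually λ_k=o(1/k), for most of the eigenvalues of 𝐊/n the O(tr(Σ)/√n) approximation error is much larger than the eigenvalues themselves, leading to the very weak bound 0≤μ_k(𝐊/n)≤O(tr(Σ)/√n). This is insufficient for multiple reasons. First, the expected decay of eigenvalues in the kernel matrix is not captured.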
Second, tighter lower bounds are often necessary to ensure the kernel matrix is positive definite and well-conditioned. Control of the smallest eigenvalue is a common working assumption in the NTK literature <cit.> and determines the convergence rate of gradient descent with the corresponding neural network <cit.>. We address these issues by providing relative perturbation bounds. The general approach is to decompose the kernel matrix as 𝐊 = 𝐊_≤k + 𝐊_>k (for k∈ℕ), where the eigenvalues of the "low-dimensional" part 𝐊_≤k should concentrate well, and the "high-dimensional" part should satisfy 𝐊_>k/n ≈ γ̃I for some γ̃>0.

theoremkereigenvalues Suppose assumption:good_beta holds, and that the eigenvalues of Σ are given in non-increasing order λ_1≥λ_2≥…. There exist some absolute constants c,C,c_1,c_2>0 s.t for any k≤k'∈[n] and δ>0, it holds w.p at least 1-δ-(4r_k/k⁴)exp(-(c/β_k)(n/r_k))-2exp(-(c/β_k)max(n/k, log(k))) that:

μ_k(𝐊/n) ≤ c_2 β_k ((1+k·log(k)/n)λ_k + log(k+1)·tr(Σ_>k)/n),

and

μ_k(𝐊/n) ≥ c_1 𝕀_{k,n} λ_k + α_k (1-(1/δ)√(n²/R_k')) tr(Σ_>k')/n,

where 𝕀_{k,n}=1 if C·β_k·k·log(k) ≤ n and 𝕀_{k,n}=0 otherwise.

Informally, the theorem shows that one should think of the kernel matrix as 𝐊/n ≈ 𝐊_≤k/n + γ̃I, where γ̃>0 is some value which is larger the "flatter" the eigenvalue decay of Σ is, and μ_k(𝐊_≤k/n) ≈ λ_k. More specifically, n samples suffice for approximating the eigenvalues of the top Θ_n(n/log(n)) features. For the largest eigenvalues of the kernel matrix, tr(Σ_>k)/n should be small, and thus μ_k(𝐊/n) ≈ λ_k. By contrast, for the smaller eigenvalues of the kernel matrix, where k = ω_n(n/log(n)), one instead has to turn towards the self-regularization induced by the >k features. If the eigenvalues decay sufficiently slowly, one should be able to pick k' so that R_k' > n²/δ² and tr(Σ_>k')/n ≈ tr(Σ_>k)/n. This implies that the smaller eigenvalues of the kernel matrix can be bounded as μ_k(𝐊/n) ≳ tr(Σ_>k)/n.

As an example, suppose λ_i = Θ(1/(i·log^{1+a}(i))) for some a>0 and α_k, β_k = Θ(1) (a condition satisfied by many common kernels, see appendix:example_kernels). Then, taking k':=k'(n):=n², one can easily calculate that R_k' ≥ Ω(n²log(n)) and tr(Σ_>k')/n = Θ(1/(n·log^a(k'))) = Θ(1/(n·log^a(n))). As a result, letting γ̃_n := 1/(n·log^a(n)), thm:ker_eigenvalues implies that for any k∈[n] one has μ_k(𝐊/n) ≥ Ω(𝕀_{k,n}λ_k + γ̃_n). In particular, the smallest eigenvalue can be lower bounded as μ_n(𝐊/n) ≥ Ω(1/(n·log^a(n))) ≫ λ_n. This result is at first surprising, as the classical intuition arising from the works discussed earlier, which bound |μ_k(𝐊/n) - λ_k|, would suggest that μ_n(𝐊/n) ≈ λ_n. One can analogously obtain a matching upper bound up to a log(k) factor.

The parameter γ̃ in the above example plays a role in KRR identical to that of the actual regularization term γ. As such, the kernel actually provides its own regularization, arising from the high dimensionality of the features and the flatness of the eigenvalues. We call this self-induced regularization, and it has two significant implications. First, it can be used to derive good bounds on the smallest eigenvalue of a kernel matrix, which, as already mentioned, is critical for many applications and will be used extensively to derive new KRR bounds in the following sections. Second, it can (quite surprisingly) cause the eigenvalues of the kernel matrix to decay at a significantly different rate than λ_k. In particular, the spectrum of 𝐊/n concentrates around λ_k + γ̃ for all k∈[n].
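This effect is easy to reproduce numerically. In the sketch below (ours; the Rademacher eigenfunctions, which conveniently give β_k=1, and all sizes are assumptions made for the demonstration), the smallest eigenvalue of 𝐊/n tracks the tail trace rather than λ_n:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, a = 300, 20_000, 0.5
idx = np.arange(1, p + 1)
lam = 1.0 / (idx * np.log(idx + 1.0) ** (1 + a))   # lambda_i ~ 1/(i log^{1+a} i)

# Rademacher eigenfunctions: orthonormal in expectation, |psi_i(x)| = 1 (so beta_k = 1)
Psi = rng.choice([-1.0, 1.0], size=(n, p))
Phi = Psi * np.sqrt(lam)                           # rows are the features phi(x_j)
mu = np.sort(np.linalg.eigvalsh(Phi @ Phi.T / n))[::-1]

k = n // 10
gamma_tilde = lam[k:].sum() / n                    # self-induced regularization
print("mu_n(K/n)        :", mu[-1])
print("lambda_n         :", lam[n - 1])            # much smaller than mu_n(K/n)
print("tr(Sigma_>k) / n :", gamma_tilde)           # the scale mu_n(K/n) actually follows
```

§ EXCESS RISK OF KERNEL REGRESSION

We now return to bounding the bias and variance of KRR as given by (<ref>).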
The strategy will be to pick some k≤n and treat the ≤k and >k components separately. By the previous section, we expect that 𝐊_>k/n ≈ γ̃I, and this will serve as a regularization term for KRR. We quantify this by what we call the concentration coefficient

ρ_{k,n} := (‖Σ_>k‖ + μ_1(𝐊_>k/n) + γ) / (μ_n(𝐊_>k/n) + γ).

Because μ_1(𝐊_>k/n) = ‖Σ̂_>k‖, where Σ̂_>k is the empirical covariance matrix and 𝔼[Σ̂_>k]=Σ_>k, one should expect any upper bound on μ_1(𝐊_>k/n) to be larger than ‖Σ_>k‖. As a result, the ‖Σ_>k‖ term practically affects ρ_{k,n} by at most a factor of 2; we only include this term for technical simplicity within the proofs. Now, if for some k one shows that μ_1(𝐊_>k/n) ≈ μ_n(𝐊_>k/n), then the concentration coefficient ρ_{k,n} can be bounded as Θ(1). As we shall soon show, in such a case the bias and variance can be bounded well. Although our theory from the previous section provides a bound for ρ_{k,n}, we make the role of ρ_{k,n} explicit in the bias and variance bounds, because tighter bounds on ρ_{k,n} may be available when there is additional information on the structure of the kernel.

theoremboundgen Let k∈ℕ and let ρ_{k,n} be as defined in (<ref>). There exist some absolute constants c,c',C_1,C_2>0 s.t if c·β_k·k·log(k) ≤ n, then for every δ>0, it holds w.p at least 1-δ-16exp(-(c'/β_k²)(n/k)) that both the variance and the bias can be upper bounded as:

V ≤ C_1 ρ_{k,n}² σ_ϵ² (k/n + min(r_k(Σ²)/n, n/(α_k² R_k(Σ)))),

B ≤ C_2 ρ_{k,n}³ ((1/δ)‖θ^*_>k‖²_{Σ_>k} + ‖θ^*_≤k‖²_{Σ_≤k^{-1}} (γ + β_k·tr(Σ_>k)/n)²).

Several comments are in order. First, the optimal choice of k should depend on the concentration coefficient ρ_{k,n} and on the eigenvalues λ_i of the kernel. Given these, one can determine an asymptotically optimal k as a function of n. One would typically want to take k as small as possible while still ensuring ρ_{k,n}≈1. Second, we do not assume here that the eigenvalues λ_i are ordered. This is important because for certain kernels, ordering the eigenvalues is actually quite difficult, for example for NTKs corresponding to popular convolutional architectures <cit.>. This flexibility will be critical for our analysis in the following section involving dot-product kernels. Finally, control of α_k is not required to obtain bounds for the bias and variance; α_k is present only in (<ref>) via the term min(r_k(Σ²)/n, n/(α_k² R_k(Σ))). Under a slight abuse of notation, even when α_k=0, this term is at most r_k(Σ²)/n. As we shall later show in thm:fixed_dimensional, under sufficient regularization our bounds on the excess risk will not depend on α_k.

We also note that in the simple case of finite-dimensional linear regression (where ϕ(𝐱)=𝐱) with zero mean and sub-Gaussian ψ(𝐱)=Σ^{-1/2}𝐱, our bounds provide a significant generalization of those of <cit.>[Theorem 1]. Specifically, they derived similar bounds for a specific k which is hard to determine, under the explicit assumption that the condition number of 𝐊_>k/n + γI (similar to ρ_{k,n}) is bounded by some constant. Their results only hold for mean-zero, sub-Gaussian, and finite-dimensional ψ_i, and hence are not applicable to many common kernels. The explicit dependence on ρ_{k,n}, as well as the ability to choose k freely, will play an important role in the proofs of thm:highdim and thm:min_norm_poly in the next sections. Nevertheless, when all of their assumptions are satisfied, including that the condition number of 𝐊_>k/n + γI is constant, our bound precisely recovers theirs.
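To build intuition for how the two bounds behave in k, they are easy to evaluate numerically once a spectrum is fixed. The sketch below (ours) transcribes the right-hand sides of thm:bound_gen up to the absolute constants C_1 and C_2; every input value, including the choice ρ=2, is an illustrative assumption:

```python
import numpy as np

def bound_gen(lam, theta, k, n, gamma, sigma_eps, rho, alpha_k=1.0, beta_k=1.0, delta=0.1):
    """V and B bounds of thm:bound_gen, up to the absolute constants C_1 and C_2.

    lam: eigenvalues lambda_i in non-increasing order; theta: target coefficients theta*_i.
    """
    head, tail = lam[:k], lam[k:]
    tr_tail, tr_tail2 = tail.sum(), (tail ** 2).sum()
    r_k_sq = tr_tail2 / tail[0] ** 2             # r_k(Sigma^2)
    R_k = tr_tail ** 2 / tr_tail2                # R_k(Sigma)
    V = rho ** 2 * sigma_eps ** 2 * (k / n + min(r_k_sq / n, n / (alpha_k ** 2 * R_k)))
    B = rho ** 3 * ((theta[k:] ** 2 * tail).sum() / delta
                    + (theta[:k] ** 2 / head).sum() * (gamma + beta_k * tr_tail / n) ** 2)
    return V, B

lam = 1.0 / np.arange(1, 10_001) ** 1.5          # polynomial decay with a = 0.5
theta = 1.0 / np.arange(1, 10_001)               # theta*_i = i^{-r} with r = 1
for k in (20, 100, 500):
    print(k, bound_gen(lam, theta, k, n=2_000, gamma=0.0, sigma_eps=0.1, rho=2.0))
```

The loop shows how the two bounds move as k varies, matching the comment above that one wants the smallest k for which ρ_{k,n} stays Θ(1).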
Because <cit.> showed that their bounds are sharp up to a multiplicative constant, we also obtain that, under suitable conditions, the upper bounds in thm:bound_gen are sharp as well.

§ APPLICATIONS

§.§ Benign Overfitting in High Dimensions

In order to capture high-dimensional phenomena that likely play a major role in the success of neural networks, it is common to analyze KRR in a high-dimensional setting, specifically where n and d both tend towards infinity with the ratio n/d^τ=Θ(1) fixed for some τ>0. In this section, we consider an important class of kernels known as dot-product kernels. A kernel K is called a dot-product kernel if K(𝐱,𝐱')=h(𝐱^⊤𝐱') for some function h. One typically has to impose restrictions on h for K to be a valid kernel, and as such, we follow the standard assumption that h has a Taylor expansion of the form h(t)=∑_{i=0}^∞ a_i t^i with a_i≥0 <cit.>. For now, we restrict ourselves to 𝕊^{d-1} (and thus h:[-1,1]→ℝ) under the uniform distribution. Examples of dot-product kernels on 𝕊^{d-1} include NTKs and GPKs of fully-connected networks and fully-connected ResNets, Laplace kernels, Gaussian (RBF) kernels, and polynomial kernels <cit.>. For any d≥3, dot-product kernels with inputs uniformly distributed on 𝕊^{d-1} have known Mercer decompositions given by

K(𝐱,𝐱') = ∑_{ℓ=0}^∞ (σ̂_ℓ/N(d,ℓ)) ∑_{m=1}^{N(d,ℓ)} Y_{ℓ,m}(𝐱) Y_{ℓ,m}(𝐱'),

where the eigenfunctions Y_{ℓ,m} are the m'th spherical harmonics of degree (or frequency) ℓ, N(d,ℓ) = ((2ℓ+d-2)/ℓ)·binom(ℓ+d-3, d-2) is the number of harmonics of each degree, and σ_ℓ := σ̂_ℓ/N(d,ℓ) are the eigenvalues <cit.>. Each spherical harmonic can be defined via restrictions of homogeneous polynomials to the unit sphere, with the degree (or frequency) of the spherical harmonic corresponding to the degree of said polynomials. We defer background on dot-product kernels and more involved explanations to appendix:dot-product. We now show that in the high-dimensional regime, any dot-product kernel is capable of benign overfitting, i.e. achieving an excess risk that approaches zero as n→∞, without regularization and despite the presence of noise.

theoremhighdim Suppose that as n,d→∞, d^τ/n=Θ_{n,d}(1) for some τ∈(0,∞)∖ℕ. Let μ be the uniform distribution over 𝕊^{d-1}, f^*∈L²_μ(𝕊^{d-1}) a target function, and K a dot-product kernel given by (<ref>) s.t σ̂_⌊τ⌋>0 and ∃ℓ>⌊2τ⌋ with σ̂_ℓ>0 (e.g. NTK, Laplace, or RBF). Then for the min-norm solution defined in (<ref>) (obtained when γ→0), for any δ>0 it holds w.p at least 1-δ-o_d(1/d) that

V ≤ σ_ϵ² · Õ_{n,d}(1/d^{τ-⌊τ⌋} + 1/d^{⌊τ⌋+1-τ}),

B ≤ (1/δ)·O_{n,d}(‖θ^*_{>N_d}‖²_{Σ_{>N_d}}) + ‖θ^*_{≤N_d}‖_∞² (max_{ℓ≤⌊τ⌋, σ̂_ℓ≠0} 1/σ̂_ℓ) · Õ_{n,d}(1/d^{2(τ-⌊τ⌋)}),

where N_d=Θ_{n,d}(d^{⌊τ⌋}) denotes the number of spherical harmonics of degree at most ⌊τ⌋ with non-zero eigenvalues, and O_{n,d}(‖θ^*_{>N_d}‖²_{Σ_{>N_d}}) ≤ O_{n,d}(‖θ^*_{>N_d}‖_∞²).

Simply put, the variance decays to 0, and the bias approaches O(‖θ^*_{>N_d}‖_∞²) for N_d≈d^{⌊τ⌋}. More specifically, the rate of decay of the variance depends on τ, with the fastest decay occurring when τ = z+1/2 for some z∈ℕ, and the slowest when τ≈z. This highlights the multiple-descent behavior of kernel ridge regression discussed in <cit.>. For the bias, ‖θ^*_{>N_d}‖²_{Σ_{>N_d}} is the squared L²_μ norm of the projection of f^* onto the spherical harmonics of degree at least ⌈τ⌉, and ‖θ^*_{>N_d}‖_∞² is the maximal projection. The max_{ℓ≤⌊τ⌋, σ̂_ℓ≠0} 1/σ̂_ℓ term will typically be O_{n,d}(1), because oftentimes σ̂_ℓ = Ω_{n,d}(1). For example, for the NTK one has the even stronger statement max_{ℓ≤⌊τ⌋, σ̂_ℓ≠0} 1/σ̂_ℓ = O_{n,d}(1/d) <cit.>[Theorem 4.3].
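All of the rates above are governed by the harmonic multiplicities N(d,ℓ), which are simple to compute directly. A short sketch (ours; the helper name and the values d=50, τ=1.5 are purely illustrative):

```python
from math import comb

def n_harmonics(d, ell):
    # N(d, ell) = ((2*ell + d - 2) / ell) * C(ell + d - 3, d - 2), with N(d, 0) = 1
    if ell == 0:
        return 1
    return (2 * ell + d - 2) * comb(ell + d - 3, d - 2) // ell

d, tau = 50, 1.5
n = round(d ** tau)                                        # n = Theta(d^tau)
N_d = sum(n_harmonics(d, l) for l in range(int(tau) + 1))
print(N_d, n, N_d / n)   # N_d = Theta(d^floor(tau)), so N_d/n = Theta(d^{floor(tau)-tau}) -> 0
```

Taking k=N_d in thm:bound_gen then makes the k/n term in the variance vanish as d grows, which is exactly the route taken in the proof.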
Thus, whether KRR achieves benign overfitting or not depends on the spectral decomposition of the target function. If θ^* consists only of frequencies of at most ⌊τ⌋, then ‖θ^*_{>N_d}‖_∞²=0 and thus both the bias and variance tend towards zero, implying benign overfitting. The variance for high-dimensional regression is demonstrated in fig:multiple_descent for the NTK and the polynomial kernel.

The key to this result is that the repeated eigenvalues lead to large effective ranks r_k and R_k, allowing one to take k=N_d (where N_d/n=1/d^{τ-⌊τ⌋}) with concentration coefficient ρ_{k,n}=Θ(1). We highlight that there is nothing specific to dot-product kernels here: using thm:bound_gen, a similar result can be derived for any kernel with ρ_{k,n}=Θ(1) for k≪n. The assumption that σ̂_⌊τ⌋>0 and ∃ℓ>⌊2τ⌋ with σ̂_ℓ>0 is only made for simplicity, to avoid degeneracies via convoluted examples involving 0 eigenvalues. We make the role of this assumption clear within the proof, as it can easily be modified. For example, one can obtain similar results when the 0 eigenvalues are the odd frequencies, as in an NTK without bias <cit.>.

Our results can naturally be extended to other domains and distributions. <cit.>[Corollary D.2, Lemma D.4] show that the eigenvalues only change by multiplicative constants under suitable changes of measure or diffeomorphisms ("smooth" changes of domain). One can also exploit the specific structure of certain kernels. For example, NTK kernels and homogeneous polynomial kernels are zonal, meaning that K(𝐱,𝐱')=‖𝐱‖‖𝐱'‖·K(𝐱/‖𝐱‖, 𝐱'/‖𝐱'‖), so results from 𝕊^{d-1} easily generalize to ℝ^d.

Perhaps the works that provide results most similar to thm:highdim are the excellent papers of <cit.>. By comparison, <cit.>[Corollary 2] do not provide convergence rates, but rather show that the excess risk approaches ‖θ^*_{>N_d}‖²_{Σ_{>N_d}} + o_d(1) as n,d→∞. Furthermore, they assumed that the σ̂_ℓ are Θ_d(1), independent of d, a condition which is typically not satisfied, e.g. by the NTK. <cit.>[Theorem 4], when combined with a "spectral gap condition" (which would also require that the σ̂_ℓ are Θ_d(1)), also implies a bound of the form ‖θ^*_{>N_d}‖²_{Σ_{>N_d}} + o_d(1). Without this problematic spectral gap assumption, it is unclear what their bound implies. They also impose other strict assumptions which do not hold on broader domains. For example, they assume that for any 𝐱_i, ‖ϕ_{>N_d}(𝐱_i)‖²/tr(Σ_{>N_d}) = 1 ± o_d(1). For zonal kernels such as the NTK, this will typically not hold unless all inputs have roughly the same norm. By contrast, our mild assumptions imply that the same results hold in ℝ^d, as discussed above. The results of <cit.>[Theorem 3] are limited to target functions in the RKHS, with a bound that is the same for all θ^*. This is critical since the structure of θ^* is precisely what allows us to characterize when benign overfitting occurs. Overall, our results are the first to clearly characterize benign overfitting for common kernels such as the NTK.

§.§ Nearly Tempered Overfitting in Fixed Dimensions

We now shift our attention to the fixed-dimensional regime. We focus on polynomially decaying eigenvalues, encompassing NTKs and GPKs of common fully-connected architectures <cit.>, convolutional and residual architectures <cit.>, as well as the Laplace kernel <cit.>. For such kernels, various works show lower bounds of the form Ω(1) on the excess risk of min-norm interpolation <cit.>.
Recently, <cit.> distinguished between the regime where the risk explodes to ∞ (called catastrophic overfitting) and the regime where the risk remains bounded (called tempered overfitting). The two regimes are significantly different, since when the noise is small, kernel regression can still achieve a low risk despite tempered overfitting. Using our tools, we show that when λ_i ≈ i^{-1-a} for small a>0, such kernels are nearly tempered, meaning that the bias goes to 0 and the variance cannot diverge too quickly.

theoremminnormpoly Let K be a kernel with polynomially decaying eigenvalues λ_i=Θ_{i,n}(i^{-1-a}) for some a>0 and feature dimension p≥n²log⁴(n), and assume that α_k, β_k = Θ_k(1). Then for the min-norm solution defined in (<ref>) (obtained when γ→0), for any δ>0 it holds w.p at least 1-δ-O_n(1/log(n)) that

V ≤ σ_ϵ² Õ_n(n^{2a}).

Moreover, if θ_i^*=O_i(1/i^r) where r>a, then with the same probability it also holds that

B ≤ (1/δ)Õ_n(1/n^{min(2(r-a), 2-a)}).

When a→0, the bound on the variance approaches polylog(n), and the bound on the bias becomes nearly 1/n^{min(2r,2)}. For the popular NTK of a fully-connected network and for the Laplace kernel, λ_i=Θ(i^{-1-1/(d-1)}) <cit.>, indicating that a=1/(d-1). For these kernels, the variance bound becomes Õ(n^{2/(d-1)}). In fact, when d≳log(n), it holds that n^{2/(d-1)}≲polylog(n). So, when the noise is small, one can expect the excess risk to also be relatively small. The condition on the decay of θ^* is fairly mild, as for any realizable f^* (i.e. f^*∈ℋ) it holds that ‖θ^*‖₂<∞, and thus, under the conditions of the theorem, r>1 and B < Õ(1/n^{2-2a}).

As far as we know, this is the first rigorous upper bound on the excess risk of the min-norm interpolator in the fixed-dimensional setting for generic kernels. Previous bounds were either based on a Gaussian features assumption or on non-rigorous analyses <cit.>, and gave O(n^{-min(2r+a, 2(1+a))}) and σ_ϵ²·O(1) bounds for the bias and variance, respectively. In fig:low_dimensional, we provide a simple example of a common kernel that does not appear to adhere to their bounds (a GPK corresponding to a 3-layer fully-connected network with inputs uniform on the unit disk). The difference between our bounds and theirs is not a limitation of our work, but rather is due to their strong Gaussian features assumption, and it can be quantified by the concentration coefficient ρ_{k,n}. Without any special assumptions, we showed that for k≈n/log(n), ρ_{k,n}=O(n^a·polylog(n)). If one is willing to impose stronger assumptions on the features which may not hold in practice (such as Gaussian features), so that ρ_{k,n}=Θ(1), our bias and variance bounds would improve to Õ(n^{-min(2r+a, 2(1+a))}) and σ_ϵ²·Õ(1) respectively, matching their bound up to a polylog factor. When a→0, the difference is of course very small, implying that one obtains nearly tempered overfitting in the fixed-dimensional regime. Unfortunately, common kernels do not have Gaussian features in practice and may suffer from poor concentration in the fixed-d regime. Thus, a polylog factor in the bounds is likely inevitable. This is the reason for the observation in fig:low_dimensional, showing that upper bounds that assume Gaussian features may be over-optimistic for common kernels.

§.§ Regularized Regression

A major benefit of our approach is that we can provide bounds for both the regularized and unregularized cases with the same tools. We can thus derive bounds for the classical setup where the regularization γ is relatively large.

theoremstrongreg Let K be a kernel with polynomially decaying eigenvalues λ_i=Θ_{i,n}(i^{-1-a}) for some a>0, and assume that β_k = O_k(1).
Further, suppose that the regularization parameter satisfies γ = Θ_n(n^{-1-b}) for some b∈(-1,a). Then for any δ>0, it holds w.p at least 1-δ-o_n(1/n) that

V ≤ σ_ϵ² · O_n(1/n^{(a-b)/(1+a)}),

and if θ_i^* = Θ_{i,n}(i^{-r}) for some r∈ℝ s.t ‖Σ^{1/2}θ^*‖₂<∞ (necessary for f^*∈L²_μ(𝒳)), then with the same probability it also holds that

B ≤ (1/δ)·O_n(1/n^{(1+b)·min((2r+a)/(1+a), 2)}),

where the O_n is weakened to Õ_n if r = 1 + a/2. The conditions of thm:fixed_dimensional are very mild and do not require any control of α_k. In particular, the kernels mentioned in the previous subsection all satisfy the assumptions here. Regarding the role of the regularization decay: as b decreases, the regularization is strengthened. One can observe a bias-variance tradeoff, where the variance bound improves with increased regularization and the bias bound worsens. Regardless, one always has that the excess risk tends to 0 as n→∞. The choice of polynomial decay was arbitrary, and bounds for other decays can easily be obtained by modifying the proof.

This result recovers the results of <cit.>, who worked under the heavy Gaussian features assumption, and of <cit.>, who worked under a Hölder continuity assumption on the kernel as well as an assumption relating to what they called an embedding index. <cit.> only provide upper bounds for the optimal γ and do not decompose into bias and variance.

§ IMPLICATIONS FOR NEURAL NETWORKS

Our mild assumptions and general setting allow us to apply these results to a wide range of neural networks. Under suitable initialization and learning rate, gradient descent with sufficiently wide neural networks is equivalent to kernel regression with the NTK <cit.>. Specifically, for a neural network f(𝐱,θ), one can typically bound its distance from its first-order Taylor approximation f^lin(𝐱,θ) at time t of gradient flow as sup_{t≥0} |f(𝐱,θ_t) - f^lin(𝐱,θ_t)| ≤ O(1/√(width)) <cit.>. Furthermore, training f^lin(𝐱,θ) for time t is roughly equivalent to kernel regression with regularization γ = 1/t <cit.>. By combining the two, one can easily bound the difference in generalization error between neural networks trained for time t and kernel regression with the NTK and regularization γ = 1/t.

So by thm:fixed_dimensional, if the eigenvalues of the NTK decay as λ_i=Θ_{i,n}(i^{-1-a}) and the target function satisfies θ_i^* = Θ_{i,n}(i^{-r}), then as the width of the corresponding network tends towards infinity, the bias and variance after training for t:=Θ_n(n^s) steps of gradient flow for some s∈(0, 1+a) approach

V ≤ σ_ϵ² · O_n(1/n^{1-s/(1+a)}),  B ≤ O_n(1/n^{s·min((2r+a)/(1+a), 2)}).

Neural networks of various architectures exhibit polynomially decaying NTK eigenvalues in the fixed-dimensional regime, including fully-connected networks, CNNs, and ResNets <cit.>. For example, for fully-connected networks, a=1/(d-1). Interestingly, skip connections do not affect the asymptotic rate of decay of the NTK eigenvalues <cit.>, and as a result, ResNets obtain the same rates in (<ref>) as their non-residual counterparts (i.e. if one removes the skip connections). Similarly, the applications of thm:min_norm_poly and thm:highdim to networks that are instead trained to completion (i.e. in the t→∞ limit) are immediate.
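As a quick numerical illustration of the preceding reductions (ours; an RBF kernel stands in for the fixed NTK of a very wide network, and all sizes are assumptions), kernel gradient flow for time t is compared against ridge regression with γ=1/t:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 150
X = rng.standard_normal((n, 3))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(n)

# RBF kernel as a stand-in for the (fixed) NTK of a very wide network
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
w, V = np.linalg.eigh(K)
w = np.clip(w, 1e-12, None)                  # guard against round-off negatives

for t in (1e1, 1e3, 1e5):
    # gradient flow on the linearized model: alpha_t = K^{-1}(I - exp(-t K / n)) y
    alpha_flow = V @ (((1.0 - np.exp(-t * w / n)) / w) * (V.T @ y))
    # ridge regression with gamma = 1/t, i.e. (K + (n/t) I)^{-1} y
    alpha_ridge = np.linalg.solve(K + (n / t) * np.eye(n), y)
    gap = np.linalg.norm(alpha_flow - alpha_ridge) / np.linalg.norm(alpha_ridge)
    print(t, gap)   # small but nonzero: the correspondence holds only up to constants
# As t -> infinity, both estimators approach the min-norm interpolator K^{-1} y.
```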
In particular, one has nearly tempered overfitting in the fixed dimensional regime, and in the high dimensional regime of d^τ/n=Θ(1), if f^* consists of frequencies of at most ⌈τ⌉, then overfitting is benign.§ ACKNOWLEDGMENTSThis research is supported in part by European Research Council (ERC) grant 754705, by the Israeli Council for Higher Education (CHE) via the Weizmann Data Science Research Center and by research grants from the Estate of Tully and Michele Plesser and the Anita James Rosen Foundation.§ MORE NOTATIONSWe introduce a few more notations for the appendix, which are not needed in the main text. We let A_k:=_+n I and A:=+n I. Additionally, for any k≤ k'∈ we denote by k:k' the k,…, k' indices, so that, for example, ϕ_k:k'(X)=(ϕ_k(X), …, ϕ_k'(X))∈^n× (k'-k+1). § CONCENTRATION BOUNDS Let k∈[n], then each of the following holds w.p at least 1-2exp(-1/2β_k^2n): * 1/2n∑_iλ_i^2 ≤(ϕ_(X)Σ_ϕ_(X)^⊤) ≤3/2n∑_iλ_i^2 * 1/2kn ≤(ψ_(X)ψ_(X)^⊤) ≤3/2kn. For (1), first observe that(ϕ_(X)Σ_ϕ_(X)^⊤) = ∑_j=1^n [ϕ_(X)Σ_ϕ_(X)^⊤]_jj = ∑_j=1^n ϕ_(_j)^⊤Σ_ϕ_(_j)= ∑_j=1^n∑_iλ_i^2 ψ_i(_j)^2. We will now show that the conditions for Hoeffding's inequality hold. Let v_j=∑_iλ_i^2 ψ_i(_j)^2 and M:=β_k ∑_iλ_i^2. By the definition of β_k (<ref>), we have that for every j, 0≤ v_j≤ M. Furthermore, [∑_j=1^n v_j]=n∑_iλ_i^2 and so Hoeffding's inequality yields:ℙ((ϕ_(X)Σ_ϕ_(X)^⊤) - n∑_iλ_i^2≥ t ) ≤ 2exp(-2t^2/nM^2).Substituting t=n/2∑_iλ_i^2, it holds that w.p at least 1-2exp(-1/2β_k^2n), 1/2n∑_iλ_i^2 ≤(ϕ_(X)Σ_ϕ_(X)^⊤) ≤3/2n∑_iλ_i^2. For (2), the proof is analogous:(ψ_(X)ψ_(X)^⊤) = ∑_j=1^n [ψ_(X)ψ_(X)^⊤]_jj = ∑_j=1^n ψ_(_j)^⊤ψ_(_j)= ∑_j=1^n∑_i=1^kψ_i(_j) ^2 ≤β_kknNow letting M'=β_kk using Hoeffding as before yieldsℙ((ψ_(X)ψ_(X)^⊤) - kn≥ t' ) ≤ 2exp(-2t'^2/nM'^2).So picking t'=nk/2 we get that w.p at least 1-2exp(-n·1/2β_k^2)1/2kn ≤(ψ_(X)ψ_(X)^⊤) ≤3/2kn. For any k∈ [n] there exist some absolute constants c',c_2>0, s.t the following hold simultaneously w.p at least 1-2exp(-c'/β_kmax(n/k,log(k))) *μ_k(ψ_(X)^⊤ψ_(X)) ≥max(√(n) - √(1/2max(n, β_k(1+1/c')klog(k))) ,  0)^2, *μ_1(ψ_(X)^⊤ψ_(X)) ≤ c_2 max(n, β_kklog(k)).Moreover, there exists some c>0 s.t if cβ_kklog(k)≤ n then w.p at least 1-2exp(-c'/β_kn/k) and some absolute constant c_1>0, it holds thatc_1n ≤μ_k(ψ_(X)^⊤ψ_(X)) ≤μ_1(ψ_(X)^⊤ψ_(X)) ≤ c_2 n.We will bound the singular values σ_i(ψ_(X)) sinceσ_i(ψ_(X))^2 = μ_i(ψ_(X)^⊤ψ_(X)).ψ_(X) is an n× k matrix, whose rows ψ_(_j) are independent isotropic random vectors in ^k (where the randomness is over the choice of _j). Furthermore, by the definition of β_k (<ref>), for a.s every _i, ψ_(_i)≤√(β_kk). As such, from <cit.>[Theorem 5.41], there is some absolute constant c'>0 s.t for every t≥ 0, one has that with probability at least 1-2k exp(-2c't^2), √(n) - t√(β_kk)≤σ_k(ψ_(X)) ≤σ_1(ψ_(X)) ≤√(n) + t√(β_kk). Now for t = √(1/2β_kmax(n/k,log(k))+log(k)/2c') we get that with probability at least 1-2exp(-c'/β_kmax(n/k,log(k))) it holds thatσ_1(ψ_(X))^2 ≤ (√(n) + √(1/2max(n, klog(k)) + klog(k)β_k/2c'))^2≤ (√(n) + 1/√(2)√(n + (1 + β_k/c')klog(k)))^2≤3n + (1 + β_k/c')klog(k),where the last equality followed from the fact that (a+b)^2≤2a^2+2b^2 for any a,b∈. Because, β_k ≥ 1 (<ref>), we obtain σ_1(ψ_(X))^2 ≤ c_2 max(n, β_kklog(k)) for a suitable c_2>0, proving point (<ref>). 
For the lower bound, we simultaneously haveσ_k(ψ_(X)) ≥ √(n) - 1/√(2)√(1/2max(n, klog(k)) + klog(k)β_k/2c') ≥ √(n) - √(1/2max(n, β_k(1+1/c')klog(k))),Since the singular values are non-negative, the above impliesσ_k(ψ_(X))^2 ≥max(√(n) - √(1/2max(n, β_k(1+1/c')klog(k))) ,  0)^2.proving point (<ref>).For the moreover part, taking c=(1+1/c'), we now have by assumption that n/k≥ cβ_klog(k) ≥log(k) (where we used the facts that c≥ 1 and β_k≥ 1), the probability that (<ref>) and (<ref>) hold is in fact 1-2exp(-c'/β_kn/k).Furthermore, plugging cβ_kklog(k)≤ n into the lower bound (<ref>) yieldsμ_k(ψ_(X)^⊤ψ_(X)) ≥ max(√(n) - √(1/2max(n, cβ_kklog(k))) ,  0)^2.≥ (√(n) - √(n/2))^2 = (1-1/√(2))^2n. Similarly, since β_kklog(k)≤ n the upper bound (<ref>) becomesμ_1(ψ_(X)^⊤ψ_(X)) ≤ c_2nFor any k∈ [n] and δ>0, it holds w.p at least 1-δ thatϕ_(X)θ_^*^2 ≤1/δ nθ_^*_Σ_^2Let v_j=⟨ϕ_(_j), θ_^*⟩^2 so that ϕ_(X)θ_^*^2 = ∑_j=1^n v_j. Since _j are independent, it holds that v_j are independent random variables with mean:[v_j] =[(∑_i√(λ_i)ψ_i(_j)θ_i^*)^2] =∑_i∑_√(λ_i)√(λ_l)θ_i^*θ_l^* δ_il__j[ψ_i(_j)ψ_l(_j)]=∑_iλ_i (θ_i^*)^2 = θ^*_Σ_^2. So by Markov's inequality:ℙ(∑_j=1^n v_j ≥1/δ nθ_^*_Σ_^2 ) ≤δ.There exists some absolute constants c, c', c_1, c_2>0 s.t for any k∈ with cβ_kklog(k)≤ n, it holds w.p at least 1-8exp(-c'/β_k^2n/k) that all of the following hold simultaneously: * c_1n∑_iλ_i^2 ≤(ϕ_(X)Σ_ϕ_(X)^⊤) ≤ c_2n∑_iλ_i^2 * c_1kn ≤(ψ_(X)ψ_(X)^⊤) ≤ c_2kn * μ_k(ψ_(X)^⊤ψ_(X)) ≥ c_1n * μ_1(ψ_(X)^⊤ψ_(X)) ≤ c_2n By lem:concentration1, points (1) and (2) each hold w.p at least 1-2exp(-1/2β_k^2n) so they both hold w.p at least (1-2exp(-1/2β_k^2n))^2. Furthermore, the "moreover" part of lem:concentration2 states that points (3) and (4) hold simultaneously w.p at least 1-2exp(-c'/β_kn/k).Now the probability for which (1)-(4) all hold simultaneously is at least(1-2exp(-1/2β_k^2n))^2(1-2exp(-c'/β_kn/k))≥1-8exp(-min(1/2β_k^2n, c'/β_kn/k)) ≥ 1-8exp(-min(1/2β_k^2, c'/β_k)n/k) Since β_k≥ 1 (<ref>) replacing c' with min(1/2, c') results in the desired bounds holding w.p at least 1-8exp(-c'/β_k^2n/k).§ BOUNDS ON THE EIGENVALUES OF KERNEL MATRICES - PROOFS OF RESULTS IN SEC:EIGENVALUES §.§ Proof of thm:ker_eigenvalues*From lem:rel_bound_eigenvals, we have thatλ_kμ_k(D_k) + μ_n(1/n_) ≤μ_k(1/n) ≤λ_kμ_1(D_k) + μ_1(1/n_),where D_i is as in the formulation of the lemma. We bound each of the summands in the upper bound separately. From cor:ak_eigen_bound, it holds w.p at least 1- 4 r_k/k^4exp(-c'/β_kn/r_k) that for some absolute constants c',c_2'>0,μ_1(1/n_) ≤ c_2'(λ_k+1 + β_k log(k+1)(Σ_)/n).For the other summand, since D_i=1/nψ_(X)^⊤ψ_(X) lem:concentration2 states that there exists some absolute constants c”, c_2”>0, s.t w.p at least 1-2exp(-c”/β_kmax(n/k,log(k))) λ_kμ_1(D_i) ≤ c_2”1/nmax(n, β_kklog(k)) λ_k≤ c_2”β_k(1+klog(k)/n)λ_k,where in the last inequality we used the fact that β_k ≥ 1. So taking c=max(c',c”), both events hold w.p at least 1-4 r_k/k^4exp(-c/β_kn/r_k)-2exp(-c/β_kmax(n/k,log(k))) and the upper bound from (<ref>) yieldsμ_k(1/n) ≤ c_2β_k((1+klog(k)/n)λ_k + log(k+1)(Σ_)/n),for some suitable absolute constant c_2>0. The "moreover" part of this proof analogously follows from the "moreover" part of lem:concentration2, which states that μ_k(D_k)≥ c_1 if Cβ_kklog(k)≤ n, and from the lower bound of cor:ak_eigen_bound, which holds w.p at least 1-δ.§.§ Lemmas and Alternative Results for Eigenvalue Bounds We now provide an extension of Ostrowski's theorem to non-square matrices. Note that the case of k≤ n is relatively easy. 
However, we also prove the case of k>n.

Let i,k∈ℕ satisfy 1≤i≤min(k,n) and D_k:=ψ_≤k(X)ψ_≤k(X)^⊤/n ∈ ℝ^{n×n}. Suppose that the eigenvalues of Σ are given in non-increasing order λ_1≥λ_2≥…; then λ_{i+k-min(n,k)}·μ_{min(n,k)}(D_k) ≤ μ_i(𝐊_≤k/n) ≤ λ_i·μ_1(D_k).

Let π_1 denote the number of positive eigenvalues of 𝐊_≤k/n (where in particular π_1≤min(n,k)). Because the kernel can be decomposed as 𝐊_≤k=ψ_≤k(X)Σ_≤k ψ_≤k(X)^⊤, it follows from <cit.>[Theorem 1.5] that for 1≤i≤π_1, λ_{i+k-min(n,k)}·μ_{min(n,k)}(D_k) ≤ μ_i(𝐊_≤k/n) ≤ λ_i·μ_1(D_k). It remains to handle the case where π_1 < i (where in particular this means π_1<min(n,k)). By the definition of π_1, there are some orthonormal eigenvectors of 𝐊_≤k, v_{π_1+1},…,v_n, with eigenvalue 0. Since Σ≻0, for each such 0 eigenvector v, 0 = (ψ_≤k(X)^⊤ v)^⊤ Σ_≤k (ψ_≤k(X)^⊤ v) implies ψ_≤k(X)^⊤ v = 0. In particular, D_k has v_{π_1+1},…,v_n as 0 eigenvectors, and since D_k≽0, we obtain that μ_{π_1+1}(D_k),…,μ_n(D_k) = 0. So for i>π_1 we have λ_{i+k-min(n,k)}·μ_{min(n,k)}(D_k) = 0 = μ_i(𝐊_≤k/n) ≤ λ_i·μ_1(D_k).

Let i,k∈ℕ satisfy 1≤i≤n and i≤k, and let D_k:=ψ_≤k(X)ψ_≤k(X)^⊤/n ∈ ℝ^{n×n}. Suppose that the eigenvalues of Σ are given in non-increasing order λ_1≥λ_2≥…; then λ_{i+k-min(n,k)}·μ_{min(n,k)}(D_k) + μ_n(𝐊_>k/n) ≤ μ_i(𝐊/n) ≤ λ_i·μ_1(D_k) + μ_1(ϕ_>k(X)ϕ_>k(X)^⊤/n). In particular, λ_{i+k-min(n,k)}·μ_{min(n,k)}(D_k) ≤ μ_i(𝐊/n) ≤ λ_i·μ_1(D_k) + μ_1(ϕ_>k(X)ϕ_>k(X)^⊤/n).

We can decompose 𝐊 into the sum of two Hermitian matrices as 𝐊 = 𝐊_≤k + 𝐊_>k. By Weyl's theorem <cit.>[Corollary 4.3.15], we can use this decomposition to bound the eigenvalues of 𝐊 as: μ_i(𝐊_≤k) + μ_n(𝐊_>k) ≤ μ_i(𝐊) ≤ μ_i(𝐊_≤k) + μ_1(𝐊_>k). Further, since 𝐊_≤k=ψ_≤k(X)Σ_≤k ψ_≤k(X)^⊤, we use an extension of Ostrowski's theorem, lem:ostrowski, to obtain the bound: λ_{i+k-min(n,k)}·μ_{min(n,k)}(D_k) ≤ μ_i(𝐊_≤k/n) ≤ λ_i·μ_1(D_k). So combining the two results yields the bounds: λ_{i+k-min(n,k)}·μ_{min(n,k)}(D_k) + μ_n(𝐊_>k/n) ≤ μ_i(𝐊/n) ≤ λ_i·μ_1(D_k) + μ_1(𝐊_>k/n). The "in particular" part now follows from μ_n(𝐊_>k/n) ≥ 0.

Suppose assumption:good_beta holds, and that the eigenvalues of Σ are given in non-increasing order λ_1≥λ_2≥…. Let k∈ℕ and let r_k be as defined in def:effective_rank. There exist some absolute constants c,c'>0 s.t it holds w.p at least 1-(4r_k/k⁴)exp(-(c'/β_k)(n/r_k)) that μ_1(𝐊_>k/n) ≤ c(λ_{k+1} + β_k·log(k+1)·tr(Σ_>k)/n).

Let E_k=μ_1(𝐊_>k/n), let Σ̂_>k := ϕ_>k(X)^⊤ϕ_>k(X)/n, and observe that E_k=‖Σ̂_>k‖. We would ideally like to bound ‖Σ̂_>k‖ using the matrix Chernoff inequality with intrinsic dimension <cit.>[Theorem 7.2.1]. However, as this inequality was proved for finite matrices, if the dimension of the features is p=∞ we first approximate Σ̂_>k, letting ϕ_{k+1:p'}(X):=(ϕ_{k+1}(X),…,ϕ_{p'}(X)) for some p'∈ℕ and Σ̂_{k+1:p'}:=ϕ_{k+1:p'}(X)^⊤ϕ_{k+1:p'}(X)/n; then E_k can be bounded as: E_k = ‖𝐊_{k+1:p'}/n + 𝐊_{>p'}/n‖ ≤ ‖𝐊_{k+1:p'}/n‖ + ‖𝐊_{>p'}/n‖ = ‖Σ̂_{k+1:p'}‖ + E_{p'}. Furthermore, E_{p'} can be bounded as E_{p'} ≤ tr(𝐊_{>p'})/n = (1/n)∑_{j=1}^n ∑_{i>p'} λ_i ψ_i(𝐱_j)² ≤ β_{p'} ∑_{i>p'} λ_i = β_{p'} tr(Σ_{>p'}). So, to summarize, either p is finite, in which case we can take p'=p and E_{p'}=0, or p is infinite, in which case E_{p'} ≤ β_{p'} tr(Σ_{>p'}). However, by assumption:good_beta this implies: ∀u>0, ∃p'∈ℕ s.t E_{p'} ≤ u.

Let Z_j^{p'}=ϕ_{k+1:p'}(𝐱_j)ϕ_{k+1:p'}(𝐱_j)^⊤/n (where Z_j^{p'} ≽ 0), so that we can decompose the empirical covariance as a sum Σ̂_{k+1:p'}=∑_{j=1}^n Z_j^{p'}. We will need a bound on both μ_1(Z_j^{p'}) and μ_1(Σ̂_{k+1:p'}). For the first, we have μ_1(Z_j^{p'}) = (1/n)∑_{i=k+1}^{p'} λ_i ψ_i(𝐱_j)² ≤ (1/n)∑_{i=k+1}^∞ λ_i ψ_i(𝐱_j)² ≤ β_k·tr(Σ_>k)/n =: L, where we denote by L the right-hand side.
For the bound on μ_1(Σ̂_k+1:p'), it holds that Σ̂_k+1:p'=Σ_k+1:p'=(λ_k+1+1,…,λ_p') and thus μ_1(Σ̂_k+1:p')=λ_k+1.We have shown that the conditions of <cit.>[Theorem 7.2.1] are satisfied. As such, for r_k:p':=(Σ_k+1:p')/λ_k+1 and any t≥ 1 + L/λ_k+1=1 + β_kr_k/n,ℙ(Σ̂_k+1:p'≥ tλ_k+1) ≤ 2r_k:p'(e^t-1/t^t)^λ_k+1/L.By (<ref>) it holds that Σ̂_k+1:p'≥ E_k - E_p'. Using this, the fact that λ_k+1/L=n/β_kr_k, and upper bounding e^t-1≤ e^t, r_k: p'≤ r_k yields ℙ(E_k - E_p'≥ tλ_k+1) ≤ℙ(Σ̂_k+1:p'≥ tλ_k+1) ≤ 2r_k (e/t)^tn/β_kr_k. Now we pick t=e^3 + 2β_kr_k/nlog(k+1), (which satisfies the requirement of t≥ 1 + β_kr_k/n). In particular e/t≤1/e^2, and we obtain that: ℙ(E_k ≥ tλ_k+1 + E_p') ≤2r_k (1/e^2)^e^3/β_kn/r_k + 2log(k+1) ≤2 r_k/(k+1)^4exp(-2e^3/β_kn/r_k). Furthermore, E_p' can be bounded via (<ref>)As a result, we obtain that for c'=2e^3, c=e^3, it holds w.p at least 1-4 r_k/k^4exp(-c'/β_kn/r_k) thatE_k ≤ c(λ_k+1 + β_klog(k+1)(Σ_)/n+ E_p').Notice that the bound on E_k depends on p' only via E_p'. So by (<ref>) we are done. Let R_k be as defined in def:effective_rank. For any δ > 0 it holds w.p at least 1- δ that for all 1 ≤ i≤ nα_k1/n(Σ_)(1-1/δ√(n^2/R_k)) ≤μ_i(1/n_) ≤β_k1/n(Σ_)(1+1/δ√(n^2/R_k)). Let Λ_:=(1/n_)∈^n× n be equal to 1/n_ on the diagonal and 0 elsewhere, and Δ_:=1/n_ - Λ_ be the remainder. Λ_ is a diagonal matrix with the i'th value on the diagonal given by [Λ_]_ii = 1/n∑_ℓλ_ℓψ_ℓ(_i)^2. By def:eigen_lower of α_k and def:eigen_upper of β_k it holds thatα_k1/n(Σ_) ≤ [Λ_]_ii≤β_k1/n(Σ_), which together with the fact that Λ_ is diagonal implies α_k1/n(Σ_)I ≼Λ_≼β_k1/n(Σ_)I. As such, by Weyl's theorem <cit.>[Corollary 4.3.15], we can bound the eigenvalues of 1/n_ asα_k1/n(Σ_) + μ_n(Δ_) ≤μ_i(1/n_)≤β_k1/n(Σ_) + μ_1(Δ_).So in order to bound the eigenvalues of 1/nK_, it remains to bound the eigenvalues of Δ_. We first bound the expectation using[Δ_] ≤ [Δ__F^2]^1/2 = √(∑_i,j=1^n [(1/n⟨ϕ_(_i), ϕ_(_j) ⟩)^2]) = √(n(n-1)/n^2(Σ_^2))≤√((Σ_^2)) = 1/n(Σ_)√(n^2/R_k).By Markov's inequality, it holds that ℙ(Δ_≥1/δ[Δ_]) ≤δ.Implying that with probability at least 1-δ it holds thatΔ_≤1/δ[Δ_] ≤1/nδ(Σ_)√(n^2/R_k).Finally, plugging this back into (<ref>) completes the proof.Suppose assumption:good_beta holds, and that the eigenvalues of Σ are given in non-increasing order λ_1≥λ_2 ≥…. Let k∈ and let r_k be as defined in def:effective_rank. There exist absolute constant c,c'>0 s.t it holds w.p at least 1-4 r_k/k^4exp(-c'/β_kn/r_k) thatμ_1(1/n_) ≤ c(λ_k+1 + β_klog(k+1)(Σ_)/n).And for any k'∈ with k'>k, and any δ > 0 it holds w.p at least 1- δ thatα_k(1-1/δ√(n^2/R_k'))(Σ_')/n≤μ_n(1/n_'),so that both statements hold w.p at least1-δ - 4 r_k/k^4exp(-c'/β_kn/r_k).By Weyl's theorem <cit.>[Corollary 4.3.15], for any k'≥ k, μ_n(_≥ k)≥μ_n(_≥ k')+μ_n(_k:k') ≥μ_n(_≥ k'). So the lower bound comes from lem:eff_regularizaion (with k') and the upper bound comes from lem:Er_bound. § UPPER BOUNDS FOR THE RISK - PROOFS OF RESULTS IN SEC:KRR §.§ Proof of thm:bound_gen.The majority of the work was done in lemmas <ref>, <ref>, <ref> and <ref>. Here we essentially combine these results to obtain the desired bounds. Throughout the section, the notations of A_k:=_+n I and A:=+n I as defined in app:notations will be very common. * The majority of the work is given by lemmas <ref> and <ref>. We note a few properties which are immediate, from which the claim will follow:μ_1(1/nA_k)^2/μ_n(1/nA_k)^2= (μ_1(1/n_) + /μ_n(1/n_) + )^2 ≤_k,n^2.Σ_/μ_n(1/nA_k)≤_k,n.1/nμ_n(1/nA_k)^2∑_iλ_i^2=Σ_^2/μ_n(1/nA_k)^2·r_k(Σ^2)/n≤_k,n^2 r_k(Σ^2)/n. 
Furthermore, because the trace of a matrix is the sum of its eigenvalues, we obtainμ_1(1/nA_k)^2 =μ_1(1/nA_k)^2/μ_n(1/nA_k)^2μ_n(1/nA_k)^2 ≤_k,n^2 (1/n(1/nA_k))^2≤ _k,n^2 ( + 1/n^2∑_j=1^n∑_iλ_iψ_i(_j)^2)^2 ≤_k,n^2 ( + β_k(Σ_)/n)^2.and similarly μ_n(1/nA_k)^2 ≥ μ_n(1/nA_k)^2/μ_1(1/nA_k)^2μ_1(1/nA_k)^2 ≥1/_k,n^2(1/n(1/nA_k))^2≥ 1/_k,n^2( + 1/n^2∑_j=1^n∑_iλ_iψ_i(_j)^2)^2 ≥1/_k,n^2( + α_k(Σ_)/n)^2. We thus also obtain an alternative bound for (<ref>) via(<ref>) as1/nμ_n(1/nA_k)^2∑_iλ_i^2≤ _k,n^2 n∑_iλ_i^2 /(n + α_k(Σ_))^2≤_k,n^2/α_k^2n/R_k(Σ). Now for the variance part of the claim, by combining lem:bound_varwith (<ref>), (<ref>) and (<ref>), we obtain that w.p at least 1-δ - 8exp(-c'/β_k^2n/k) it holds thatV ≤C_1_k,n^2σ_ϵ^2 (k/n +min(r_k(Σ^2)/n, n/α_k^2R_k(Σ))). For the bias part of the claim, by similarly combining lem:bound_biaswith (<ref>), (<ref>) and (<ref>), and using the fact that _k,n>1, we obtain that w.p at least 1-δ - 8exp(-c'/β_k^2n/k)B≤C_2 (θ^*__Σ_^2 (1 + 1/δ(_k,n^2 + _k,n)). . + θ_^*_Σ_^-1^2(_k,n^2( + β_k(Σ_)/n)^2(1+_k,n)))≤C_2· 3_k,n^3(1/δθ^*__Σ_^2 + θ_^*_Σ_^-1^2 ( + β_k(Σ_)/n)^2).So everything holds w.p at least 1-δ - 16exp(-c'/β_k^2n/k)There exists some absolute constants c, c', C_1>0, s.t for any k∈ with cβ_kklog(k)≤ n, it holds w.p at least 1-8exp(-c'/β_k^2n/k) the variance can be upper bounded as:V ≤C_1σ_ϵ^2 (μ_1(1/nA_k)^2 k/μ_n(1/nA_k)^2 n +1/nμ_n(1/nA_k)^2∑_iλ_i^2). A_k is positive definite for any >0 and thus, by lemma (<ref>) we have that:V ≤σ_ϵ^2(μ_1(A_k^-1)^2( ψ_(X)ψ_(X)^⊤)/μ_n(A_k^-1)^2 μ_k(ψ_(X)^⊤ψ_(X))^2 +μ_1(A_k^-1)^2(ϕ_(X)Σ_ϕ_(X)^⊤)). Plugging in the bounds from lem:concentration_union, there are some absolute constants c, c', c_1, c_2>0 s.t for any k∈ with cβ_kklog(k)≤ n, it holds w.p at least 1-8exp(-c'/β_k^2n/k) that V ≤ σ_ϵ^2(μ_1(A_k^-1)^2c_2kn/μ_n(A_k^-1)^2 c_1^2n^2 +μ_1(A_k^-1)^2c_2 n ∑_iλ_i^2)≤c_2(1/c_1^2+1)σ_ϵ^2 (μ_1(A_k^-1)^2 k/μ_n(A_k^-1)^2 n +μ_1(A_k^-1)^2 n ∑_iλ_i^2). Now taking C_1 accordingly, and the facts that μ_1(A_k^-1)=1/nμ_n(1/nA_k) and μ_n(A_k^-1)=1/nμ_1(1/nA_k) complete the proof. There exists some absolute constants c, c', C_2>0 (where c and c' are the same as in lem:bound_var),s.t for any k∈ with cβ_kklog(k)≤ n, and δ>0, it holds w.p at least 1-δ - 8exp(-c'/β_k^2n/k) the bias can be upper bounded as:B≤C_2 (θ^*__Σ_^2 (1 + 1/δ(μ_1(A_k^-1)^2/μ_n(A_k^-1)^2 + Σ_/μ_n(1/nA_k))). . + θ_^*_Σ_^-1^2(μ_1(1/nA_k)^2(1+Σ_/μ_n(1/nA_k)))). Similarly, to the variance term, by lemma (<ref>) we have thatθ^* - θ̂( ϕ(X)θ^*)_Σ^2≤θ^*__Σ_^2+ μ_1(A_k^-1)^2/μ_n(A_k^-1)^2μ_1(ψ_(X)^⊤ψ_(X) )/μ_k(ψ_(X)^⊤ψ_(X) )^2ϕ_(X) θ^*_^2+θ_^*_Σ_^-1^2/μ_n(A_k^-1)^2μ_k(ψ_(X)^⊤ψ_(X) )^2 + Σ_μ_1(A_k^-1)ϕ_(X)θ^*_^2 + Σ_μ_1(A_k^-1)/μ_n(A_k^-1)^2μ_1(ψ_(X)^⊤ψ_(X))/μ_k(ψ_(X)^⊤ψ_(X))^2Σ_^-1/2θ^*_^2. Plugging in the bounds from lemmas (<ref>) and (<ref>), there are some absolute constants c, c', c_1, c_2>0 s.t for any k∈ with cβ_kklog(k)≤ n, it holds w.p at least 1-8exp(-c'/β_k^2n/k) that θ^* - θ̂( ϕ(X)θ^*)_Σ^2≤θ^*__Σ_^2 + μ_1(A_k^-1)^2/μ_n(A_k^-1)^2c_2n/c_1^2n^2·1/δnθ_^*_Σ_^2+θ_^*_Σ_^-1^2/μ_n(A_k^-1)^2c_1^2n^2 + Σ_μ_1(A_k^-1)·1/δnθ_^*_Σ_^2 + Σ_μ_1(A_k^-1)/μ_n(A_k^-1)^2c_2n/c_1^2n^2Σ_^-1/2θ^*_^2 ≤ C_2 (θ^*__Σ_^2 (1 + 1/δ(μ_1(A_k^-1)^2/μ_n(A_k^-1)^2 + nΣ_μ_1(A^-1))). . + θ_^*_Σ_^-1^2(1/n^2μ_n(A_k^-1)^2 + Σ_μ_1(A_k^-1)/nμ_n(A_k^-1)^2)),where C_2>0 can be chosen to depend only on c_1 and c_2 (which are absolute constants). 
Now we can use the facts that μ_1(A_k^-1)=1/nμ_n(1/nA_k) and μ_n(A_k^-1)=1/nμ_1(1/nA_k) to complete the proof, since μ_1(A^-1)≤μ_1(A_k^-1)=1/nμ_n(1/nA_k) and 1/n^2μ_n(A_k^-1)^2=μ_1(A_k^-1)^2, and finally μ_1(A_k^-1)/nμ_n(A_k^-1)^2=1/μ_n(A_k^-1).§.§ Lemmas for Risk boundsIn <cit.>[Appendices F,G,H], several inequalities which will be highly useful to us were derived. Unfortunately, they assumed throughout their paper that the features are finite-dimensional, mean zero, and follow some sub-Gaussianity constraint. The proofs from their paper that we need technically do not depend on these constraints. However, for completeness and rigor, we rewrite their proofs here, adjusted where necessary to match our settings. Again, we remind the reader of the notations A_k:=_+n I and A:=+n I as defined in app:notations.For any k∈ it holds thatθ̂( y)_ + ϕ_(X)^⊤ A_k^-1ϕ_(X)θ̂( y)_ = ϕ_(X)^⊤ A_k^-1y.We start with the ridgeless case, where θ̂(y) is the minimum norm interpolating solution. Note that θ̂(y)_ is also the minimum norm solution to the equation ϕ_(X)θ_ = y - ϕ_(X) θ̂(y)_, where θ_ is the variable. Thus, we can writeθ̂(y)_ = ϕ_(X)^⊤(ϕ_(X) ϕ_(X)^⊤)^-1(y - ϕ_(X) θ̂(y)_). As such, we obtain that the min norm interpolator is the minimizer of the following:θ̂(y) = _θ_ v(θ_) := [θ_^⊤, (y - ϕ_(X) θ_)^⊤(ϕ_(X) ϕ_(X)^⊤)^-1ϕ_(X)] As θ_ varies, this vector sweeps an affine subspace of our Hilbert space. The vector θ̂(y)_ gives the minimum norm if and only if for any additional vector η_ we have v(θ̂(y)_) ⊥ v(θ̂(y)_ + η_) - v(θ̂(y)_). Let's write out the second vector: ∀η_∈^kv(θ̂(y)_ + η_) - v(θ̂(y)_) =[η_^⊤,- η_^⊤ϕ_(X)^⊤(ϕ_(X) ϕ_(X)^⊤)^-1ϕ_(X)]We see that the above mentioned orthogonality for any η_ is equivalent to the following:θ̂(y)_^⊤ - (y - ϕ_(X) θ̂(y)_)^⊤(ϕ_(X) ϕ_(X)^⊤)^-1ϕ_(X)= 0,θ̂(y)_ + ϕ_(X)^⊤ A_k^-1ϕ_(X)θ̂(y)_ = ϕ_(X)^⊤ A_k^-1y,where we replaced ϕ_(X) ϕ_(X)^⊤ = :A_k.This completes the ridgeless case, and we now move on to the case of > 0. We have thatθ̂(y)_ = ϕ_(X)^⊤(+n I)^-1y = ϕ_(X)^⊤(A_k + ϕ_(X) ϕ_(X)^⊤)^-1y.Which yieldsθ̂(y)_ + ϕ_(X)^⊤ A_k^-1ϕ_(X)θ̂(y)_= ϕ_(X)^⊤(A_k + ϕ_(X) ϕ_(X)^⊤)^-1y+ ϕ_(X)^⊤ A_k^-1ϕ_(X) ϕ_(X)^⊤(A_k + ϕ_(X) ϕ_(X)^⊤)^-1y= ϕ_(X)^⊤ A_k^-1( A_k + ϕ_(X) ϕ_(X)^⊤)(A_k + ϕ_(X) ϕ_(X)^⊤)^-1y= ϕ_(X)^⊤ A_k^-1y.We now prove a very simple lemma that will help us formalized the intuition that we can split the error into theandcomponentsFor any k∈, and ∈^,_Σ^2 = __Σ_^2 + __Σ_^2We can write =[ _; _; ] and since Σ is diagonal Σ = [ Σ_0;0 Σ_;] and thus:_Σ^2 = [ _ _; ][ Σ_0;0 Σ_;][ _; _; ]=__Σ_^2 + __Σ_^2.The next lemma provides a useful upper bound for the variance. If for some k ∈ the matrix A_k is PD, thenV ≤σ_ϵ^2(μ_1(A_k^-1)^2( ψ_(X) ψ_(X)^⊤)/μ_n(A_k^-1)^2 μ_k(ψ_(X)^⊤ψ_(X) )^2 +μ_1(A_k^-1)^2(ϕ_(X)Σ_ϕ_(X)^⊤)).Recall that V = _ϵ[θ̂()^2_Σ] = _ϵ[ϕ(X)^⊤ (+n I)^-1ϵ^2_Σ]By Lemma (<ref>) we can split the variance into θ̂( _)^2_Σ and θ̂( _)^2_Σ and bound these separately.Lemma (<ref>) states thatϕ_(X)^⊤ A_k^-1ϵ = θ̂( _) + ϕ_(X)^⊤ A_k^-1ϕ_(X) θ̂( _).Multiplying the identity by θ̂( _)^⊤ from the left, and using that θ̂( _)^⊤θ̂( _) ≥ 0 we getθ̂( _)^⊤ϕ_(X)^⊤ A_k^-1ϵ≥θ̂( _)^⊤ϕ_(X)^⊤ A_k^-1ϕ_(X) θ̂( _). The leftmost expression is linear in θ̂( _), and the rightmost is quadratic. We use these expressions to boundθ̂( _)_Σ_.First, we extract that norm from the quadratic partθ̂( _)^⊤ϕ_(X)^⊤ A_k^-1ϕ_(X) θ̂( _) ≥ μ_n(A_k^-1)θ̂( _)^⊤ϕ_(X)^⊤ϕ_(X) θ̂( _)≥ μ_n(A_k^-1) θ̂(_)_Σ_^2μ_k(ψ_(X)^⊤ψ_(X) ). 
Then we can substitute (<ref>) and apply Cauchy-Schwarz to obtainθ̂( _)_Σ_^2μ_n(A_k^-1)μ_k(ψ_(X)^⊤ψ_(X) )≤θ̂( _)^⊤ϕ_(X)^⊤ A_k^-1ϕ_(X) θ̂( _) ≤θ̂( _)^⊤ϕ_(X)^⊤ A_k^-1ϵ≤θ̂( _)_Σ_ψ_(X)^⊤ A_k^-1ϵ,and soθ̂( _)_Σ_^2 ≤ϵ^⊤ A_k^-1ψ_(X)ψ_(X)^⊤ A_k^-1ϵ/μ_n(A_k^-1)^2 μ_k(ψ_(X)^⊤ψ_(X) )^2.Since ϵ is independent of X, taking expectation in ϵ only leaves the trace in the numerator:_ϵθ̂( _)_Σ_^2≤ σ_ϵ^2( A_k^-1ψ_(X)ψ_(X)^⊤ A_k^-1)/μ_n(A_k^-1)^2 μ_k(ψ_(X)^⊤ψ_(X) )^2 ≤ σ_ϵ^2 μ_1(A_k^-1)^2( ψ_(X)ψ_(X)^⊤)/μ_n(A_k^-1)^2 μ_k(ψ_(X)^⊤ψ_(X) )^2,where we transitioned tothe second line by using the fact that (MM'M)≤μ_1(M)^2(M') for PD matrices M, M'.This completes the bound for the firstcomponents, and we now move on to theones. The rest of the variance term is Σ_^1/2ϕ_(X)^⊤ A^-1ϵ^2 = ϵ^⊤ A^-1ϕ_(X)Σ_ϕ_(X)^⊤ A^-1ϵ.Since ϵ is independent of X, taking expectation in ϵ only leaves the trace of the matrix:1/σ_ϵ^2_ϵΣ_^1/2ϕ_(X)^⊤ A^-1ϵ^2 = ( A^-1ϕ_(X)Σ_ϕ_(X)^⊤ A^-1) ≤ μ_1(A^-1)^2(ϕ_(X)Σ_ϕ_(X)^⊤) ≤ μ_1(A_k^-1)^2(ϕ_(X)Σ_ϕ_(X)^⊤).Here we again used the fact that (MM'M)≤μ_1(M)^2(M') for PD matrices M, M' to transition to the second line. We then used A ≽ A_k to infer μ_1(A^-1) ≤μ_1(A_k^-1).We now move on to bounding the bias term. Suppose that for some k < n the matrix A_k is PD. Then, θ^* - θ̂( ϕ(X)θ^*)_Σ^2≤θ^*__Σ_^2+ μ_1(A_k^-1)^2/μ_n(A_k^-1)^2μ_1(ψ_(X)^⊤ψ_(X) )/μ_k(ψ_(X)^⊤ψ_(X) )^2ϕ_(X) θ^*_^2+θ_^*_Σ_^-1^2/μ_n(A_k^-1)^2μ_k(ψ_(X)^⊤ψ_(X) )^2 + Σ_μ_1(A_k^-1)ϕ_(X)θ^*_^2 + Σ_μ_1(A_k^-1)/μ_n(A_k^-1)^2μ_1(ψ_(X)^⊤ψ_(X))/μ_k(ψ_(X)^⊤ψ_(X))^2Σ_^-1/2θ^*_^2.As before, by Lemma (<ref>) we can bound thecomponents and thecomponents separately.We start by bounding θ^*_- θ̂(y)_(ϕ(X)θ^*)_Σ_^2. By Lemma (<ref>), we haveθ̂( ϕ(X)θ^*)_ + ϕ_(X)^⊤ A_k^-1ϕ_(X)θ̂( ϕ(X)θ^*)_ = ϕ_(X)^⊤ A_k^-1ϕ(X)θ^*.Denote the error vector as ζ:= θ̂( ϕ(X)θ^*) - θ^*. We can rewrite the equation above asζ_ + ϕ_(X)^⊤ A_k^-1ϕ_(X)ζ_ = ϕ_(X)^⊤ A_k^-1ϕ_(X) θ^*_ - θ^*_.Multiplying both sides by ζ_^⊤ from the left and using that ζ_^⊤ζ_ = ζ_^2 ≥ 0 we obtainζ_^⊤ϕ_(X)^⊤ A_k^-1ϕ_(X)ζ_≤ζ_^⊤ϕ_(X)^⊤ A_k^-1ϕ_(X) θ^*_ - ζ_^⊤θ^*_. Next, divide and multiply by Σ_^1/2 in several places: ζ_^⊤Σ_^1/2ψ_(X)^⊤ A_k^-1ψ_(X) Σ^1/2_ζ_≤ ζ_^⊤Σ_^1/2ψ_(X)^⊤ A_k^-1ϕ_(X) θ^*_ - ζ_^⊤Σ_^1/2Σ^-1/2_θ^*_.Now we pull out the lowest singular values of the matrices in the LHS and largest singular values of the matrices in the RHS to obtain lower and upper bounds respectively, yieldingζ__Σ_^2μ_n(A_k^-1)μ_k(ψ_(X)^⊤ψ_(X) ) ≤ζ__Σ_μ_1(A_k^-1)√(μ_1(ψ_(X)^⊤ψ_(X) ))ϕ_(X) θ^*_ + ζ__Σ_θ_^*_Σ_^-1,and soζ__Σ_≤ μ_1(A_k^-1)/μ_n(A_k^-1)μ_1(ψ_(X)^⊤ψ_(X) )^1/2/μ_k(ψ_(X)^⊤ψ_(X) )ϕ_(X) θ^*_+ θ_^*_Σ_^-1/μ_n(A_k^-1)μ_k(ψ_(X)^⊤ψ_(X) ). This completes the bounds for thecomponents and we now move on to theones. The contribution of the components of ζ, starting from the k+1st can be bounded as follows:θ^*_ - ϕ_(X)^⊤ A^-1ϕ(X)θ^*^2_Σ_ ≤ 3(θ^*__Σ_^2 + ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_^2_Σ_ + ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_^2_Σ_). First of all, let's deal with the second term:ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_^2_Σ_ = Σ_^1/2ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_^2 ≤ Σ_ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_^2= Σ_(θ^*_)^⊤ϕ_(X)^⊤ A^-1=A - n I - ϕ_(X)ϕ_(X)^⊤≼ Aϕ_(X) ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_ ≤ Σ_(θ^*_)^⊤ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_ ≤ Σ_μ_1(A_k^-1) ϕ_(X)θ^*_^2,where we used that μ_1(A_k^-1) ≥μ_1(A^-1) in the last transition.Now, let's deal with the last term. Note that A = A_k + ϕ_(X) ϕ_(X)^⊤. 
By the Sherman–Morrison–Woodbury formula,A^-1ϕ_(X) =(A_k^-1 + ϕ_(X) ϕ_(X)^⊤)^-1ϕ_(X)= (A_k^-1 - A_k^-1ϕ_(X)(I_k + ϕ_(X)^⊤ A_k^-1ϕ_(X))^-1ϕ_(X)^⊤ A_k^-1)ϕ_(X)= A_k^-1ϕ_(X)(I_n -(I_k + ϕ_(X)^⊤ A_k^-1ϕ_(X))^-1ϕ_(X)^⊤ A_k^-1ϕ_(X))= A_k^-1ϕ_(X)(I_n -(I_k + ϕ_(X)^⊤ A_k^-1ϕ_(X))^-1(I_k + ϕ_(X)^⊤ A_k^-1ϕ_(X) - I_k))=A_k^-1ϕ_(X)(I_k + ϕ_(X)^⊤ A_k^-1ϕ_(X))^-1.Thus, ϕ_(X)^⊤ A^-1ϕ_(X)θ^*_^2_Σ_= ϕ_(X)^⊤ A_k^-1ϕ_(X)(I_k + ϕ_(X)^⊤ A_k^-1ϕ_(X))^-1θ^*_^2_Σ_= Σ_^1/2ϕ_(X)^⊤ A_k^-1ψ_(X)(Σ_^-1 + ψ_(X)^⊤ A_k^-1ψ_(X))^-1Σ_^-1/2θ^*_^2 ≤ A_k^-1/2ϕ_(X) Σ_ϕ_(X)^⊤ A_k^-1/2μ_1(A_k^-1/2)^2 μ_1(ψ_(X)^⊤ψ_(X))/μ_k(ψ_(X)^⊤ A_k^-1ψ_(X))^2Σ_^-1/2θ^*_^2 ≤ Σ_A_k^-1/2ϕ_(X) ϕ_(X)^⊤ A_k^-1/2μ_1(A_k^-1)/μ_n(A_k^-1)^2μ_1(ψ_(X)^⊤ψ_(X))/μ_k(ψ_(X)^⊤ψ_(X))^2Σ_^-1/2θ^*_^2= Σ_I_n - n A_k^-1μ_1(A_k^-1)/μ_n(A_k^-1)^2μ_1(ψ_(X)^⊤ψ_(X))/μ_k(ψ_(X)^⊤ψ_(X))^2Σ_^-1/2θ^*_^2 ≤ Σ_μ_1(A_k^-1)/μ_n(A_k^-1)^2μ_1(ψ_(X)^⊤ψ_(X))/μ_k(ψ_(X)^⊤ψ_(X))^2Σ_^-1/2θ^*_^2,where in the last transition we used the fact that I_n - n A_k^-1 is a PSD matrix with norm bounded by 1 for > 0.Putting those bounds together yields the result.§ APPLICATIONS - PROOFS OF RESULTS IN SEC:APPLICATIONS §.§ Regularized Case (thm:fixed_dimensional)* We use thm:bound_gen, which states that there exist some absolute constants c,c'>0 s.t for any k∈ with cβ_kklog(k)≤ n and any δ>0, (<ref>) and (<ref>) hold w.p at least 1 - δ - 16exp(-c'/β_k^2n/k).In order to use the theorem, for any n we first have to pick some k∈ s.t cβ_kklog(k)≤ n. As such, let k:=k(n):= ⌈ n^1+b/1+a⌉. The condition b∈ (-1,a) implies that 1+b/1+a < 1, and thus k(n) = o_n(n/log(n)), meaning that for sufficiently large n, thm:bound_gen can be used with this chosen k. Since k is a function of n, the _n notation in particular, implies constants w.r.t k.We now proceed to bounding _k,n (as defined in thm:bound_gen). By lem:mu1_bound_poly it holds w.p at least 1- _n(1/k^3)exp(-Ω_n(n/k)) thatμ_1(1/n_) = _n(λ_k+1) = _n((n^1+b/1+a)^-(1+a)) = _n(n^-1-b) = _n(). We can bound the event that both thm:bound_gen hold and (<ref>) hold as1 - δ - 16exp(-c'/β_k^2n/k) - _n(1/k^3)exp(-Ω_n(n/k)) = 1 - δ - O_n(1/n),Where we used the facts that c'/β_k^2n/k = ω_n(log(n)). From now on, we assume that both thm:bound_gen and (<ref>) indeed hold.Plugging (<ref>) into the definition of the concentration coefficient (<ref>) and using μ_n(1/n_)≥ 0, we obtain the bound_k,n = _n(λ_k+1 + /) =_n(/) = _n(1). By lem:int_dim, it holds that r_k(Σ), r_k(Σ^2) = Θ_n(k). So plugging this and (<ref>) into thm:bound_gen yields, V/σ_ϵ^2 =_n(k/n + r_k(Σ^2)/n) =(k/n) =_n(n^1+b/1+a/n) = _n(n^b-a/1+a),and B =1/δ_n(:=T_1θ^*__Σ_^2) + _n(:=T_2θ_^*_Σ_^-1^2:=T_3( + (Σ_)/n)^2).Following lem:int_dim it holds that (Σ_)=_n(k·λ_k) = _n(k·) and so T_3=_n(( + k/n)^2) = _n(^2) = _n(1/n^2+2b).Combining this bound for T_3 with the bounds for T_1,T_2 from lem:bound_bias_poly yieldsB ≤ _n(1/k^2r+a + 1/k^2r-2-a n^2(1+b)) 2r < 2 + a_n(1/k^2(1+a) + log(k)/n^2(1+b)) 2r = 2 + a_n(1/k^2r+a + 1/n^2(1+b)) 2r > 2 + a≤ _n(1/n^(2r+a)(1+a)/1+b) 2r < 2 + a_n(log(n)/n^2(1+b)) 2r = 2 + a_n(1/n^2(1+b)) 2r > 2 + a . §.§ Fixed Dimensional Interpolation Case (thm:min_norm_poly)* We use thm:bound_gen, which states that there exist some absolute constants c,c'>0 s.t for any k∈ with cβ_kklog(k)≤ n and any δ>0, (<ref>) and (<ref>) hold w.p at least 1 - δ - 16exp(-c'/β_k^2n/k).In order to use the theorem, for any n we first have to pick some k∈ s.t cβ_kklog(k)≤ n. Using the fact that β_k≤ C_0 for some C_0>0, let k:=k(n):= n/max(cC_0, 1)log(n) and we also let k':=k'(n)=n^2log^4(n). 
The probability that thm:bound_gen holds with k(n) now becomes 1-δ-O_n(1/n). Since k is a function of n, the O_n notation in particular hides constants w.r.t k.

In order to bound (<ref>) and (<ref>), we begin by bounding ρ_{k,n}, which requires bounding μ_1(𝐊_>k/n) and μ_n(𝐊_>k/n). First note that by <cit.>[Lemma 5] R_k ≥ r_k, and thus by lem:int_dim it holds that R_k' = Ω_n(n²log⁴(n)) and tr(Σ_>k') = Ω_n((n²log⁴(n))^{-a}). By cor:ak_eigen_bound, it holds w.p at least 1-1/log(n) that

μ_n(𝐊_>k/n) ≥ α_k(1-log(n)√(n²/R_k'))·tr(Σ_>k')/n = Ω_n((1-log(n)√(1/log⁴(n)))·tr(Σ_>k')/n) = Ω_n((n²log⁴(n))^{-a}/n) = Ω_n(n^{-1-2a}log^{-4a}(n)).

For μ_1(𝐊_>k/n), by lem:mu1_bound_poly it holds w.p at least 1-O_n(1/k³)exp(-Ω_n(n/k)) that

μ_1(𝐊_>k/n) = O_n(λ_{k+1}) = O_n(n^{-1-a}log^{1+a}(n)).

So thm:bound_gen, (<ref>) and (<ref>) all hold simultaneously with probability 1-δ-O_n(1/log(n)), and from now on we assume that this is indeed the case. By combining (<ref>) and (<ref>) we obtain the bound

ρ_{k,n} = O_n(n^{-1-a}log^{1+a}(n) / (n^{-1-2a}log^{-4a}(n))) = Õ_n(n^a).

And thus, by combining (<ref>), (<ref>) and the fact that from lem:int_dim r_k(Σ²)≲k, we obtain the bound V/σ_ϵ² = Õ_n(n^{2a}·k/n) = Õ_n(n^{2a}) and B = (1/δ)·Õ_n(n^{3a}(T_1 + T_2·T_3)), where T_1:=‖θ^*_>k‖²_{Σ_>k}, T_2:=‖θ^*_≤k‖²_{Σ_≤k^{-1}} and T_3:=(tr(Σ_>k)/n)². Following lem:int_dim, it holds that tr(Σ_>k)=Õ_n(k·λ_k) = Õ_n(1/n^a), and so T_3=Õ_n(1/n^{2+2a}). Combining this bound for T_3 with the bounds for T_1, T_2 from lem:bound_bias_poly yields

T_1 + T_2·T_3 ≤ Õ_n(1/n^{2r+a} + n^{2+a-2r}/n^{2(1+a)}) if 2r ≤ 2+a, and T_1 + T_2·T_3 ≤ Õ_n(1/n^{2r+a} + 1/n^{2(1+a)}) if 2r > 2+a; in both cases T_1 + T_2·T_3 = Õ_n(1/n^{min(2r+a, 2(1+a))}).

Implying that

B ≤ (1/δ)·Õ_n(n^{3a}·1/n^{min(2r+a, 2(1+a))}) = (1/δ)·Õ_n(1/n^{min(2(r-a), 2-a)}).
As a result, we obtain that for _k,n as defined in thm:bound_gen,

_k(d),n = (‖Σ_{>k(d)}‖ + μ_1((1/n)ϕ_{>k(d)}(X)ϕ_{>k(d)}(X)^⊤) + λ) / (μ_n((1/n)ϕ_{>k(d)}(X)ϕ_{>k(d)}(X)^⊤) + λ) = Õ_{n,d}((n/r_{k(d)} + 1)·(1/n)tr(Σ_{>k(d)}) / ((1/n)tr(Σ_{>k(d)}))) ≤ Õ_{n,d}(1).

Combining this with thm:bound_gen, it holds that for every δ > 0, w.p. at least 1 - δ - 16exp(-(c'/β_{k(d)}²)(n/k(d))), both the variance and the bias can be upper bounded as

V ≤ σ_ϵ²·Õ_{n,d}(k(d)/n + n/R_{k(d)}(Σ)) ≤ σ_ϵ²·Õ_{n,d}(1/d^{τ-⌊τ⌋} + 1/d^{⌊τ⌋+1-τ}),

B ≤ (1/δ)Õ_{n,d}(‖θ^*_{>k(d)}‖²_{Σ_{>k(d)}}) + Õ_{n,d}(‖θ^*_{≤k(d)}‖²_{Σ_{≤k(d)}^{-1}}·(tr(Σ_{>k(d)})/n)²).

Using the fact that (c'/β_{k(d)}²)(n/k(d)) = ω_d(log(d)), the probability becomes 1 - δ - o_d(1/d).

Now, in order to further bound the bias, we first note that by the addition theorem (<ref>) it holds that

tr(Σ) = ∑_{ℓ=0}^∞ σ_ℓ N(d,ℓ) = h(1) = Θ_{n,d}(1).

As in the statement of the lemma, let N_d := k(d). Because for i∈ℕ, N(d,i) = Θ_d(d^i) and, by assumption, σ̂_{⌊τ⌋} ≠ 0, it holds that k(d) = Õ_{n,d}(d^{⌊τ⌋}). Combining this with (<ref>) and the fact that for all i ≤ k(d), λ_i ≥ min_{ℓ≤⌊τ⌋, σ̂_ℓ≠0}σ̂_ℓ·Ω_{n,d}(1/d^{⌊τ⌋}), the right hand side of (<ref>) can be bounded as

‖θ^*_{≤k(d)}‖²_{Σ_{≤k(d)}^{-1}}·(tr(Σ_{>k(d)})/n)² = ∑_{i≤k(d)}((θ_i^*)²/λ_i)·(tr(Σ_{>k(d)})/n)² ≤ (k(d)/min_{i≤k(d)}λ_i)·‖θ^*_{≤N_d}‖_∞²·(tr(Σ)/n)² ≤ ‖θ^*_{≤N_d}‖_∞²·(1/min_{ℓ≤⌊τ⌋, σ̂_ℓ≠0}σ̂_ℓ)·Õ_{n,d}(1/d^{2(τ-⌊τ⌋)}).

The left hand side of (<ref>) can be bounded as

(1/δ)‖θ^*_{>k(d)}‖²_{Σ_{>k(d)}} ≤ (1/δ)‖θ^*_{>k(d)}‖_∞²·tr(Σ_{>k(d)}) = (1/δ)O_{n,d}(‖θ^*_{>N_d}‖_∞²).

So (<ref>) becomes

B ≤ (1/δ)O_{n,d}(‖θ^*_{>N_d}‖_∞²) + ‖θ^*_{≤N_d}‖_∞²·max_{ℓ≤⌊τ⌋, σ̂_ℓ≠0}(1/σ̂_ℓ)·Õ_{n,d}(1/d^{2(τ-⌊τ⌋)}).

§.§ Lemmas for Applications

lemmaintdim For any a > 0:
* If c_1·1/(i log^{1+a}(i)) ≤ λ_i ≤ c_2·1/(i log^{1+a}(i)), then (c_1/c_2)(1/a)(k+1)log(k+1) ≤ r_k ≤ 1 + (c_2/c_1)(1/a)(k+1)log(k+1).
* If c_1·1/i^{1+a} ≤ λ_i ≤ c_2·1/i^{1+a}, then (c_1/c_2)(1/a)(k+1) ≤ r_k ≤ 1 + (c_2/c_1)(1/a)(k+1).
* If c_1·1/e^{ai} ≤ λ_i ≤ c_2·1/e^{ai}, then (c_1/c_2)(1/a) ≤ r_k ≤ 1 + (c_2/c_1)(1/a).

The famous integral test for convergence states that for a monotonically decreasing function f, it holds for any k∈ℕ that

∫_{k+1}^∞ f(x)dx ≤ ∑_{i=k+1}^∞ f(i) ≤ f(k+1) + ∫_{k+1}^∞ f(x)dx.

We now split into separate cases of eigenvalue decay.

* If c_1·1/(i log^{1+a}(i)) ≤ λ_i ≤ c_2·1/(i log^{1+a}(i)), then using the fact that ∫_{k+1}^∞ 1/(x log^{1+a}(x))dx = 1/(a log^a(k+1)), we obtain

r_k ≤ 1 + (1/(c_1 λ_{k+1}))∫_{k+1}^∞ c_2·1/(x log^{1+a}(x))dx ≤ 1 + (c_2/c_1)(1/a)(k+1)log(k+1),

and

r_k ≥ (1/(c_2 λ_{k+1}))∫_{k+1}^∞ c_1·1/(x log^{1+a}(x))dx ≥ (c_1/c_2)(1/a)(k+1)log(k+1).

* If c_1·1/i^{1+a} ≤ λ_i ≤ c_2·1/i^{1+a}, then using the fact that ∫_{k+1}^∞ 1/x^{1+a}dx = 1/(a(k+1)^a), we obtain that

r_k ≤ 1 + (1/(c_1 λ_{k+1}))∫_{k+1}^∞ c_2·1/x^{1+a}dx ≤ 1 + (c_2/c_1)(1/a)(k+1),

and

r_k ≥ (1/(c_2 λ_{k+1}))∫_{k+1}^∞ c_1·1/x^{1+a}dx ≥ (c_1/c_2)(1/a)(k+1).

* If c_1·1/e^{ai} ≤ λ_i ≤ c_2·1/e^{ai}, then using the fact that ∫_{k+1}^∞ exp(-ax)dx = 1/(a e^{a(k+1)}), we obtain that

r_k ≤ 1 + (1/(c_1 λ_{k+1}))∫_{k+1}^∞ c_2·exp(-ax)dx ≤ 1 + (c_2/c_1)(1/a),

and

r_k ≥ (1/(c_2 λ_{k+1}))∫_{k+1}^∞ c_1·exp(-ax)dx ≥ (c_1/c_2)(1/a).
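The polynomial-decay case of this lemma is easy to sanity-check numerically. The following NumPy snippet (our own illustration; the infinite tail is truncated at two million terms, so the ratios are only approximate) compares r_k against (k+1)/a for λ_i = i^{-1-a}:

```python
import numpy as np

a = 0.5
i = np.arange(1, 2_000_001, dtype=np.float64)
lams = i ** -(1.0 + a)                 # lambda_i = i^{-1-a}, so c_1 = c_2 = 1

def r_k(k):
    """Effective rank r_k = (sum_{i>k} lambda_i) / lambda_{k+1} (1-indexed)."""
    return lams[k:].sum() / lams[k]

for k in (10, 100, 1000):
    print(k, r_k(k) / ((k + 1) / a))   # ratios stay near 1, as the lemma predicts
```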
Let K be a kernel with polynomially decaying eigenvalues λ_i = Θ_{i,n}(i^{-1-a}) for some a > 0. Furthermore, suppose that β_k k log(k)/n = O_{k,n}(1) and that β_k = O_k(1). Then it holds w.p. at least 1 - Õ_{k,n}(1/k³)exp(-Ω_{k,n}(n/k)) that

μ_1((1/n)ϕ_{>k}(X)ϕ_{>k}(X)^⊤) = Õ_{k,n}(λ_{k+1}).

By lem:int_dim, it holds that r_k(Σ), r_k(Σ²) = Θ_{k,n}(k). Now, using cor:ak_eigen_bound (note that assumption:good_beta holds since β_k = O_k(1)), there exist absolute constants c, c' > 0 s.t. it holds w.p. at least 1 - 4(r_k/k⁴)exp(-(c'/β_k)(n/r_k)) that

μ_1((1/n)ϕ_{>k}(X)ϕ_{>k}(X)^⊤) ≤ c(λ_{k+1} + β_k log(k+1)·tr(Σ_{>k})/n) = Õ_{k,n}(λ_{k+1}(1 + β_k log(k+1)·r_k/n)) = Õ_{k,n}(λ_{k+1}(1 + β_k k log(k)/n)) = Õ_{k,n}(λ_{k+1}).

Now, to bound the probability with which this holds, we use the fact that r_k = Θ_{k,n}(k) together with the fact that exp(-(c'/β_k)(n/r_k)) < 1 to get that the claim holds w.p. at least 1 - 4(r_k/k⁴)exp(-(c'/β_k)(n/r_k)) = 1 - Õ_{k,n}(1/k³)exp(-Ω_{k,n}(n/k)).

Let a∈ℝ and 1 < k∈ℕ; then

∑_{i=1}^{k} i^{-a} ≤ 1 + k^{1-a} if a < 1; 1 + log(k) if a = 1; 1 + 1/(a-1) if a > 1.

If a < 0, then bounding the mean with the maximum yields ∑_{i=1}^{k} i^{-a} ≤ k·k^{-a} = k^{1-a}. Next, if a ≠ 1, bounding the sum with the integral yields

∑_{i=1}^{k} i^{-a} ≤ 1 + ∫_1^k (1/x^a)dx = 1 + 1/(a-1) - k^{1-a}/(a-1).

So if a < 1, we obtain a 1 + k^{1-a} bound, and if a > 1, a 1 + 1/(a-1) bound. Lastly, if a = 1, then we can similarly bound

∑_{i=1}^{k} i^{-1} ≤ 1 + ∫_1^k (1/x)dx = 1 + log(k).

Let 1 < k∈ℕ and suppose that λ_i = Θ_{i,n}(1/i^{1+a}) for some a > 0, and θ_i^* = Θ_{i,n}(i^{-r}) for some r∈ℝ s.t. f^*∈L²_μ. It holds that

‖θ^*_{>k}‖²_{Σ_{>k}} ≤ Õ_{k,n}(1/k^{2r+a}),

and

‖θ^*_{≤k}‖²_{Σ_{≤k}^{-1}} ≤ Õ_{k,n}(k^{-2r+2+a}) if 2r < 2+a; Õ_{k,n}(log(k)) if 2r = 2+a; Õ_{k,n}(1) if 2r > 2+a.

The condition that f^*∈L²_μ implies ∑_{i=1}^∞ (θ_i^*)²λ_i = ‖⟨θ^*, ϕ(·)⟩‖²_{L²_μ} < ∞. The > k part can be bounded using lem:int_dim as

‖θ^*_{>k}‖²_{Σ_{>k}} = ∑_{i>k}(θ_i^*)²λ_i = Õ_{k,n}(∑_{i>k} i^{-2r-1-a}) ≤ Õ_{k,n}(1/k^{2r+a}).

The ≤ k part can be bounded using lem:sum_leqk (with exponent 2r-1-a) as

‖θ^*_{≤k}‖²_{Σ_{≤k}^{-1}} = ∑_{i≤k}(θ_i^*)²/λ_i = Õ_{k,n}(∑_{i≤k} i^{-2r+1+a}) ≤ Õ_{k,n}(k^{-2r+2+a}) if 2r < 2+a; Õ_{k,n}(log(k)) if 2r = 2+a; Õ_{k,n}(1) if 2r > 2+a.
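The three regimes of lem:sum_leqk can be illustrated numerically. The sketch below (our own; it checks the growth rates only, since the proof absorbs 1/(1-a)-type constants) evaluates the partial sums for one exponent in each regime:

```python
import numpy as np

def partial_sum(a, k):
    i = np.arange(1, k + 1, dtype=np.float64)
    return (i ** -a).sum()

for k in (10, 1_000, 100_000):
    print(k,
          partial_sum(0.5, k) / k ** 0.5,   # a < 1: grows like k^{1-a}
          partial_sum(1.0, k) / np.log(k),  # a = 1: grows like log(k)
          partial_sum(2.0, k))              # a > 1: converges (to pi^2/6)
```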
§ LACK OF SUB-GAUSSIANITY

Suppose our inputs are one-dimensional standard Gaussians x ∼ 𝒩(0, σ²), and let K(x,y) = exp(-γ(x-y)²) be the Gaussian (RBF) kernel. Such kernels have known Mercer decompositions <cit.>, and if we pick for simplicity σ = 1 and γ = 3/8 (meaning that, in their notation, α = 1/√2 and ϵ = √(3/8)), we obtain that ψ(x) = (ψ_i(x))_{i=0}^∞ is given by

ψ_i(x) = (2^{1/4}/√(2^i i!))e^{-x²/4}H_i(x),

where H_i(x) = (-1)^i e^{x²}(d^i/dx^i)e^{-x²} is the i-th order (physicist's) Hermite polynomial. Note that in this chapter, for ease of notation, we start counting at i = 0.

Recall that a vector Y is said to be sub-Gaussian if sup_{u:‖u‖=1} sup_{p≥1} (1/√p)(𝔼[⟨u,Y⟩^p])^{1/p} < ∞. In particular, taking Y = ψ and u = e_i, we get that

𝔼[⟨u,Y⟩^p] = (1/√(2π))∫_{-∞}^∞ ψ_i(x)^p e^{-x²/2}dx = (2^{p/4-1/2}/(√π (2^i i!)^{p/2}))∫_{-∞}^∞ H_i(x)^p exp(-(p/4+1/2)x²)dx.

Thus, if for a fixed p the value of (<ref>) diverges to infinity with i, it would imply that ψ is not sub-Gaussian. We will thus aim to lower bound this term. To do so, we begin by bounding the Hermite polynomials using <cit.>[Theorem 8.22.9], which states that for any δ > 0 and any x = √(2i+1)cos(ϕ) with δ ≤ ϕ ≤ π - δ, we have the uniform approximation

e^{-x²/2}H_i(x) = 2^{i/2+1/4}√(i!)(πi)^{-1/4}·A·(B + O(i^{-1})), where A := sin(ϕ)^{-1/2} and B := sin(3π/4 + ((2i+1)/4)(sin(2ϕ) - 2ϕ)).

We now wish to bound B. Since sin(ϕ) ≥ 0.5 for ϕ∈[π/6, 5π/6], we can lower bound B by 0.5 whenever 3π/4 + ((2i+1)/4)(sin(2ϕ) - 2ϕ) ∈ [π/6, 5π/6]. This is equivalent to

-π/(6(2i+1)) ≤ ϕ - sin(2ϕ)/2 ≤ 7π/(6(2i+1)).

Since ϕ ≥ 0, we have (via the Taylor expansion of sin) that ϕ - ϕ³/6 ≤ sin(ϕ) ≤ ϕ (meaning -ϕ ≤ -sin(2ϕ)/2 ≤ -ϕ + 8ϕ³/6), and so the lower bound holds trivially, and the upper bound holds when ϕ ≤ (7π/(8(2i+1)))^{1/3}. We can also lower bound A trivially by 1. Furthermore, for sufficiently large i, the O(i^{-1}) term is at least -1/4. So overall we obtain that for ϕ∈[δ, (7π/(8(2i+1)))^{1/3}] and x = √(2i+1)cos(ϕ), A(B + O(i^{-1})) ≥ 1/4, and (<ref>) can be lower bounded as

H_i(x) ≥ (1/4)·2^{i/2+1/4}√(i!)(πi)^{-1/4}e^{x²/2} = (1/4)(2/π)^{1/4}·2^{i/2}√(i!)·i^{-1/4}e^{x²/2}.

So for any p∈ℕ, we can lower bound the p-th power of H_i as

H_i(x)^p ≥ (1/4^p)(2/π)^{p/4}(2^i i!)^{p/2} i^{-p/4} e^{px²/2}.

Denoting a_i = √(2i+1)cos((7π/(8(2i+1)))^{1/3}) and b_i = √(2i+1)cos(δ), we can bound our expected value in (<ref>) by

𝔼[⟨u,Y⟩^p] = (2^{p/4-1/2}/(√π (2^i i!)^{p/2}))∫_{-∞}^∞ H_i(x)^p exp(-(p/4+1/2)x²)dx
≥ (2/π)^{p/4-1/2}·(2^{p/4}/4^p)·i^{-p/4}∫_{a_i}^{b_i} exp((p/4-1/2)x²)dx
≥ Ω_i(i^{-3/2}∫_{a_i}^{b_i} exp(((p-2)/4)x²)dx)
≥ Ω_i(i^{-3/2}(b_i - a_i)exp(((p-2)/4)a_i²)).

By continuity in δ we can take b_i = √(2i+1)cos(0) = √(2i+1), and by using the inequality (via the cos Maclaurin expansion) cos(t) ≤ 1 - t²/2 + o(t²), we get

b_i - a_i = √(2i+1)(1 - cos((7π/(8(2i+1)))^{1/3})) ≥ √(2i+1)·((1/2)(7π/(8(2i+1)))^{2/3} - o((7π/(8(2i+1)))^{2/3})) = Ω_i(√i·i^{-2/3}) = Ω_i(i^{-1/6}).

Finally, since for sufficiently large i, a_i² > (3/2)i (as the cos part of a_i tends to 1), for any p ≥ 3 we obtain

𝔼[⟨u,Y⟩^p] = Ω_i(i^{-3/2}·i^{-1/6}·exp(((p-2)/4)·(3/2)i)) = Ω_i(exp(((p-2)/4)·i)) → ∞ as i → ∞.

This implies that ψ is not sub-Gaussian.
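The divergence of these moments is easy to observe numerically. Below is a small self-contained NumPy check (our own; the dense grid is a stand-in for the Gaussian expectation) that evaluates 𝔼[ψ_i(x)^p] for p = 2 (which stays at 1, confirming the normalization) and p = 4 (which blows up with i, matching the proof):

```python
import numpy as np

def psi_features(x, i_max):
    """psi_i(x) = 2**0.25 / sqrt(2^i i!) * exp(-x^2/4) * H_i(x), via the
    normalized recurrence h_{i+1} = sqrt(2/(i+1))*x*h_i - sqrt(i/(i+1))*h_{i-1}
    for h_i = H_i / sqrt(2^i i!), which avoids overflow in H_i and i!."""
    h = [np.ones_like(x), np.sqrt(2.0) * x]
    for i in range(1, i_max):
        h.append(np.sqrt(2.0 / (i + 1)) * x * h[-1] - np.sqrt(i / (i + 1)) * h[-2])
    return 2.0 ** 0.25 * np.exp(-x ** 2 / 4.0) * np.stack(h)

x = np.linspace(-20.0, 20.0, 200_001)
dx = x[1] - x[0]
w = np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)   # N(0,1) density
psi = psi_features(x, i_max=40)

second = ((psi ** 2) * w).sum(axis=1) * dx   # ~= 1 for every i (sanity check)
fourth = ((psi ** 4) * w).sum(axis=1) * dx   # grows rapidly with i
print(second[[0, 10, 40]], fourth[[0, 10, 40]])
```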
§ BACKGROUND ON DOT-PRODUCT/ZONAL KERNELS

A kernel K is called a dot-product kernel if K(x,x') = h(x^⊤x') for some h:ℝ→ℝ which has a Taylor expansion of the form h(t) = ∑_{i=0}^∞ a_i t^i with a_i ≥ 0. Importantly, K depends only on x^⊤x'. With inputs uniformly distributed on 𝕊^{d-1}, this family of kernels includes the NTK, Laplace kernel, Gaussian (RBF) kernel, and polynomial kernel <cit.>. We emphasize that for an L-layer fully connected network f(x;θ), KRR with respect to the corresponding GPK 𝒦(x,x') = 𝔼_θ[f(x;θ)·f(x';θ)] (also called the Conjugate Kernel or NNGP kernel) is equivalent to training the final layer while keeping the weights of the other layers at their initial values <cit.>. Furthermore, KRR with respect to the NTK Θ(x,x') = 𝔼_θ[⟨∂f(x;θ)/∂θ, ∂f(x';θ)/∂θ⟩] is equivalent to training the entire network <cit.>.

Under a uniform distribution on 𝕊^{d-1}, the domain of h is [-1, 1], and for any d ≥ 3 dot-product kernels exhibit the Mercer decomposition

K(x,x') = ∑_{ℓ=0}^∞ (σ̂_ℓ/N(d,ℓ)) ∑_{m=1}^{N(d,ℓ)} Y_{ℓ,m}(x)Y_{ℓ,m}(x'),

where the eigenfunctions Y_{ℓ,m} are the m-th spherical harmonics of degree (or frequency) ℓ, N(d,ℓ) = ((2ℓ+d-2)/ℓ)·binom(ℓ+d-3, d-2) is the number of harmonics of each degree, and σ_ℓ := σ̂_ℓ/N(d,ℓ) are the eigenvalues <cit.>. Each spherical harmonic can be defined via restrictions of homogeneous polynomials to the unit sphere, with the degree (or frequency) of the spherical harmonic corresponding to the degree of said polynomials. When d ≫ ℓ, N(d,ℓ) = Θ_d(d^ℓ), and when ℓ ≫ d, N(d,ℓ) = Θ_ℓ(ℓ^{d-2}). Importantly, all spherical harmonics Y_{ℓ,m} of the same degree ℓ share the same eigenvalue σ_ℓ, and as a result, there are many repeated eigenvalues. For background on spherical harmonics, see <cit.>.

In order to write the kernel as (<ref>), we can order ϕ in the natural way, by first taking ϕ̃(x) = (√σ_0 Y_{0,1}, √σ_1 Y_{1,1}, …, √σ_1 Y_{1,N(d,1)}, √σ_2 Y_{2,1}, …), and letting ϕ be the same as ϕ̃ with zero-valued indices removed (where σ_ℓ = 0). We let ψ be given accordingly. We note that ψ_1 = Y_{0,1} is a constant function.

The famous addition theorem <cit.>[1.2.8 and 1.2.9] implies that for any d ≥ 3, x∈𝕊^{d-1} and ℓ ≥ 0,

∑_{m=1}^{N(d,ℓ)} Y_{ℓ,m}(x)Y_{ℓ,m}(x) = N(d,ℓ).

For any ℓ∈ℕ, let N(d,≤ℓ) = ∑_{j=1}^{ℓ} N(d,j). The addition theorem (<ref>) in particular implies that the eigenfunctions ψ_i are highly correlated, and definitely not i.i.d. Importantly, (<ref>) implies that for any ℓ∈ℕ and k := N(d,≤ℓ), it holds that β_k = α_k = 1. Furthermore, for any k∈ℕ, let ℓ_k = max{ℓ∈ℕ∪{0} : N(d,≤ℓ) ≤ k}, so that N(d,≤ℓ_k) ≤ k ≤ N(d,≤ℓ_k+1). If momentarily we consider the case where σ̂_ℓ ≠ 0 for all ℓ, then from (<ref>) it holds that for any x∈𝕊^{d-1},

Θ_k(1) = N(d,≤ℓ_k)/N(d,≤ℓ_k+1) ≤ ‖ψ_{≤k}(x)‖²/k ≤ N(d,≤ℓ_k+1)/N(d,≤ℓ_k) = Θ_k(1),

implying that ‖ψ_{≤k}(x)‖²/k = Θ_k(1). A similar argument yields

1 - σ̂_{ℓ_k}/∑_{ℓ=ℓ_k}^∞σ̂_ℓ = (∑_{ℓ=ℓ_k+1}^∞σ̂_ℓ)/(∑_{ℓ=ℓ_k}^∞σ̂_ℓ) ≤ ‖ϕ_{>k}(x)‖²/tr(Σ_{>k}) ≤ (∑_{ℓ=ℓ_k}^∞σ̂_ℓ)/(∑_{ℓ=ℓ_k+1}^∞σ̂_ℓ) ≤ 1 + σ̂_{ℓ_k}/∑_{ℓ=ℓ_k+1}^∞σ̂_ℓ,

which, analogously to lem:int_dim, will typically be Θ_k(1) if the decay of σ̂ is at most exponential (but may be slower). This is the case for common kernels such as the NTK, Laplace, and RBF, and for such kernels we obtain α_k, β_k = Θ_k(1).

§ EXAMPLES OF KERNELS THAT FIT OUR FRAMEWORK

Here, we provide some simple examples of kernels that fit our framework, namely, for which β_k and possibly α_k (as defined in def:eigen_combined) can be bounded. First, note that for each of the terms in def:eigen_combined, the denominator is the expected value of the numerator, so α_k and β_k quantify how much the features behave as they are "supposed to". Since the infimum is at most the expectation, which is at most the supremum, one always has 0 ≤ α_k ≤ 1 ≤ β_k. A control on β_k is usually easier to obtain than one on α_k. Nevertheless, bounding α_k may be made easier by Remark <ref>. We also mention that bounds on α_k, β_k in one domain can often be extended to others; see section:high-dim for details.

* Dot-Product Kernels on 𝕊^{d-1}: A complete treatment of such kernels is given in appendix:dot-product.
* Kernels With Bounded Eigenfunctions: If ψ_i²(x) ≤ M for every i, then it trivially holds that β_k ≤ M for any k∈ℕ. Analogously, if ψ_i²(x) ≥ M', then α_k ≥ M'. This may be weakened to a high-probability lower bound (see Remark <ref>).
* RBF and shift-invariant kernels on X⊆ℝ^d: The features ϕ_i for an RBF kernel on X⊆ℝ^d with nonempty interior (i.e., X^∘ ≠ ∅) are given by <cit.>[Theorem 3.7]. If for simplicity X ⊆ [-1, 1], then the ϕ_i are bounded, implying that the ψ_i are also bounded. Hence, by the previous item, β_k = O_{k,n}(1). A simple and easy-to-understand construction of the Mercer decomposition for general shift-invariant kernels on [0,1] is provided in <cit.>.
* Kernels on the Hypercube {-1, 1}^d: With a uniform distribution, the hypercube has a Fourier decomposition given by monomials <cit.>. As a result, for kernels of the form K(x,x') = h(⟨x,x'⟩/d, ‖x‖²/d, ‖x'‖²/d) for some h:ℝ³→ℝ, the eigenfunctions ψ_i are given by monomials <cit.>. In particular, for any i, ψ_i² ≡ 1, and thus α_k = β_k = 1 for any k.

§ COMPUTATION OF KERNELS IN EXPERIMENTS

We plot the variance for a 3-layer fully connected NTK and a polynomial kernel in fig:multiple_descent, and for a 3-layer fully connected GPK in fig:low_dimensional. Background on the NTK and GPK is given in appendix:dot-product; however, we note here that there is a closed form for the expectations <cit.>, which we used when computing the figures. First, let

κ_0(u) := (1/π)(π - arccos(u)), κ_1(u) := (1/π)(u(π - arccos(u)) + √(1 - u²)).

The L-layer GPK on 𝕊^{d-1} is equal to

K_GPK^{(L)}(x,x') := κ_1(K_GPK^{(L-1)}(x,x')), K^{(0)}(x,x') := x^⊤x',

and the L-layer NTK on 𝕊^{d-1} is

Θ^{(L)}(x,x') := Θ^{(L-1)}(x,x')·κ_0(K_GPK^{(L-1)}(x,x')) + K_GPK^{(L)}(x,x').
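These recursions translate directly into code. A minimal NumPy sketch follows (our own; we take Θ^{(0)} = K^{(0)} as the base case, which the text leaves implicit):

```python
import numpy as np

def kappa0(u):
    return (np.pi - np.arccos(u)) / np.pi

def kappa1(u):
    return (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u ** 2)) / np.pi

def gpk_and_ntk(gram, n_layers):
    """L-layer GPK and NTK on the sphere; `gram` holds inner products x^T x'."""
    k = np.clip(gram, -1.0, 1.0)   # guard arccos against rounding error
    theta = k                      # assumed base case Theta^(0) = K^(0)
    for _ in range(n_layers):
        k_next = kappa1(k)
        theta = theta * kappa0(k) + k_next
        k = k_next
    return k, theta

# Example: 3-layer kernels between a few random points on S^{d-1}.
x = np.random.default_rng(0).standard_normal((5, 64))
x /= np.linalg.norm(x, axis=1, keepdims=True)
gpk, ntk = gpk_and_ntk(x @ x.T, n_layers=3)
```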
http://arxiv.org/abs/2312.15995v1
{ "authors": [ "Daniel Barzilai", "Ohad Shamir" ], "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "primary_category": "cs.LG", "published": "20231226105520", "title": "Generalization in Kernel Regression Under Realistic Assumptions" }
Learning Deformable Hypothesis Sampling for Accurate PatchMatch Multi-View Stereo

Hongjie Li, Yao Guo, Xianwei Zheng, Hanjiang Xiong

=========================================================================

This paper introduces a learnable Deformable Hypothesis Sampler (DeformSampler) to address the challenging issue of noisy depth estimation for accurate PatchMatch Multi-View Stereo (MVS). We observe that the heuristic depth hypothesis sampling modes employed by PatchMatch MVS solvers are insensitive to (1) the piece-wise smooth distribution of depths across the object surface, and (2) the implicit multi-modal distribution of depth prediction probabilities along the ray direction on the surface points. Accordingly, we develop DeformSampler to learn distribution-sensitive sample spaces to (1) propagate depths consistent with the scene's geometry across the object surface, and (2) fit a Laplace Mixture model that approaches the point-wise probability distribution of the actual depths along the ray direction. We integrate DeformSampler into a learnable PatchMatch MVS system to enhance depth estimation in challenging areas, such as piece-wise discontinuous surface boundaries and weakly-textured regions. Experimental results on the DTU and Tanks & Temples datasets demonstrate its superior performance and generalization capabilities compared to state-of-the-art competitors. Code is available at <https://github.com/Geo-Tell/DS-PMNet>.
§ INTRODUCTION

Multi-View Stereo (MVS) aims to reconstruct dense 3D scene geometry from image sequences with known cameras, and it has been widely used in robot perception, 3D reconstruction, and virtual reality. MVS is typically treated as a dense correspondence search problem <cit.>, but many traditional methods have difficulty achieving reliable matching within low-texture, specular, and reflective regions. Learning-based MVS has recently attracted interest for solving this problem by introducing global semantic information for robust matching <cit.>. Despite this progress, these methods still struggle to bridge the gap between accuracy and efficiency.

Learning-based methods commonly involve building a 3D cost volume, followed by regularization with a 3D CNN for depth regression <cit.>. Consequently, the 3D forms of both the cost volume and the CNN are inevitably restricted by limited computational resources. To overcome these limitations, many efforts have been made to reduce the cost volume size <cit.> and to modify the regularization techniques <cit.>. Recently, a promising alternative has emerged that forgoes the common learning paradigm and reshapes traditional PatchMatch MVS into an end-to-end trainable framework, such as PatchMatchNet <cit.> and PatchMatch-RL <cit.>. These methods follow the idea of patch-based searching and achieve improved results in efficiency and quality. However, we observe that they only transform the traditional pipeline into a trainable one, without adequately considering the implicit depth distribution within scenes for guiding depth hypothesis sampling during depth propagation and perturbation. This flaw directly degrades the depth estimation performance, as shown in Figure <ref>(d). Although PatchMatchNet introduces variability into sampling with CNNs, it remains insensitive to the underlying depth distribution. This hampers the sampling of optimal hypotheses, thereby placing an additional burden on the subsequent learning modules. Therefore, our study raises two crucial questions for hypothesis sampling: (i) What implicit depth distributions should be learned? (ii) How should the learned distributions be leveraged to guide hypothesis sampling?

At the propagation stage, hypotheses of neighboring pixels are sampled to generate a collection for enhancing each pixel's hypothesis space. Due to the regularity of real-world scenes, the depth map contains an implicit piece-wise smooth depth distribution. In other words, the depth distribution tends to be smooth within coherent surfaces but can shift abruptly between distinct objects or scene elements. However, a preset sampling template handles this implicit distribution poorly <cit.>. Such ill-suited hypothesis sampling introduces considerable noise (hypotheses with large depth differences) into the collection, causing unstable hypothesis evaluation, as revealed by Figure <ref>(b). Thus, a well-designed sampling scheme is required to select hypotheses from pixels that align more closely with the object's surface.

At the perturbation stage, fine-grained hypotheses over the scene depth range are expected to be sampled for refining the previously estimated depths. The optimization at this stage has received little attention in recent works. Gipuma <cit.> employs a bisection strategy to refine sampling, while COLMAP <cit.> combines randomly perturbed samples with previous results in various ways.
These methods do not account for the uncertainty inherent in previous estimates, leading to redundant and coarse sampling. In this work, we intend to utilize this uncertainty to adaptively adjust the range of perturbations, rather than sampling uniformly for each pixel. In other words, for pixels with high confidence, the sampling should focus on hypotheses closely distributed around the previous estimates. Conversely, for pixels with significant uncertainty, the sampling should encompass more dispersed hypotheses to provide a higher likelihood of correcting the estimates. In fact, we find that the cost distribution induced by the previous sampling hypotheses offers a good representation of this uncertainty. However, due to varying imaging conditions, such as lighting, viewpoint, and other factors, this distribution often possesses multi-modal characteristics, as illustrated in Figure <ref>(c). This means that there is no single distinct peak representing the lowest cost, so even the true hypothesis may fail to yield the lowest cost.

Based on the discussion above, we develop a learnable Deformable Hypothesis Sampler (DeformSampler) to learn the implicit depth distributions and guide reliable sampling in a learning-based PatchMatch framework. DeformSampler enables each pixel to sample optimal hypotheses at the propagation and perturbation stages. Two modules are designed to drive this sampler: a plane indicator and a probabilistic matcher. The plane indicator encodes the intra-view feature consistency to learn the implicit depth distribution across the object surface. Using a Laplace Mixture model, the probabilistic matcher models the multi-modal distribution of depth prediction probabilities along the ray direction. By integrating this sampler into a learning-based PatchMatch framework, we achieve excellent depth estimation performance, especially at challenging piece-wise discontinuous surface boundaries and in weakly-textured regions. Our method also shows strong generalization ability in both outdoor and indoor scenes, as shown in Figure <ref>(d).

In summary, our contributions are as follows:

* We develop a learnable PatchMatch-based MVS network (DS-PMNet) embedded with DeformSampler to learn implicit depth distributions for guiding deformable hypothesis sampling.
* A plane indicator is designed to capture the piece-wise smooth depth distribution for structure-aware depth propagation.
* A probability matcher is designed to model the multi-modal distribution of depth prediction probabilities for uncertainty-aware perturbation.

§ RELATED WORKS

§.§ Traditional MVS

Traditional MVS methods mainly rely on 3D representations, such as voxels, level-sets, polygon meshes, and depth maps <cit.>, for dense scene geometry reconstruction. Among them, depth map-based methods usually achieve more robust performance for large-scale dense 3D scene recovery by treating MVS as a dense correspondence search problem. In this line of research, PatchMatch MVS is a milestone: it replaces the costly dense point-based search with an efficient patch-based search via a random and iterative strategy. Later, the propagation scheme of PatchMatch MVS was optimized for higher efficiency in works like Gipuma <cit.> and COLMAP <cit.>. Recently, some methods have improved the performance of PatchMatch MVS in terms of propagation and evaluation for accurate depth estimation. For propagation, ACMH <cit.> adopted an Adaptive Checkerboard Sampling scheme to prioritize good hypotheses.
HPM-MVS <cit.> enlarged the local propagation region by introducing a Non-local Extensible Sampling Pattern. While these methods introduce some sensitivity to scene structure, they still struggle to capture it explicitly. To improve hypothesis evaluation in weakly-textured regions, approaches such as multiscale evaluation strategies, planar priors <cit.>, and deformable evaluation regions <cit.> have been adopted. In this work, we develop the DeformSampler to effectively impose awareness of scene structures and learn the implicit depth distribution from the current viewpoint for improved hypothesis propagation.

§.§ Learning-based MVS

While traditional solutions perform well in ideal Lambertian scenes, learning-based methods offer better semantic insight and stronger robustness in challenging scenarios. Most learning-based methods were built on MVSNet's foundation. They use warped multi-view features to create cost volumes and adopt 3D CNNs for regularization; the depths are finally predicted via regression. Recent works aim to enhance the quality of 3D cost volumes, reduce their size, and refine the regularization techniques. To improve quality, techniques like attention mechanisms <cit.>, epipolar-assembling kernels <cit.>, and pixel-wise visibility computation modules <cit.> were utilized. For computational efficiency, a common approach is a coarse-to-fine strategy <cit.>, which involves multi-stage hypothesis sampling. Some variants, like UCS-Net <cit.> and IS-MVSNet <cit.>, adaptively adjust sampling by incorporating uncertainty from depth estimates at earlier stages. For regularization, several studies adopted hybrid 3D U-Nets <cit.> or RNN-CNN architectures <cit.>. In our work, the probabilistic matcher within the DeformSampler employs the same coarse-to-fine strategy but utilizes a more powerful modeling approach to capture the implicit multi-modal distribution of depth prediction probabilities, guiding fine-grained sampling.

§.§ Learning-based PatchMatch MVS

Recent advancements have integrated the idea of PatchMatch MVS into end-to-end training frameworks, such as PatchMatchNet <cit.> and PatchMatch-RL <cit.>, which have partially bridged the gap between quality and efficiency for learning-based MVS. PatchMatchNet incorporated adaptive propagation and evaluation strategies to achieve efficient depth estimation. PatchMatch-RL argued that traditional methods perform better than learning-based MVS in wide-baseline scenes due to their joint optimization over pixel-wise depth, normals, and visibility estimates. In their follow-up work <cit.>, they further considered optimization over many high-resolution views and the use of geometric consistency constraints. Our learnable DS-PMNet is also built upon PatchMatch MVS but addresses the issue of ill-suited hypothesis sampling. The core module, DeformSampler, provides more reliable guidance for propagation and perturbation, leading to a significant performance improvement.

§ METHOD

Figure <ref> shows the entire pipeline of our method. In the following subsections, we first provide an overview of the end-to-end learnable PatchMatch MVS embedded with the DeformSampler (DS-PMNet). Then, we discuss how the two modules (the Plane Indicator 𝒫_θ and the Probability Matcher ℳ_θ) learn the implicit depth distribution to drive the sampler for deformable depth sampling.
§.§ Overview

In the PatchMatch MVS paradigm, each image is sequentially used as a reference image I^r, while the remaining images serve as source images {I_i^s}_{i=1}^{N-1} to assist in estimating the depth map of I^r. The estimation process involves stages of initialization, propagation, evaluation, and perturbation, with the latter three stages iterating multiple times. In this work, we perform optimization at four different feature scales, with only one iteration per scale. The detailed DS-PMNet framework is presented in Algorithm 1 of the supplementary material.

We first extract a feature pyramid Ψ = {φ_ℓ}_{ℓ=1}^L for each input image to capture low-level details and high-level contextual information:

{φ^r_ℓ}_{ℓ=1}^L, {φ^s_ℓ}_{ℓ=1}^L = F_θ(I^r, I^s),

where F_θ is an encoder and ℓ∈{1,…,L} indexes the multi-scale features. In this work, the feature pyramid is constructed with four scales, denoted as L=4, corresponding to 1/8, 1/4, 1/2, and 1 of the original image size. To avoid confusion, we only describe the four stages of one iteration in the following; i.e., the subscript ℓ is discarded.

In stage 1, we randomly initialize a depth map D^0 for I^r. The known depth range is first divided into m_0 intervals in the inverse depth space. Then, we randomly sample a depth candidate for each pixel in each interval. This means that each pixel is assigned m_0 candidates {d_j}_{j=1}^{m_0}, which ensures that true hypotheses can be propagated quickly within a limited number of iterations (a minimal code sketch of this step is given right after this overview).

In stage 2, the plane indicator 𝒫_θ guides structure-aware hypothesis propagation by capturing the implicit piece-wise smooth depth distribution of the object's surface. 𝒫_θ encodes the intra-view feature consistency to estimate a plane flow field ℱ for I^r. For each pixel, ℱ provides m_1 neighboring coplanar points to sample hypotheses from, resulting in a reliable hypothesis collection {d_j}_{j=1}^{m_0+m_1}.

In stage 3, the probabilistic matcher ℳ_θ enhances the evaluation of the depth candidates in {d_j}_{j=1}^{m_0+m_1} by modeling the implicit multi-modal distribution of depth prediction probabilities, and it outputs the prediction uncertainty to guide the subsequent perturbation. ℳ_θ first generates a multi-view cost volume 𝒮 = {S_i}_{i=1}^{N-1}, where each element S_i encodes a matching cost introduced by the depth-induced homography matrix set {H_j}_{j=1}^{m_0+m_1} between φ^r and φ^s_i. Then, for each pixel in I^r, the parameter set of the Laplace Mixture distribution {ψ_i}_{i=1}^{N-1} is decoded from 𝒮 to predict the depth map D and the corresponding uncertainty map set 𝒰 = {U_i}_{i=1}^{N-1}.

In stage 4, the inferred Laplace Mixture distribution is used to guide the uncertainty-aware perturbation, and a fine-grained hypothesis collection {d_j}_{j=1}^{m_2} is sampled. This collection is then fed back into stage 2, with m_0 ← m_2.
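The stage-1 initialization referenced above is straightforward to sketch. The following minimal NumPy example (function name, shapes, and the DTU-like depth range are our own illustrative choices, not taken from the authors' code) draws one random candidate per inverse-depth bin for every pixel:

```python
import numpy as np

def init_inverse_depth_hypotheses(d_min, d_max, m0, height, width, rng=None):
    """Stratified random init: split [1/d_max, 1/d_min] into m0 equal bins in
    inverse-depth space and draw one random sample per bin for every pixel."""
    rng = np.random.default_rng() if rng is None else rng
    edges = np.linspace(1.0 / d_max, 1.0 / d_min, m0 + 1)   # (m0+1,) bin edges
    lo, hi = edges[:-1], edges[1:]                          # per-bin bounds
    u = rng.random((m0, height, width))                     # uniform in [0, 1)
    inv_depth = lo[:, None, None] + u * (hi - lo)[:, None, None]
    return 1.0 / inv_depth                                  # (m0, H, W) depths

# Example: 8 hypotheses per pixel over a 425-935 depth range (DTU-like scale).
hyps = init_inverse_depth_hypotheses(425.0, 935.0, m0=8, height=4, width=5)
```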
Utilizing this ℱ, each pixel is guided to sample reliable depth hypotheses from m_1(m_1≤ M)neighboring points, as shown in Figure <ref>(b). In general, our 𝒫_θ consists of two components: an intra-view correlation pyramid C={𝒞_ℓ}_ℓ=1^L-1 and a plane flow decoder 𝒟_θ in Figure <ref>. §.§.§ Intra-view Correlation Pyramid ConstructionEach 𝒞_ℓ is generated by calculating the dot product between every pixel in the ℓ_th feature map and all the pixels within its designated neighborhood. The search radius R_1 determines the neighborhood range. Specifically, given the feature map φ_ℓ ^r, each element c_ℓ(p,η) in 𝒞_ℓ∈ℝ^H_ℓ× W_ℓ× R_1 is defined as c_ℓ ( p,η )=1/√(h_ℓ)⟨φ_ℓ^ℛ[p],φ_ℓ^ℛ[p+η]⟩,η_∞≤ R_1,whereh_ℓ represents the channel number of ℓ_th feature map, p∈ℝ^2 is a coordinate on the feature image and η is the offset from this location. The offset is constrained to η_∞≤ R_1. The symbol [ · ] is used to extract the features at a specific coordinate from the feature map. Each level's search radius R_1 remains fixed. Therefore, the radius covers the largest feature map area at the top level, gradually reducing with each subsequent level.§.§.§ Plane Flow DecoderThis decoder 𝒟_θ is designed to infer a plane flow field ℱ_ℓ∈ℝ^H_ℓ× W_ℓ× 2M progressively from the pyramid, which achieves a refined field ℱ at last. Inspired by <cit.>, the decoder incorporates four Conv-BN-LeayReLU (CBL) blocks and one predictor, as shown in Figure <ref>.Dense connections are added among the four blocks to enhance information exchange. Here, a slight adjustment is made in the predictor when inputting elements from different pyramid levels. At level ℓ=1, the predictor gives a coarse plane flow field ℱ_1∈ℝ^H_1× W_1× 2M. At subsequent levels, the predictor generates a residual ℱ̃_ℓ∈ℝ^H_ℓ× W_ℓ× 2 to refine the coarse field further, i.e.,ℱ_ℓ [ p ]= γ·ℱ_1^↑ [ p ]+ ℱ̃_ℓ [ γ·ℱ_1^↑ [ p ]+ p⊗1_M ],where p∈ℝ^2 is the coordinate on the field, γ is the up-sampling factor, 1_M is a M×1 identity matrix. The symbols ↑, [ · ], and ⊗ represent the up-sampling, fetch operation, and Kronecker product operation, respectively. §.§ Probabilistic Matcher for Deformable PerturbationThe probability matcher ℳ_θ is designed to model the multi-modal distribution of the depth prediction probabilities for guiding the fine-grained sampling during perturbation. We adopt a Laplace Mixture model containing two components (K=2) to tackle this multi-peak issue, i.e.,p ( y|ψ ) =α _11/√(2)σ_1e^√(2)/σ_1 | y-μ |+α _21/√(2)σ_2e^√(2)/σ_2 | y-μ |,where ψ={μ,α _1, α _2,σ_1,σ_2} is the distribution parameter set to be estimated, α _1+α _2=1, y is the depth of a specific pixel. The mean μ of both components is set to be the same to ensure only one peak exists. Additionally, we achieve this by setting σ_1 as a constant for the former to represent the most accurate depth prediction and imposing the constraint σ_2 > σ_1>0 for the latter to model larger errors. Figure <ref> visualizes an example of pixel-wise distribution parameters. §.§.§ Probabilistic MatcherMatcher ℳ_θ takes the inter-view cost pyramid as input, where each level of the pyramid contains a multi-view cost volume𝒮={S_i}_i=1^N-1 introduced by the source images.For each level of the pyramid, the matcher predicts the distribution parameter set {ψ_i}_i=1^N-1 for each pixel in the reference image, representing the matching uncertainty between the reference and different source images. The detailed structure can be seen in Figure <ref>. This matcher contains two branches. 
§.§ Probabilistic Matcher for Deformable Perturbation

The probability matcher ℳ_θ is designed to model the multi-modal distribution of depth prediction probabilities for guiding the fine-grained sampling during perturbation. We adopt a Laplace Mixture model containing two components (K=2) to tackle this multi-peak issue, i.e.,

p(y|ψ) = α_1·(1/(√2 σ_1))e^{-√2|y-μ|/σ_1} + α_2·(1/(√2 σ_2))e^{-√2|y-μ|/σ_2},

where ψ = {μ, α_1, α_2, σ_1, σ_2} is the distribution parameter set to be estimated, α_1 + α_2 = 1, and y is the depth of a specific pixel. The mean μ of both components is set to be the same to ensure that only one peak exists. Additionally, we set σ_1 to a constant so that the first component represents the most accurate depth predictions, and we impose the constraint σ_2 > σ_1 > 0 so that the second component models larger errors. Figure <ref> visualizes an example of pixel-wise distribution parameters.

§.§.§ Probabilistic Matcher

The matcher ℳ_θ takes the inter-view cost pyramid as input, where each level of the pyramid contains a multi-view cost volume 𝒮 = {S_i}_{i=1}^{N-1} introduced by the source images. For each level of the pyramid, the matcher predicts the distribution parameter set {ψ_i}_{i=1}^{N-1} for each pixel in the reference image, representing the matching uncertainty between the reference and the different source images. The detailed structure can be seen in Figure <ref>. The matcher contains two branches.

In the first branch, 𝒮 is encoded into an uncertainty embedding, which is then decoded into {σ_2^i, α_1^i, α_2^i}_{i=1}^{N-1} for each pixel. From the inferred parameters, an uncertainty map set 𝒰 = {U_i}_{i=1}^{N-1} between the reference and source images can be obtained by computing the depth prediction probability at each pixel, i.e.,

u_i = P(|y-μ| < R_2) = α_1^i[1 - e^{-√2 R_2}]² + α_2^i[1 - e^{-√2 R_2/σ_2^i}]²,

where u_i is the element at a specific coordinate of U_i, and R_2 is a hyper-parameter determining the acceptable deviation between the ground truth and the predicted depth μ. Utilizing these maps {U_i}_{i=1}^{N-1} as per-view weights, a unified cost volume can be obtained by integrating ∑_{i=1}^{N-1} U_i S_i. In the second branch, a 3D CNN-based regularizer processes the unified cost volume to estimate a weighted depth map D(μ) from the sampled depth hypotheses. More details can be found in the supplementary material.

§.§.§ Probabilistic Perturbation

σ_2 is adopted to guide hypothesis sampling because it represents high matching uncertainty. We first integrate the scales from different views into a unified value E(σ_2) = ∑_{i=1}^{N-1} u_i σ_2^i. Then, a sampling space is defined as [μ ± εE(σ_2)] for each pixel, where ε is a hyper-parameter. Next, we divide this range into m_2 bins, each containing an equal portion of probability mass. This ensures that pixels with low uncertainty sample candidates close to μ, while pixels with high uncertainty sample more dispersed candidates that offer a higher chance of rectifying μ. Subsequently, we sample the midpoint of each bin as a potential depth candidate. Thus, the j-th depth candidate is defined as

d_j = (1/2)[Φ(((j-1)/m_2)P̃ + P^*/2) + Φ((j/m_2)P̃ + P^*/2)],

where Φ(·) transforms a cumulative probability into the corresponding coordinate of the Laplace distribution, m_2 is the number of bins, P̃ is the probability mass covered by the range [μ ± εE(σ_2)], and P^* = 1 - P̃.
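Under the reading that Φ is the quantile function of a single Laplace with mean μ and standard deviation E(σ_2) (our interpretation of the text), the equal-mass perturbation rule can be sketched as follows; the function names and the numbers in the example are illustrative, not taken from the released code:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def laplace_icdf(q, mu, sigma):
    """Quantile function of a Laplace with mean mu and standard deviation sigma."""
    q = np.asarray(q, dtype=np.float64)
    return mu - (sigma / SQRT2) * np.sign(q - 0.5) * np.log1p(-2.0 * np.abs(q - 0.5))

def perturb_hypotheses(mu, sigma, eps, m2):
    """Equal-mass sampling of [mu +/- eps*sigma]: split its probability mass
    P_tilde into m2 bins and average the two bin-edge quantiles (the d_j rule)."""
    p_tilde = 1.0 - np.exp(-SQRT2 * eps)   # P(|y - mu| < eps*sigma) for a Laplace
    p_star = 1.0 - p_tilde
    j = np.arange(m2, dtype=np.float64)
    q_lo = p_star / 2.0 + (j / m2) * p_tilde
    q_hi = p_star / 2.0 + ((j + 1.0) / m2) * p_tilde
    return 0.5 * (laplace_icdf(q_lo, mu, sigma) + laplace_icdf(q_hi, mu, sigma))

# Example: 8 refined candidates around mu = 600 with aggregated scale 12, eps = 2.
cands = perturb_hypotheses(mu=600.0, sigma=12.0, eps=2.0, m2=8)
```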
§.§ Loss Function

We first compute the loss ℒ_depth = ∑_{ℓ=1}^L ‖D^gt - D_ℓ‖ between all predicted depth maps {D_ℓ}_{ℓ=1}^L and the ground truth D^gt. Then, a negative log-likelihood loss ℒ_NLL is adopted to supervise the fitted Laplace Mixture distribution, i.e.,

ℒ_NLL = -(1/(N-1))∑_{ℓ=1}^L ∑_{i=1}^{N-1} log p(D^gt|ψ_i).

Thus, the total loss ℒ_total is defined as ℒ_total = λ_1ℒ_depth + λ_2ℒ_NLL, where λ_1, λ_2 are weight factors.

§ EXPERIMENTS

§.§ Implementation Setup

§.§.§ Training and Testing

Following previous works like TransMVSNet <cit.>, we initially train our DS-PMNet on the DTU dataset <cit.> for the DTU evaluation. Then, we fine-tune the model on the BlendedMVS dataset <cit.> and test it on the Tanks and Temples benchmark <cit.>. For training, we use 6 DTU images cropped to 512×640 as input in each batch. Our model is trained for 16 epochs with the Adam optimizer, starting with a learning rate of 0.001 that is reduced by a factor of 0.2 at epochs 5, 9, and 13. To stabilize training against initial errors from random depth hypotheses, we set the initial learning rate of the probability matcher ℳ_θ to 10^{-5}. For fine-tuning on BlendedMVS, our model undergoes 10 epochs with an initial learning rate of 0.0002, using 6 input images at a resolution of 576×768. The batch size is set to two on an NVIDIA RTX 3090 for DTU and one for BlendedMVS. When evaluating on DTU, we use 6 input images at 1152×1600 resolution (N=6). For the Tanks and Temples dataset, N is set to 8, with images at 1024×1920 resolution. We report results in terms of the accuracy (Acc.), completeness (Comp.), and overall metrics for the DTU dataset, and we evaluate precision (Pre.), recall (Rec.), and F_1-score (F_1) for the Tanks and Temples benchmark.

§.§ Benchmark Performance

§.§.§ Results on DTU

We first evaluate our DS-PMNet on the DTU testing set, where the model is trained only on the DTU dataset. The quantitative results are reported in Table <ref>. Our DS-PMNet outperforms both traditional and learning-based methods in the overall metric, achieving the best score of 0.290. In terms of accuracy, Gipuma <cit.> achieves the best results, while our approach demonstrates state-of-the-art performance in completeness. Additionally, we compare our method with PatchMatchNet <cit.> and UniMVSNet <cit.> on the estimated depth maps. As shown in Figure <ref>, our method excels at recovering the depth of thin structures and object boundaries, where the estimated edges align better with object boundaries than those of other methods.

§.§.§ Results on Tanks and Temples

We further validate the generalization ability of our method on the challenging Tanks and Temples dataset. As shown in Table <ref>, our DS-PMNet achieves competitive precision and recall compared to the MVSNet variants. Specifically, our method ranks 1st and 3rd in F_1 score on the intermediate and advanced sets, respectively, outperforming most competing methods. Compared with existing learnable PatchMatch MVS methods, DS-PMNet performs better on all metrics. Figure <ref> provides a qualitative comparison of the reconstructed point clouds for the different methods, where our method shows enhanced precision and completeness. These results demonstrate the robustness and generalization ability of our method.

§.§ Ablation Studies

Ablation studies are first conducted to independently validate the two modules of the DeformSampler (i.e., the Plane Indicator 𝒫_θ and the Probability Matcher ℳ_θ). Then, a comprehensive analysis is made of various ε settings in the probability matcher to justify our choice of parameters.

Effectiveness of DeformSampler. We first establish a baseline by incorporating Gipuma's fixed sampling modes in the propagation and perturbation stages into our learnable PatchMatch solver. Then, we progressively replace the sampling modes with our proposed plane indicator and probabilistic matcher to validate the effectiveness of our DeformSampler. Quantitative results evaluated on DTU are reported in Table <ref>. The results show that the solver incorporating both modules achieves the highest accuracy and completeness. This demonstrates the capability of our method to effectively learn the underlying depth distributions and guide reliable hypothesis sampling during the propagation and perturbation stages. Additionally, our baseline outperforms PatchMatchNet in all metrics, demonstrating the superiority of our network design.

Parameter Sensitivity. During the perturbation process, the parameter ε controls the range of the perturbation, i.e., [μ-εσ, μ+εσ], and different perturbation ranges result in different sampling fineness. Therefore, we constrain the parameter to the subset {1,2,3} to verify the optimality of our setting. Due to the step-by-step refinement in each iteration, ε in subsequent iterations must be less than or equal to that of the previous one. Additionally, the first iteration does not involve the perturbation process. The results of the quantitative analysis on DTU are reported in Table <ref>.
The best performance is achieved when ε is set to {2, 2, 1}, followed by {3, 2, 1}. These differences primarily stem from the initial ε setting. If ε is too small (ε=1), potentially valid hypotheses are excluded, while if it is too large (ε=3), redundant noise hampers fine-grained sampling.

§ CONCLUSION

This paper presents a learnable DeformSampler that is embedded into a PatchMatch MVS framework to facilitate accurate depth estimation in complex scenarios. The proposed DeformSampler helps sample distribution-sensitive hypothesis spaces during propagation and perturbation. Extensive experiments conducted on several challenging MVS datasets show that DeformSampler effectively learns the piece-wise smooth depth distribution on the object surface for reliable depth propagation, while successfully capturing the multi-modal distribution of depth prediction probabilities to allow fine-grained hypothesis sampling. Comparisons with existing methods also demonstrate that our method achieves state-of-the-art performance on MVS benchmarks.

§ ACKNOWLEDGMENTS

This research was supported by NSFC projects under Grant 42071370, the Fundamental Research Funds for the Central Universities of China under Grant 2042022dx0001, and the Wuhan University-Huawei Geoinformatics Innovation Laboratory.
http://arxiv.org/abs/2312.15970v1
{ "authors": [ "Hongjie Li", "Yao Guo", "Xianwei Zheng", "Hanjiang Xiong" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226093621", "title": "Learning Deformable Hypothesis Sampling for Accurate PatchMatch Multi-View Stereo" }
http://arxiv.org/abs/2312.16324v1
{ "authors": [ "V. K. Oikonomou", "Gregory K. Kafanelis" ], "categories": [ "gr-qc", "astro-ph.CO" ], "primary_category": "gr-qc", "published": "20231226200856", "title": "Primordial Cosmology of an Emergent-like Universe from Modified Gravity: Reconstruction and Phenomenology Optimization with a Genetic Algorithm" }